Tag Archives: Google Maps

Navigating new routes, places and distance: Introducing Google Maps Platform to Dev Library

Posted by Swathi Dharshna Subbaraj, Project Coordinator, Google Dev Library

We are excited to announce that Google Maps Platform has now been officially added to the Dev Library! Continuous innovation and the integration of technology into our physical environment have become increasingly important, and one product, Google Maps, has played a critical role in shaping the future of the internet. With these resources, developers have created applications that visualize geospatial data, building projects ranging from hyperlocal logistics to location-driven app development.

By adding Google Maps Platform, Dev Library contributors will be better able to create innovative and useful applications that utilize Google’s mapping, places, and routing data and features. Developers now have access to even more resources that can help take their projects to the next level.

As Alex Muramoto, the Google Maps Platform curator for Dev Library, said, “We’re excited to see developers across tech stacks using Google Maps Platform to build and showcase their projects on Google Dev Library. We hope these projects will provide inspiration and guidance to help your own development efforts.”

Let's explore some contributions from Dev Library authors who have implemented Google Maps Platform APIs and SDKs into their applications.


Contributions in the spotlight:



Flutter Maps by Souvik Biswas

This app uses the Google Maps SDK and Directions API on the Flutter framework. It offers several location-based functionalities, including the ability to detect the user's current location.

It also utilizes Geocoding to convert addresses into coordinates and vice versa, and allows users to add markers to the map view. Moreover, it enables drawing routes between two places using Polylines and the Directions API, and calculates the actual distance of the route.

Learn more about Flutter Maps


How to integrate a customized Google Map in Flutter by Jaimil Patel

Learn how to use the Google Maps Flutter plugin to display a customized Google Maps view.

Explore key customization features like configuring the integration with Google Maps, adding a custom style to the map, and fetching the current location with the user's permission.

Learn more about the blog post

Customize the Google Map marker icon in Flutter by Lakshydeep Vikram

Discover how to customize a Google Maps marker icon by adding an image of your choice in Flutter in just a few steps: add the Google Maps Flutter plugin to the Flutter application, then use the GoogleMap widget provided by the plugin to display the map on the screen.

See how it's done

Google Dev Library is a platform for showcasing open-source projects and technical blogs featuring Google technologies. Join our global community of developers and showcase your Google Maps projects by submitting your content to the Dev Library.

Google Workspace Updates Weekly Recap – September 2, 2022

New updates 

Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers. 


Refining notifications on the Google Classroom app 
We’ve made the following improvement to Google Classroom notifications: 
In addition to setting your preference for Classroom push notifications, you can now tailor your email notifications from Android or iOS mobile devices. | Learn more.





The text for all push notifications has been updated and we’ve enhanced Classroom action options, such as “Join class” or “View comment.” With this update, Classroom push notifications are much clearer and more actionable. | Learn more. 

These features are available now to Education Fundamentals, Education Plus, Education Standard, and the Teaching and Learning Upgrade customers only. 


Insert Google Maps place chips into Google Docs
Last year, we added the ability for you to insert a Google Maps place chip into a Google Doc by pasting a Maps link directly into the document. Now, you can insert place chips into your Docs using the @ menu. | Roll out to Rapid Release began August 22, 2022; launch to Scheduled Release planned for September 8, 2022. | Learn more. 




Google Meet now automatically adjusts the volume of meeting participants 
Because meeting participants join using a variety of devices, volume levels can vary, with some participants sounding louder than others. Meet will adjust the audio of all participants, helping to ensure everyone is equally loud. To take advantage of this feature, make sure noise cancellation is turned on. We hope this makes for smoother meetings, with fewer disruptions. 


Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, the Teaching and Learning Upgrade, Frontline, and Individual customers 



Previous announcements 

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details. 



Dark Canvas theme now available on Google Meet hardware home screen 
We’re adding support for a dark home page theme for Google Meet hardware devices. When using Dark Canvas, devices will now feature dark user interface elements on the home screen when not in an active call. | Available for all supported Google Meet hardware devices that have not yet reached their auto-update expiration date. | Learn more.



Easily customize digital signage on your Google Meet hardware through Appspace 
We’re giving admins more options for customization by using their Appspace digital signage content. | Learn more. 



Insert emojis inline with text in Google Docs 
You can now express yourself in a new way by searching for and inserting emojis directly inline with your text in Google Docs. | Learn more. 



Work Insights reporting for Google Chat and Google Meet 
With the recent upgrade from Hangouts to Google Chat for Google Workspace customers, we’re pleased to introduce a Work Insights product for Meet and Chat. Work Insights gives you visibility into your organization’s digital transformation journey, and helps to improve collaboration, promote growth, and much more. | Learn more. | Available to Google Workspace Enterprise Plus customers 



Google Hangouts will be fully upgraded to Google Chat starting November 1, 2022 
As a final step in the migration, beginning November 1, 2022, Google Hangouts on web will redirect to Google Chat on web, and Hangouts will no longer be accessible. Admins will receive an email containing more information about this change, as well as changes in Vault and exporting Hangouts data. | Learn more. 


For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

Efficient Partitioning of Road Networks

Design techniques based on classical algorithms have proved useful for recent innovation on several large-scale problems, such as travel itineraries and routing challenges. For example, Dijkstra’s algorithm is often used to compute routes in graphs, but the computation quickly becomes expensive beyond the scale of a small town. The process of "partitioning" a road network, however, can greatly speed up algorithms by effectively shrinking how much of the graph is searched during computation.

In this post, we cover how we engineered a graph partitioning algorithm for road networks using ideas from classic algorithms, parts of which were presented in “Sketch-based Algorithms for Approximate Shortest Paths in Road Networks” at WWW 2021. Using random walks, a classical concept that is counterintuitively useful for computing shortest routes because it decreases the network size significantly, our algorithm can find a high quality partitioning of the whole road network of the North American continent nearly an order of magnitude faster1 than other partitioning algorithms with similar output quality.

Using Graphs to Model Road Networks
There is a well-known and useful correspondence between road networks and graphs, where intersections become nodes and roads become edges.

Image from Wikipedia

To understand how routing might benefit from partitioning, consider the most well-known solution for finding the fastest route: Dijkstra’s algorithm, which works in a best-first search manner, expanding outward from the source. The Dijkstra algorithm performs an exhaustive search starting from the source until it finds the destination. Because of this, as the distance between the source and the destination increases, the computation can become an order of magnitude slower. For example, it is faster to compute a route inside Seattle, WA than from Seattle, WA to San Francisco, CA. Moreover, even for intra-metro routes, the exhaustive volume of space explored by the Dijkstra algorithm during computation results in an impractical latency on the order of seconds. However, identifying regions that have more connections inside themselves, but fewer connections to the outside (such as Staten Island, NY) makes it possible to split the computation into multiple, smaller chunks.
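
To make the cost of that exhaustive search concrete, here is a minimal sketch of Dijkstra's algorithm; the adjacency-list graph encoding and function names are illustrative, not the production implementation:

```python
import heapq

def dijkstra(graph, source, destination):
    """Minimal Dijkstra's algorithm.

    `graph` maps each node to a list of (neighbor, travel_time) pairs.
    Returns the shortest travel time from source to destination.
    """
    best = {source: 0.0}            # best known time to each node
    frontier = [(0.0, source)]      # priority queue of (time, node)
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == destination:
            return cost             # first pop of the destination is optimal
        if cost > best.get(node, float("inf")):
            continue                # stale queue entry, skip it
        for neighbor, travel_time in graph[node]:
            new_cost = cost + travel_time
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, neighbor))
    return float("inf")             # destination unreachable
```

The search frontier grows roughly with the area between source and destination, which is why long routes are so much more expensive, and why restricting the search to a small set of beacons pays off.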

Top: A routing problem around Staten Island, NY. Bottom: Corresponding partitioning as a graph. Blue nodes indicate the only entrances to/exits from Staten Island.

Consider driving from point A to point B in the above image. Once one decides where to enter Staten Island (Outerbridge or Goethals) and where to exit (Verrazzano), the problem can be broken into three smaller driving problems: to the entrance, from the entrance to the exit, and from the exit to the destination, each using the best route available. That means a routing algorithm only needs to consider these special points (beacons) to navigate between points A and B, and can thus find the accurate shortest path faster.

Note that beacons are only useful as long as there are not too many of them—the fewer beacons there are, the fewer shortcuts need to be added, the smaller the search space, and the faster the computation—so a good partitioning should have relatively few beacons for the number of components (i.e., particular areas of the road network).

As the example of Staten Island illustrates, real-life road networks have many natural beacons (special points such as bridges, tunnels, or mountain passes) because some areas are very well-connected (e.g., with large grids of streets) while others are poorly connected (e.g., an island only accessible via a couple of bridges). The question becomes how to efficiently define the components and identify the smallest number of beacons that connect the road network.

Our Partitioning Algorithm
Because each connection between two components is a potential beacon, the approach we take to ensure there are not too many beacons is to divide the road network in a way that minimizes the number of connections between components.

To do this, we start by dividing the network into two balanced (i.e., of similar size) components while also minimizing the number of roads that connect those two components, which results in an effectively small ratio of beacons to roads in each component. Then, the algorithm keeps dividing the network into two at a time until all the components reach the desired size, in terms of the number of roads inside, that yields a useful multi-component partition. There is a careful balance here: If the size is too small, we will get too many beacons; whereas if it is too large, then it will be useful only for long routes. Therefore the size is left as an input parameter and found through experimentation when the algorithm is being finalized.
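
In code, the overall recursion might look like the following sketch, where the graph API and the `bisect` routine (standing in for the balanced two-way cut described next) are illustrative assumptions:

```python
def partition(graph, max_size, bisect):
    """Recursively split `graph` into components with at most `max_size` roads.

    `bisect` is any routine that splits a graph into two balanced halves while
    minimizing the number of cut edges (here, the inertial-flow based cut).
    Returns the list of final components.
    """
    if graph.num_edges() <= max_size:   # small enough: stop dividing
        return [graph]
    left, right = bisect(graph)         # balanced two-way cut
    return partition(left, max_size, bisect) + partition(right, max_size, bisect)
```

The `max_size` argument is exactly the experimentally tuned component size discussed above.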

While there are numerous partitioning schemes, such as METIS (for general networks) and PUNCH and inertial flow (both optimized for road-network-like graphs), our solution is based on the inertial-flow algorithm, augmented to run as efficiently on whole continents as it does on cities.

Balanced Partitioning for Road Networks
How does one divide a road network represented as a graph into two balanced components, as mentioned above? A first step is to make the graph smaller by grouping closely connected nodes together, which allows us to speed up the subsequent two-way partitioning phase. This is where a random walk is useful.

Random walks enjoy many useful theoretical properties—which is why they have been used to study a range of topics from the motion of mosquitoes in a forest to heat diffusion—and the one most relevant for our application is that they tend to get “trapped” in regions that are well connected inside but poorly connected outside. Consider a random walk on the streets of Staten Island for a fixed number of steps: because relatively few roads exit the island, most of the steps happen inside the island, and the probability of stepping outside the island is low.

Illustration of a random walk. Suppose the blue graph is a hypothetical road network corresponding to Staten Island. 50 random walks are performed, all starting at the middle point. Each random walk continues for 10 steps or until it steps out of the island. The numbers at each node depict how many times they were visited by a random walk. By the end, any node inside the island is visited much more frequently than the nodes outside.
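
The experiment in the illustration is simple enough to sketch directly; assuming an adjacency-list graph, a hypothetical helper might look like this:

```python
import random
from collections import Counter

def random_walk_visits(graph, start, num_walks=50, num_steps=10, seed=0):
    """Count node visits over repeated random walks from `start`.

    `graph` maps each node to a list of its neighbors. Nodes in a region that
    is well connected internally but poorly connected to the outside (like the
    island in the illustration) accumulate far more visits than outside nodes.
    """
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(num_steps):
            node = rng.choice(graph[node])  # step to a random neighbor
            visits[node] += 1
    return visits
```

Thresholding these visit counts is one simple way to extract the well-connected group around the starting point.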

After finding these small components, which will be highly connected nodes grouped together (such as Staten Island in the above example), the algorithm contracts each group into a new, single node.

Reducing the size of the original graph (left) by finding groups of nodes (middle) and coalescing each group into a single “super” node (right). Example here chosen manually to better illustrate the rest of the algorithm.
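
Coalescing each group into a single node is a standard graph contraction; here is a minimal sketch, assuming the groups are given as a node-to-group mapping:

```python
from collections import defaultdict

def contract(graph, group_of):
    """Contract each group of nodes into a single 'super' node.

    `graph` maps node -> iterable of neighbors; `group_of` maps node -> group id.
    Edges inside a group disappear; edges between groups become edges between
    the corresponding super nodes. Parallel edges are merged for simplicity,
    though keeping their multiplicity is what preserves exact cut sizes.
    """
    contracted = defaultdict(set)
    for node, neighbors in graph.items():
        for neighbor in neighbors:
            if group_of[node] != group_of[neighbor]:
                contracted[group_of[node]].add(group_of[neighbor])
    return contracted
```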

The final steps of the algorithm are to partition this much smaller graph into two parts and then to refine that partitioning into one on the original graph of the road network. To partition the smaller graph, we use the inertial flow algorithm to find the cut that minimizes the ratio of beacons (i.e., edges being cut) to nodes.

The algorithm evaluates different directions. For each direction, we find the division that minimizes the number of edges cut (i.e., beacons) between the first and last 10% of the nodes.
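
A simplified sketch of that direction sweep follows, using networkx for the max-flow min-cut computation; the coordinate encoding, the sentinel SOURCE/SINK nodes, and the fixed 10% fraction are illustrative assumptions, and the real algorithm also varies the split point rather than fixing it:

```python
import networkx as nx

def inertial_flow_cut(coords, edges, directions, fraction=0.1):
    """Simplified inertial-flow style cut.

    `coords` maps node -> (x, y); `edges` is a list of (u, v) pairs;
    `directions` is a list of 2D direction vectors to try. For each direction,
    the first `fraction` of nodes by projection are merged into a source and
    the last `fraction` into a sink, then a minimum s-t cut is computed.
    Returns the best (cut_value, (side_a, side_b)) over all directions.
    """
    best_value, best_partition = float("inf"), None
    source, sink = "SOURCE", "SINK"  # sentinels, assumed distinct from real nodes
    for dx, dy in directions:
        ordered = sorted(coords, key=lambda n: coords[n][0] * dx + coords[n][1] * dy)
        k = max(1, int(len(ordered) * fraction))
        graph = nx.Graph()
        graph.add_edges_from(edges, capacity=1)  # each cut road edge costs 1
        for n in ordered[:k]:                    # pin the extreme nodes
            graph.add_edge(source, n, capacity=float("inf"))
        for n in ordered[-k:]:
            graph.add_edge(n, sink, capacity=float("inf"))
        value, partition = nx.minimum_cut(graph, source, sink)
        if value < best_value:
            best_value, best_partition = value, partition
    return best_value, best_partition
```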

Having found a cut on the small graph, the algorithm performs a refinement step to project the cut back to the original graph of the road network.

Conclusion
This work shows how classical algorithms offer many useful tools for solving problems at large scale. Graph partitioning can be used to break down a large scale graph problem into smaller subproblems to be solved independently and in parallel—which is particularly relevant in Google Maps, where this partitioning algorithm is used to efficiently compute routes.

Acknowledgements
We thank our collaborators Lisa Fawcett, Sreenivas Gollapudi, Kostas Kollias, Ravi Kumar, Andrew Tomkins, Ameya Velingker from Google Research and Pablo Beltran, Geoff Hulten, Steve Jackson, Du Nguyen from Google Maps.


1This technique can also be used for any network structure, such as that for brain neurons. 

Source: Google AI Blog


Find detailed information on vaccination availability near you

As the COVID-19 pandemic continues to be a priority within our communities, vaccines remain one of our biggest protections. Nationwide vaccination drives are in full swing, and as more people look to get vaccinated, their requirements for information continue to evolve: finding vaccine availability by location, specific information about vaccination services offered, and details on appointment availability are increasingly important to know.

In March 2021, we started showing COVID-19 vaccination centers on Google, in partnership with the Ministry of Health and Family Welfare. Starting this week, for over 13,000 locations across the country, people will be able to get more helpful information about vaccine availability and appointments -- powered by real-time data from the CoWIN APIs. This includes information such as:

  • Availability of appointment slots at each center

  • Vaccines and doses offered (Dose 1 or Dose 2)

  • Expectations for pricing (Paid or Free)

  • Link to the CoWIN website for booking

Across Google Search, Maps, and Google Assistant, now find more detailed information on vaccination availability, including vaccines and doses available, appointments and more

The above information will automatically show up when users search for vaccine centers near them, or in any specific area – across Google Search, Maps and Google Assistant. In addition to English, users can also search in eight Indian languages including Hindi, Bengali, Telugu, Tamil, Malayalam, Kannada, Gujarati, and Marathi. We will continue to partner closely with the CoWIN team to extend this functionality to all vaccination centers across India.

As people continue to seek information related to the pandemic to manage their lives around it, we remain committed to finding and sharing authoritative and timely information across our platforms.

Posted by Hema Budaraju, Director, Google Search


Top questions you ask Google about privacy across our products

“Hey Google, I have some questions…” 

Privacy and security are personal. They mean different things to different people, but our commitment is the same to everyone who uses our products: we will keep your personal information private, safe, and secure. We think everyone should be in the know about what data is collected, how their information is used, and most importantly, how they control the data they share with us.

Here are some of the top questions that people commonly ask us:

Q. Is Google Assistant recording everything I say?

No, it isn’t.

Google Assistant is designed to wait in standby mode until it is activated, like when you say, "Hey Google" or "Ok Google". In standby mode, it processes short snippets of audio (a few seconds) to detect an activation (such as “Ok Google”). If no activation is detected, then those audio snippets won’t be sent or saved to Google. When an activation is detected, the Assistant comes out of standby mode to fulfill your request. The status indicator on your device lets you know when the Assistant is activated. And when it’s in standby mode, the Assistant won’t send what you are saying to Google or anyone else. To help keep you in control, we're constantly working to make the Assistant better at reducing unintended activations.

To better tailor Google Assistant to your environment, you can now adjust how sensitive your Assistant is to the activation phrase (like 'Hey Google') through the Google Home app for smart speakers and smart displays. We also provide controls to turn off cameras and mics, and when they’re active we’ll provide a clear visual indicator (like flashing dots on top of your device).

Deleting your Google Assistant activity is easy: simply use your voice. Just say something like, “Hey Google, delete this week’s activity”, or “Hey Google, delete my last conversation”, and Google Assistant will delete your Assistant activity. This will be reflected on your My Activity page, which you can also use to review and delete activity across the Google products you use. And if you have people coming over, you can activate a “Guest Mode” on Google Assistant – just say, “Hey Google, turn on Guest Mode,” and your Google Assistant interactions will not be saved to your account. 

Q. How does Google decide what ads it shows me? How can I control this?

The ads you see can be based on a number of things, such as your previous searches, the sites you visit, the ads you have clicked, and more.

For example, you may discover that you are seeing a camera ad because you’ve searched for cameras, visited photography websites or clicked on ads for cameras before. The 'Why this ad?' feature helps you understand why you are seeing a given ad. 

Data helps us personalise ads so that they're more useful to you, but we never use the content of your emails or documents, or sensitive information like health, race, religion or sexual orientation, to tailor ads to you.

It is also easy to personalize the kinds of ads that are shown to you, or even disable ads personalization completely. Visit your Ad Settings page.

Q. Are you building a profile of my personal information across your products, for targeting ads?

We do not sell your personal information — not to advertisers, not to anyone. And we don’t use information in apps where you primarily store personal content — such as Gmail, Drive, Calendar and Photos — for advertising purposes.

We use information to improve our products and services for you and for everyone. And we use anonymous, aggregated data to do so.

A small subset of information may be used to serve you relevant ads (for things you may actually want to hear about), but only with your consent. You can always turn these settings off.

It is also important to note that you can use most of Google’s products completely anonymously, without logging in -- you can search in incognito mode or clear your search history, and you can watch YouTube videos and use Maps. However, when you share your data with us, we can create a better experience with our products based on that information.

Q. Are you reading my emails to sell ads?

We do not scan or read your Gmail messages to show you ads. 

In fact, we have a host of products like Gmail, Drive and Photos that are designed to store your personal content, and this content is never used to show ads. When you use your personal Google account and open the promotions or social tabs in Gmail, you'll see ads that were selected to be the most useful and relevant for you. The process of selecting and showing personalized ads in Gmail is fully automated. The ads you see in Gmail are based on data associated with your Google Account, such as your activity in other Google services like YouTube or Search, which could affect the types of ads that you see in Gmail. To remember which ads you've dismissed, avoid showing you the same ads, and show you ads you may like better, we save your past ad interactions, like which ads you've clicked or dismissed. Google does not use keywords or messages in your inbox to show you ads – nobody reads your email in order to show you ads.

Also, if you have a work or school account, you will never be shown ads in Gmail.

You can adjust your ad settings anytime. Learn more about Gmail ads.

Q. Why do you need location information on Maps?

If you want to get from A to B, it’s quicker to have your phone tell us where you are, than to have you figure out your address or location. Location information helps in many other ways too, like helping us figure out how busy traffic is. If you choose to enable location sharing, your phone will send anonymous bits of information back to Google. This is combined with anonymous data from people around you to recognise traffic patterns.

This only happens for people who turn location history on. It is off by default. If you turn it on, but then change your mind, you can visit Your Data in Maps -- a single place for people to manage Google account location settings.

Q. What information does Google know about me? How do I control it?

You can see a summary of what Google services you use and the data saved in your account from your Google Dashboard. There are also powerful privacy controls like Activity Controls and Ad Settings, which allow you to switch the collection and use of data on or off to decide how all of Google can work better for you.

We’ve made it easier for you to make decisions about your data directly within the Google services you use every day. For example, without ever leaving Search, you can review and delete your recent search activity, get quick access to relevant privacy controls from your Google Account, and learn more about how Search works with your data. You can quickly access these controls in Search, Maps, and the Assistant.

Privacy features and controls have always been built into our services, and we’re continuously working to make it even easier to control and manage your privacy and security. But we know that the web is a constantly evolving space, where new threats and bad actors will unfortunately emerge. There will always be more work to be done, and safeguarding people who use our products and services every day will remain our focus. 

For more on how we keep you and your information private, safe and secure visit the Google Safety Center.

Posted by the Google India Team


Google I/O 2021: Being helpful in moments that matter


It’s great to be back hosting our I/O Developers Conference this year. Pulling up to our Mountain View campus this morning, I felt a sense of normalcy for the first time in a long while. Of course, it’s not the same without our developer community here in person. COVID-19 has deeply affected our entire global community over the past year and continues to take a toll. Places such as Brazil, and my home country of India, are now going through their most difficult moments of the pandemic yet. Our thoughts are with everyone who has been affected by COVID and we are all hoping for better days ahead.

The last year has put a lot into perspective. At Google, it’s also given renewed purpose to our mission to organize the world's information and make it universally accessible and useful. We continue to approach that mission with a singular goal: building a more helpful Google, for everyone. That means being helpful to people in the moments that matter and giving everyone the tools to increase their knowledge, success, health, and happiness. 

Helping in moments that matter

Sometimes it’s about helping in big moments, like keeping 150 million students and educators learning virtually over the last year with Google Classroom. Other times it’s about helping in little moments that add up to big changes for everyone. For example, we’re introducing safer routing in Maps. This AI-powered capability in Maps can identify road, weather, and traffic conditions where you are likely to brake suddenly; our aim is to reduce up to 100 million events like this every year. 

Reimagining the future of work

One of the biggest ways we can help is by reimagining the future of work. Over the last year, we’ve seen work transform in unprecedented ways, as offices and coworkers have been replaced by kitchen countertops and pets. Many companies, including ours, will continue to offer flexibility even when it’s safe to be in the same office again. Collaboration tools have never been more critical, and today we announced a new smart canvas experience in Google Workspace that enables even richer collaboration. 

Smart Canvas integration with Google Meet

Responsible next-generation AI

We’ve made remarkable advances over the past 22 years, thanks to our progress in some of the most challenging areas of AI, including translation, images and voice. These advances have powered improvements across Google products, making it possible to talk to someone in another language using Assistant’s interpreter mode, view cherished memories on Photos, or use Google Lens to solve a tricky math problem. 

We’ve also used AI to improve the core Search experience for billions of people by taking a huge leap forward in a computer’s ability to process natural language. Yet, there are still moments when computers just don’t understand us. That’s because language is endlessly complex: We use it to tell stories, crack jokes, and share ideas — weaving in concepts we’ve learned over the course of our lives. The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges. 

Today I am excited to share our latest research in natural language understanding: LaMDA. LaMDA is a language model for dialogue applications. It’s open domain, which means it is designed to converse on any topic. For example, LaMDA understands quite a bit about the planet Pluto. So if a student wanted to discover more about space, they could ask about Pluto and the model would give sensible responses, making learning even more fun and engaging. If that student then wanted to switch over to a different topic — say, how to make a good paper airplane — LaMDA could continue the conversation without any retraining.

This is one of the ways we believe LaMDA can make information and computing radically more accessible and easier to use (and you can learn more about that here). 

We have been researching and developing language models for many years. We’re focused on ensuring LaMDA meets our incredibly high standards on fairness, accuracy, safety, and privacy, and that it is developed consistently with our AI Principles. And we look forward to incorporating conversation features into products like Google Assistant, Search, and Workspace, as well as exploring how to give these capabilities to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it’s still only trained on text. When people communicate with each other they do it across images, text, audio, and video. So we need to build multimodal models, like MUM, to allow people to naturally ask questions across different types of information. With MUM you could one day plan a road trip by asking Google to “find a route with beautiful mountain views.” This is one example of how we’re making progress towards more natural and intuitive ways of interacting with Search.

Pushing the frontier of computing

Translation, image recognition, and voice recognition laid the foundation for complex models like LaMDA and multimodal models. Our compute infrastructure is how we drive and sustain these advances, and TPUs, our custom-built machine learning processors, are a big part of that. Today we announced our next generation of TPUs: the TPU v4. These are powered by the v4 chip, which is more than twice as fast as the previous generation. One pod can deliver more than one exaflop, equivalent to the computing power of 10 million laptops combined. This is the fastest system we’ve ever deployed, and a historic milestone for us. Previously, getting to an exaflop required building a custom supercomputer. And we'll soon have dozens of TPU v4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. They’ll be available to our Cloud customers later this year.

(Left) TPU v4 chip tray; (Right) TPU v4 pods at our Oklahoma data center 

It’s tremendously exciting to see this pace of innovation. As we look further into the future, there are types of problems that classical computing will not be able to solve in reasonable time. Quantum computing can help. Achieving our quantum milestone was a tremendous accomplishment, but we’re still at the beginning of a multiyear journey. We continue to work toward our next big milestone in quantum computing: building an error-corrected quantum computer, which could help us increase battery efficiency, create more sustainable energy, and improve drug discovery. To help us get there, we’ve opened a new state-of-the-art Quantum AI campus with our first quantum data center and quantum processor chip fabrication facilities.

Inside our new Quantum AI campus.

Safer with Google

At Google we know that our products can only be as helpful as they are safe. And advances in computer science and AI are how we continue to make them better. We keep more users safe by blocking malware, phishing attempts, spam messages, and potential cyber attacks than anyone else in the world.

Our focus on data minimization pushes us to do more, with less data. Two years ago at I/O, I announced Auto-Delete, which encourages users to have their activity data automatically and continuously deleted. We’ve since made Auto-Delete the default for all new Google Accounts: we now automatically delete your activity data after 18 months, unless you tell us to do it sooner. It’s now active for over 2 billion accounts.

All of our products are guided by three important principles: With one of the world’s most advanced security infrastructures, our products are secure by default. We strictly uphold responsible data practices so every product we build is private by design. And we create easy to use privacy and security settings so you’re in control.

Long term research: Project Starline

We were all grateful to have video conferencing over the last year to stay in touch with family and friends, and keep schools and businesses going. But there is no substitute for being together in the room with someone. 

Several years ago we kicked off a project called Project Starline to use technology to explore what’s possible. Using high-resolution cameras and custom-built depth sensors, it captures your shape and appearance from multiple perspectives, and then fuses them together to create an extremely detailed, real-time 3D model. The resulting data is many gigabits per second, so to send an image this size over existing networks, we developed novel compression and streaming algorithms that reduce the data by a factor of more than 100. We also developed a breakthrough light-field display that shows you a realistic representation of someone sitting in front of you. As sophisticated as the technology is, it vanishes, so you can focus on what’s most important. 

We’ve spent thousands of hours testing it at our own offices, and the results are promising. There’s also excitement from our lead enterprise partners, and we’re working with partners in health care and media to get early feedback. In pushing the boundaries of remote collaboration, we've made technical advances that will improve our entire suite of communications products. We look forward to sharing more in the months ahead.

A person having a conversation with someone over Project Starline.

Solving complex sustainability challenges

Another area of research is our work to drive forward sustainability. Sustainability has been a core value for us for more than 20 years. We were the first major company to become carbon neutral in 2007. We were the first to match our operations with 100% renewable energy in 2017, and we’ve been doing it ever since. Last year we eliminated our entire carbon legacy. 

Our next ambition is our biggest yet: operating on carbon free energy by the year 2030. This represents a significant step change from current approaches and is a moonshot on the same scale as quantum computing. It presents equally hard problems to solve, from sourcing carbon-free energy in every place we operate to ensuring it can run every hour of every day. 

Building on the first carbon-intelligent computing platform that we rolled out last year, we’ll soon be the first company to implement carbon-intelligent load shifting across both time and place within our data center network. By this time next year we’ll be shifting more than a third of non-production compute to times and places with greater availability of carbon-free energy. And we are working to apply our Cloud AI with novel drilling techniques and fiber optic sensing to deliver geothermal power in more places, starting in our Nevada data centers next year.

Investments like these are needed to get to 24/7 carbon-free energy, and it’s happening in Mountain View, California, too. We’re building our new campus to the highest sustainability standards. When completed, these buildings will feature a first-of-its-kind dragonscale solar skin, equipped with 90,000 silver solar panels and the capacity to generate nearly 7 megawatts. They will house the largest geothermal pile system in North America to help heat buildings in the winter and cool them in the summer. It’s been amazing to see it come to life.

(Left) Rendering of the new Charleston East campus in Mountain View, California; (Right) Model view with dragonscale solar skin.

A celebration of technology

I/O isn’t just a celebration of technology but of the people who use it, and build it — including the millions of developers around the world who joined us virtually today. Over the past year we’ve seen people use technology in profound ways: to keep themselves healthy and safe, to learn and grow, to connect, and to help one another through really difficult times. It’s been inspiring to see and has made us more committed than ever to being helpful in the moments that matter. 

I look forward to seeing everyone at next year’s I/O — in person, I hope. Until then, be safe and well.

Posted by Sundar Pichai, CEO of Google and Alphabet

Search, explore and shop the world’s information, powered by AI

AI advancements push the boundaries of what Google products can do. Nowhere is this clearer than at the core of our mission to make information more accessible and useful for everyone.


We've spent more than two decades developing not just a better understanding of information on the web, but a better understanding of the world. Because when we understand information, we can make it more helpful  — whether you’re a remote student learning a complex new subject, a caregiver looking for trusted information on COVID vaccines or a parent searching for the best route home.


Deeper understanding with MUM

One of the hardest problems for search engines today is helping you with complex tasks — like planning what to do on a family outing. These often require multiple searches to get the information you need. In fact, we find that it takes people eight searches on average to complete complex tasks.


With a new technology called Multitask Unified Model, or MUM, we're able to better understand much more complex questions and needs, so in the future, it will require fewer searches to get things done. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful and can multitask in order to unlock information in new ways. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models. And MUM is multimodal, so it understands information across text and images and in the future, can expand to more modalities like video and audio.


Imagine a question like: “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” This would stump search engines today, but in the future, MUM could understand this complex task and generate a response, pointing to highly relevant results to dive deeper. We’ve already started internal pilots with MUM and are excited about its potential for improving Google products.


Information comes to life with Lens and AR

People come to Google to learn new things, and visuals can make all the difference. Google Lens lets you search what you see — from your camera, your photos or even your search bar. Today we’re seeing more than 3 billion searches with Lens every month, and an increasingly popular use case is learning. For example, many students might have schoolwork in a language they aren't very familiar with. That’s why we’re updating the Translate filter in Lens so it’s easy to copy, listen to or search translated text, helping students access education content from the web in over 100 languages.


Google Lens’s Translate filter applied to homework.

AR is also a powerful tool for visual learning. With the new AR athletes in Search, you can see signature moves from some of your favorite athletes in AR — like Simone Biles’s famous balance beam routine.

Simone Biles’s balance beam routine surfaced by the AR athletes in Search feature.

Evaluate information with About This Result 

Helpful information should be credible and reliable, and especially during moments like the pandemic or elections, people turn to Google for trustworthy information. 


Our ranking systems are designed to prioritize high-quality information, but we also help you evaluate the credibility of sources, right in Google Search. Our About This Result feature provides details about a website before you visit it, including its description, when it was first indexed and whether your connection to the site is secure. 


This month, we’ll start rolling out About This Result to all English results worldwide, with more languages to come. Later this year, we’ll add even more detail, like how a site describes itself, what other sources are saying about it and related articles to check out. 


Exploring the real world with Maps

Google Maps transformed how people navigate, explore and get things done in the world — and we continue to push the boundaries of what a map can be with industry-first features like AR navigation in Live View at scale. We recently announced we’re on track to launch over 100 AI-powered improvements to Google Maps by the end of the year, and today, we’re introducing a few of the newest ones. Our new routing updates are designed to reduce the likelihood of hard-braking on your drive using machine learning and historical navigation information — which we believe could eliminate over 100 million hard-braking events in routes driven with Google Maps each year.


If you’re looking for things to do, our more tailored map will spotlight relevant places based on time of day and whether or not you’re traveling. Enhancements to Live View and detailed street maps will help you explore and get a deep understanding of an area as quickly as possible. And if you want to see how busy neighborhoods and parts of town are, you’ll be able to do this at a glance as soon as you open Maps.


More ways to shop with Google 

People are shopping across Google more than a billion times per day, and our AI-enhanced Shopping Graph — our deep understanding of products, sellers, brands, reviews, product information and inventory data — powers many features that help you find exactly what you’re looking for.


Because shopping isn’t always a linear experience, we’re introducing new ways to explore and keep track of products. Now, when you take a screenshot, Google Photos will prompt you to search the photo with Lens, so you can immediately shop for that item if you want. And on Chrome, we’ll help you keep track of shopping carts you’ve begun to fill, so you can easily resume your virtual shopping trip. We're also working with retailers to surface loyalty benefits for customers earlier, to help inform their decisions.


Last year we made it free for merchants to sell their products on Google. Now, we’re introducing a new, simplified process that helps Shopify’s 1.7 million merchants make their products discoverable across Google in just a few clicks.  


Whether we’re understanding the world’s information, or helping you understand it too, we’re dedicated to making our products more useful every day. And with the power of AI, no matter how complex your task, we’ll be able to bring you the highest quality, most relevant results.  


Posted by Prabhakar Raghavan, Senior Vice President

An update on our COVID response priorities

 Our teams at Google continue to support the tireless work of hospitals, nonprofits, and public health service providers across the country. Right now, we’re focused on three priority areas: ensuring people can access the latest and most authoritative information; amplifying vital safety and vaccination messages; and providing financial backing for affected communities, health authorities and other organizations.

Providing critical and authoritative information

On all our platforms, we’re taking steps to surface the critical information families and communities need to care for their own health and look after others.

Searches on the COVID-19 vaccine display key information around side effects, effectiveness, and registration details, while treatment-related queries surface guidance from ministry resources

When people ask questions about vaccines on Google Search, they see information panels that display the latest updates on vaccine safety, efficacy and side-effects, plus registration information that directs users to the Co-WIN website. You will also find information about prevention, self-care, and treatment under the Prevention and Treatment tab, in easy-to-understand language sourced from authorised medical sources and the Ministry of Health and Family Welfare. 

On YouTube we’re surfacing authoritative information in a set of playlists, about vaccines, preventing the spread of COVID-19, and facts from experts on COVID-19 care.

Our YouTube India channel features a set of playlists to share tips and information on COVID-19 care 

Testing and vaccination center locations

In addition to showing 2,500 testing centers on Search and Maps, we’re now sharing the locations of over 23,000 vaccination centers nationwide, in English and eight Indian languages. And we’re continuing to work closely with the Ministry of Health and Family Welfare to make more vaccination center information available to users throughout India.

Searching for vaccines in Maps and Search now shows over 23,000 vaccination centers across the country, in English and eight Indian languages

Pilot on hospital beds and medical oxygen availability

We know that some of the most crucial information people are searching for is the availability of hospital beds and access to medical oxygen. To help them find answers more easily, we’re testing a new feature using the Q&A function in Maps that enables people to ask about and share local information on availability of beds and medical oxygen in select locations. Because this is user-generated content, not information provided by authorised sources, users should verify its accuracy and freshness before relying on it.

Amplifying vital safety and vaccination messages

As well as providing authoritative answers to queries, we’re using our channels to help extend the reach of health information campaigns. That includes the ‘Get the Facts’ around vaccines campaign, to encourage people to focus on authoritative information and content for vaccines. We’re also surfacing important safety messages through promotions on the Google homepage, Doodles and reminders within our apps and services.

Via the Google Search homepage and reminders within our apps and services, we are reminding people to stay safe and stay masked, and get authoritative information on vaccines

Supporting health authorities, organizations, and affected communities

Since the second wave began, we’ve been running an internal donation campaign to raise funds for nonprofit organizations helping those most in need, including GiveIndia, Charities Aid Foundation India, GOONJ, and United Way of Mumbai. This campaign has raised over $4.6 million (INR 33 crore) to date, and continues to generate much-needed support for relief efforts. 

We recognize that many more nonprofits need donations, and that Indians are eager to help where they can—so we’ve rolled out a COVID Aid campaign on Google Pay, featuring non-profit organizations like GiveIndia, Charities Aid Foundation, Goonj, Save the Children, Seeds, UNICEF India  (National NGOs) and United Way. We want to thank all our Google Pay users who have contributed to these organisations, and we hope this effort will make a difference where it matters most. 

On Google Pay people can contribute funds to non-profit organizations involved in COVID response

As India battles this devastating wave, we’ll keep doing all we can to support the selfless individuals and committed organizations on the front lines of the response. There’s a long way to go—but standing together in solidarity, working together with determination, we can and will turn the tide.  

Posted by the Covid Response team, Google India


Helping people find credible information as India gets into vaccination overdrive

Even as our country gradually returns to regular work and life, COVID-19 continues to be a reality for many. The commencement of vaccinations is a source of hope, especially with the second phase now underway, potentially reaching 100 million people who stand to benefit from it.

As the government continues to manage the logistics of the vaccine rollout -- one of the largest in the world -- it has taken proactive steps to provide timely, accurate, and science-based information about the vaccines to the public. This is crucial because instances of misinformation and disinformation about the vaccine, its need, and its efficacy can seriously undermine this public health intervention.

As the government activates the processes involved in implementing these large-scale vaccinations, our teams have been hard at work to surface authoritative and timely information for people asking vaccine-related questions. We have worked with the Ministry of Health & Family Welfare (MoHFW) and the Bill & Melinda Gates Foundation to amplify this science-based narrative around the vaccination drive, and have been working closely with the Rapid Risk Response team at the MoHFW, which tracks misinformation across regions and languages using social media listening tools and counters it with science-based messaging on vaccines and the pandemic response overall. 

Shortly after the first phase of vaccinations commenced, to help people find credible information we rolled out knowledge panels in Google Search that show up for queries relating to the COVID vaccine. These panels provide consolidated information such as details on the two vaccines, effectiveness, safety, distribution, side effects, and more, and are available in English and eight Indian languages (Tamil, Telugu, Malayalam, Kannada, Marathi, Gujarati, Bengali, and Hindi). This information is sourced from MoHFW, provides answers to commonly asked questions, displays real-time statistics around vaccinations completed, and links to the MoHFW website for additional local resources.

Search queries on the COVID-19 vaccine display organized information on the subject including top news stories and resources from MoHFW on side effects, where to get it and more.

Our teams also supported the MoHFW in helping optimize their website for mobile viewers by improving the website’s page load times, enabling users to find information swiftly. We also helped localize their various vaccination resource pages into the eight Indian languages listed above.

On YouTube we launched information panels that show up when searching for COVID-related queries and also have a banner on the YouTube homepage, both of which redirect to key vaccine resources on the MoHFW website. We also featured FAQ videos from the MoHFW on the YouTube homepage.

With vaccinations for the vulnerable population having commenced on 1st March in thousands of hospitals across the country, we are also working with the MoHFW and the Bill & Melinda Gates Foundation to accurately surface information on vaccination centers on Google Search, Maps and Google Assistant, and expect to roll this out in the coming weeks. 

To enable government officials as they make critical decisions during these vaccination rollouts, we also deliver regular Google Trends reports on COVID vaccine queries that reflect interest around the vaccination from month to month across regions.

As COVID-19 continues to challenge our communities, we remain committed to doing all we can to assist the country’s health agencies at this key juncture of the pandemic, where the successful rollout of these large-scale vaccinations can help us collectively turn a corner and see a much-needed return to normalcy.

Posted by the Google India team

Addressing Range Anxiety with Smart Electric Vehicle Routing

Mapping algorithms used for navigation often rely on Dijkstra’s algorithm, a fundamental textbook solution for finding shortest paths in graphs. Dijkstra’s algorithm is simple and elegant -- rather than considering all possible routes (an exponential number) it iteratively improves an initial solution, and works in polynomial time. The original algorithm and practical extensions of it (such as the A* algorithm) are used millions of times per day for routing vehicles on the global road network. However, because most vehicles are gas-powered, these algorithms ignore refueling considerations: a) gas stations are usually available everywhere at the cost of a small detour, and b) the time needed to refuel is typically only a few minutes and is negligible compared to the total travel time.

This situation is different for electric vehicles (EVs). First, EV charging stations are not as commonly available as gas stations, which can cause range anxiety, the fear that the car will run out of power before reaching a charging station. This concern is common enough that it is considered one of the barriers to the widespread adoption of EVs. Second, charging an EV’s battery is a more decision-demanding task, because the charging time can be a significant fraction of the total travel time and can vary widely by station, vehicle model, and battery level. In addition, the charging time is non-linear — e.g., it takes longer to charge a battery from 90% to 100% than from 20% to 30%.

The EV can only travel a distance up to the illustrated range before needing to recharge. Different roads and different stations have different time costs. The goal is to optimize for the total trip time.

Today, we present a new approach for routing EVs, integrated into the latest release of Google Maps built into your car for participating EVs, that reduces range anxiety by integrating recharging stations into the navigational route. Based on the battery level and the destination, Maps will recommend the charging stops and the corresponding charging levels that will minimize the total duration of the trip. To accomplish this we engineered a highly scalable solution for recommending efficient routes through charging stations, which optimizes the sum of the driving time and the charging time together.

The fastest route from Berlin to Paris for a gas fueled car is shown in the top figure. The middle figure shows the optimal route for a 400 km range EV (travel time indicated - charging time excluded), where the larger white circles along the route indicate charging stops. The bottom figure shows the optimal route for a 200 km range EV.

Routing Through Charging Stations
A fundamental constraint on route selection is that the distance between recharging stops cannot be higher than what the vehicle can reach on a full charge. Consequently, the route selection model emphasizes the graph of charging stations, as opposed to the graph of road segments of the road network, where each charging station is a node and each trip between charging stations is an edge. Taking into consideration the various characteristics of each EV (such as the weight, maximum battery level, plug type, etc.) the algorithm identifies which of the edges are feasible for the EV under consideration and which are not. Once the routing request comes in, Maps EV routing augments the feasible graph with two new nodes, the origin and the destination, and with multiple new (feasible) edges that outline the potential trips from the origin to its nearby charging stations and to the destination from each of its nearby charging stations.

Routing using Dijkstra’s algorithm or A* on this graph is sufficient to give a feasible solution that optimizes travel time for drivers who do not care at all about the charging time (i.e., drivers who always fully charge their batteries at each charging station). However, such algorithms are not sufficient to account for charging times. In this case, the algorithm constructs a new graph by replicating each charging station node multiple times. Half of the copies correspond to entering the station with a partially charged battery, with a charge x ranging from 0% to 100%. The other half correspond to exiting the station with a fractional charge y (again from 0% to 100%). We add an edge from the entry node at charge x to the exit node at charge y (constrained by y > x), with a corresponding charging time to get from x to y. When the trip from Station A to Station B consumes some fraction (z) of the battery charge, we introduce an edge from every exit node of Station A (at charge x) to the corresponding entry node of Station B (at charge x-z). After performing this transformation, using Dijkstra or A* recovers the solution.
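
Here is a minimal sketch of this construction, with charge discretized to a handful of levels; the names, the discrete levels, and the `charge_time` helper are illustrative assumptions, and the production system models charging in far more detail:

```python
def build_charging_graph(stations, trips, charge_time,
                         levels=(0, 20, 40, 60, 80, 100)):
    """Build the replicated station graph as a dict of weighted edges.

    `stations` is a list of station ids. `trips` maps (a, b) to a
    (drive_time, battery_used) pair for each feasible station-to-station trip,
    with battery use measured in charge-level percent. `charge_time(s, x, y)`
    gives the (non-linear) time to charge from level x to level y at station s.
    Nodes are ("entry"/"exit", station, charge_level) triples.
    """
    edges = {}
    for s in stations:
        for x in levels:
            # Charging edges: enter at level x, leave at any higher level y.
            for y in levels:
                if y > x:
                    edges[("entry", s, x), ("exit", s, y)] = charge_time(s, x, y)
            # Passing through without charging costs no time.
            edges[("entry", s, x), ("exit", s, x)] = 0.0
    for (a, b), (drive_time, battery_used) in trips.items():
        for y in levels:
            arrival = y - battery_used
            if arrival >= 0:  # feasible only if the battery does not run out
                # Snap the arrival charge down to the nearest modeled level.
                x = max(level for level in levels if level <= arrival)
                edges[("exit", a, y), ("entry", b, x)] = drive_time
    return edges
```

Running Dijkstra or A* over these weighted edges (plus the origin and destination nodes wired to nearby stations, as described earlier) then yields a route that trades off driving time against charging time.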

An example of our node/edge replication. In this instance the algorithm opts to pass through the first station without charging and charges at the second station from 20% to 80% battery.

Graph Sparsification
To perform the above operations while addressing range anxiety with confidence, the algorithm must compute the battery consumption of each trip between stations with good precision. For this reason, Maps maintains detailed information about the road characteristics along the trip between any two stations (e.g., the length, elevation, and slope, for each segment of the trip), taking into consideration the properties of each type of EV.

Due to the volume of information required for each segment, maintaining a large number of edges can become a memory intensive task. While this is not a problem for areas where EV charging stations are sparse, there exist locations in the world (such as Northern Europe) where the density of stations is very high. In such locations, adding an edge for every pair of stations between which an EV can travel quickly grows to billions of possible edges.

The figure on the left illustrates the high density of charging stations in Northern Europe. Different colors correspond to different plug types. The figure on the right illustrates why the routing graph scales up very quickly in size as the density of stations increases. When there are many stations within range of each other, the induced routing graph is a complete graph that stores detailed information for each edge.

However, this high density implies that a trip between two stations that are relatively far apart will undoubtedly pass through multiple other stations. In this case, maintaining information about the long edge is redundant, making it possible to simply add the smaller edges (spanners) to the graph, resulting in sparser, more computationally feasible, graphs.

The spanner construction algorithm is a direct generalization of the greedy geometric spanner. The trips between charging stations are sorted from fastest to slowest and are processed in that order. For each trip between points a and b, the algorithm examines whether smaller subtrips already included in the spanner subsume the direct trip. To do so it compares the trip time and battery consumption that can be achieved using subtrips already in the spanner, against the same quantities for the direct a-b route. If they are found to be within a tiny error threshold, the direct trip from a to b is not added to the spanner, otherwise it is. Applying this sparsification algorithm has a notable impact and allows the graph to be served efficiently in responding to users’ routing requests.
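
A minimal sketch of this greedy construction follows; the `subsumed` predicate, which would be implemented as a constrained shortest-path query over the current spanner, is an assumed helper and is elided here:

```python
def greedy_spanner(trips, subsumed, epsilon=0.01):
    """Greedily build a spanner over station-to-station trips.

    `trips` is a list of (time, battery, a, b) direct trips. Trips are
    processed from fastest to slowest; a direct trip is skipped whenever
    `subsumed(spanner, a, b, time, battery, epsilon)` reports that subtrips
    already in the spanner achieve trip time and battery consumption within
    a (1 + epsilon) factor of the direct trip.
    """
    spanner = []
    for time, battery, a, b in sorted(trips):  # fastest trips first
        if not subsumed(spanner, a, b, time, battery, epsilon):
            spanner.append((time, battery, a, b))
    return spanner
```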

On the left is the original road network (EV stations in light red). The station graph in the middle has edges for all feasible trips between stations. The sparse graph on the right maintains the distances with far fewer edges.

Summary
In this work we engineer a scalable solution for routing EVs on long trips to include access to charging stations through the use of graph sparsification and novel framing of standard routing algorithms. We are excited to put algorithmic ideas and techniques in the hands of Maps users and look forward to serving stress-free routes for EV drivers across the globe!

Acknowledgements
We thank our collaborators Dixie Wang, Xin Wei Chow, Navin Gunatillaka, Stephen Broadfoot, Alex Donaldson, and Ivan Kuznetsov.

Source: Google AI Blog