
Find detailed information on vaccination availability near you

As the COVID-19 pandemic continues to be a priority within our communities, vaccines remain one of our biggest protections. Nationwide vaccination drives are in full swing, and as more people look to get vaccinated, their information needs continue to evolve: vaccine availability by location, specifics of the vaccination services offered, and details on appointment availability are all increasingly important.

In March 2021, we started showing COVID-19 vaccination centers on Google, in partnership with the Ministry of Health and Family Welfare. Starting this week, for over 13,000 locations across the country, people will be able to get more helpful information about vaccine availability and appointments -- powered by real-time data from the CoWIN APIs. This includes information such as:

  • Availability of appointment slots at each center

  • Vaccines and doses offered (Dose 1 or Dose 2)

  • Expectations for pricing (Paid or Free)

  • Link to the CoWIN website for booking

Across Google Search, Maps, and Google Assistant, now find more detailed information on vaccination availability, including vaccines and doses available, appointments and more

The above information will automatically show up when users search for vaccine centers near them, or in any specific area – across Google Search, Maps and Google Assistant. In addition to English, users can also search in eight Indian languages: Hindi, Bengali, Telugu, Tamil, Malayalam, Kannada, Gujarati, and Marathi. We will continue to partner closely with the CoWIN team to extend this functionality to all vaccination centers across India.

As people continue to seek information related to the pandemic to manage their lives around it, we remain committed to finding and sharing authoritative and timely information across our platforms.

Posted by Hema Budaraju, Director, Google Search


Top questions you ask Google about privacy across our products

“Hey Google, I have some questions…” 

Privacy and security are personal. They mean different things to different people, but our commitment is the same to everyone who uses our products: we will keep your personal information private, safe, and secure. We think everyone should be in the know about what data is collected, how their information is used, and most importantly, how they control the data they share with us.

Here are some of the top questions that people commonly ask us:

Q. Is Google Assistant recording everything I say?

No, it isn’t.

Google Assistant is designed to wait in standby mode until it is activated, like when you say, "Hey Google" or "Ok Google". In standby mode, it processes short snippets of audio (a few seconds) to detect an activation (such as “Ok Google”). If no activation is detected, then those audio snippets won’t be sent or saved to Google. When an activation is detected, the Assistant comes out of standby mode to fulfill your request. The status indicator on your device lets you know when the Assistant is activated. And when it’s in standby mode, the Assistant won’t send what you are saying to Google or anyone else. To help keep you in control, we're constantly working to make the Assistant better at reducing unintended activations.

To better tailor Google Assistant to your environment, you can now adjust how sensitive your Assistant is to the activation phrase (like 'Hey Google') through the Google Home app for smart speakers and smart displays. We also provide controls to turn off cameras and mics, and when they’re active we’ll provide a clear visual indicator (like flashing dots on top of your device).

Deleting your Google Assistant activity is easy: just use your voice. Say something like, “Hey Google, delete this week’s activity”, or “Hey Google, delete my last conversation”, and Google Assistant will delete your Assistant activity. This will be reflected on your My Activity page, which you can also use to review and delete activity across the Google products you use. And if you have people coming over, you can activate a “Guest Mode” on Google Assistant – just say, “Hey Google, turn on Guest Mode,” and your Google Assistant interactions will not be saved to your account.

Q. How does Google decide what ads it shows me? How can I control this?

The ads you see can be based on a number of things, such as your previous searches, the sites you visit, the ads you've clicked, and more.

For example, you may discover that you are seeing a camera ad because you’ve searched for cameras, visited photography websites or clicked on ads for cameras before. The 'Why this ad?' feature helps you understand why you are seeing a given ad. 

Data helps us personalise ads so that they're more useful to you, but we never use the content of your emails or documents, or sensitive information like health, race, religion or sexual orientation, to tailor ads to you.

It is also easy to personalize the kinds of ads shown to you, or even turn off ad personalization completely, from your Ad Settings page.

Q. Are you building a profile of my personal information across your products, for targeting ads?

We do not sell your personal information — not to advertisers, not to anyone. And we don’t use information in apps where you primarily store personal content — such as Gmail, Drive, Calendar and Photos — for advertising purposes.

We use information to improve our products and services for you and for everyone. And we use anonymous, aggregated data to do so.

A small subset of information may be used to serve you relevant ads (for things you may actually want to hear about), but only with your consent. You can always turn these settings off.

It is also important to note that you can use most of Google’s products completely anonymously, without logging in -- you can search in incognito mode or clear your search history, and you can watch YouTube videos and use Maps. However, when you do share data with us, we can use it to create a better experience across our products.

Q. Are you reading my emails to sell ads?

We do not scan or read your Gmail messages to show you ads. 

In fact, we have a host of products, like Gmail, Drive and Photos, that are designed to store your personal content, and this content is never used to show ads. When you use your personal Google account and open the promotions or social tabs in Gmail, you'll see ads that were selected to be the most useful and relevant for you. The process of selecting and showing personalized ads in Gmail is fully automated. These ads are based on data associated with your Google Account, such as your activity in other Google services like YouTube or Search. To remember which ads you've dismissed, avoid showing you the same ads, and show you ads you may like better, we save your past ad interactions, like which ads you've clicked or dismissed. Google does not use keywords or messages in your inbox to show you ads – nobody reads your email in order to show you ads.

Also, if you have a work or school account, you will never be shown ads in Gmail.

You can adjust your ad settings anytime. Learn more about Gmail ads.

Q. Why do you need location information on Maps?

If you want to get from A to B, it’s quicker to have your phone tell us where you are than to have you figure out your address or location. Location information helps in many other ways too, like helping us figure out how busy traffic is. If you choose to enable location sharing, your phone will send anonymous bits of information back to Google. This is combined with anonymous data from people around you to recognise traffic patterns.

This only happens for people who turn Location History on; it is off by default. If you turn it on but then change your mind, you can visit Your Data in Maps -- a single place to manage your Google account location settings.

Q. What information does Google know about me? How do I control it?

You can see a summary of what Google services you use and the data saved in your account from your Google Dashboard. There are also powerful privacy controls like Activity Controls and Ad Settings, which allow you to switch the collection and use of data on or off to decide how all of Google can work better for you.

We’ve made it easier for you to make decisions about your data directly within the Google services you use every day. For example, without ever leaving Search, you can review and delete your recent search activity, get quick access to relevant privacy controls from your Google Account, and learn more about how Search works with your data. You can quickly access these controls in Search, Maps, and the Assistant.

Privacy features and controls have always been built into our services, and we’re continuously working to make it even easier to control and manage your privacy and security. But we know that the web is a constantly evolving space, where new threats and bad actors will unfortunately emerge. There will always be more work to be done, and safeguarding people who use our products and services every day will remain our focus. 

For more on how we keep you and your information private, safe and secure visit the Google Safety Center.

Posted by the Google India Team


Google I/O 2021: Being helpful in moments that matter

 

It’s great to be back hosting our I/O Developers Conference this year. Pulling up to our Mountain View campus this morning, I felt a sense of normalcy for the first time in a long while. Of course, it’s not the same without our developer community here in person. COVID-19 has deeply affected our entire global community over the past year and continues to take a toll. Places such as Brazil, and my home country of India, are now going through their most difficult moments of the pandemic yet. Our thoughts are with everyone who has been affected by COVID and we are all hoping for better days ahead.

The last year has put a lot into perspective. At Google, it’s also given renewed purpose to our mission to organize the world's information and make it universally accessible and useful. We continue to approach that mission with a singular goal: building a more helpful Google, for everyone. That means being helpful to people in the moments that matter and giving everyone the tools to increase their knowledge, success, health, and happiness. 

Helping in moments that matter

Sometimes it’s about helping in big moments, like keeping 150 million students and educators learning virtually over the last year with Google Classroom. Other times it’s about helping in little moments that add up to big changes for everyone. For example, we’re introducing safer routing in Maps. This AI-powered capability in Maps can identify road, weather, and traffic conditions where you are likely to brake suddenly; our aim is to reduce up to 100 million events like this every year. 

Reimagining the future of work

One of the biggest ways we can help is by reimagining the future of work. Over the last year, we’ve seen work transform in unprecedented ways, as offices and coworkers have been replaced by kitchen countertops and pets. Many companies, including ours, will continue to offer flexibility even when it’s safe to be in the same office again. Collaboration tools have never been more critical, and today we announced a new smart canvas experience in Google Workspace that enables even richer collaboration. 

Smart Canvas integration with Google Meet

Responsible next-generation AI

We’ve made remarkable advances over the past 22 years, thanks to our progress in some of the most challenging areas of AI, including translation, images and voice. These advances have powered improvements across Google products, making it possible to talk to someone in another language using Assistant’s interpreter mode, view cherished memories on Photos, or use Google Lens to solve a tricky math problem. 

We’ve also used AI to improve the core Search experience for billions of people by taking a huge leap forward in a computer’s ability to process natural language. Yet, there are still moments when computers just don’t understand us. That’s because language is endlessly complex: We use it to tell stories, crack jokes, and share ideas — weaving in concepts we’ve learned over the course of our lives. The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges. 

Today I am excited to share our latest research in natural language understanding: LaMDA. LaMDA is a language model for dialogue applications. It’s open domain, which means it is designed to converse on any topic. For example, LaMDA understands quite a bit about the planet Pluto. So if a student wanted to discover more about space, they could ask about Pluto and the model would give sensible responses, making learning even more fun and engaging. If that student then wanted to switch over to a different topic — say, how to make a good paper airplane — LaMDA could continue the conversation without any retraining.

This is one of the ways we believe LaMDA can make information and computing radically more accessible and easier to use (and you can learn more about that here). 

We have been researching and developing language models for many years. We’re focused on ensuring LaMDA meets our incredibly high standards on fairness, accuracy, safety, and privacy, and that it is developed consistently with our AI Principles. And we look forward to incorporating conversation features into products like Google Assistant, Search, and Workspace, as well as exploring how to give capabilities to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it’s still only trained on text. When people communicate with each other they do it across images, text, audio, and video. So we need to build multimodal models (MUM) to allow people to naturally ask questions across different types of information. With MUM you could one day plan a road trip by asking Google to “find a route with beautiful mountain views.” This is one example of how we’re making progress towards more natural and intuitive ways of interacting with Search.

Pushing the frontier of computing

Translation, image recognition, and voice recognition laid the foundation for complex models like LaMDA and multimodal models. Our compute infrastructure is how we drive and sustain these advances, and TPUs, our custom-built machine learning processors, are a big part of that. Today we announced our next generation of TPUs: the TPU v4. These are powered by the v4 chip, which is more than twice as fast as the previous generation. One pod can deliver more than one exaflop, equivalent to the computing power of 10 million laptops combined. This is the fastest system we’ve ever deployed, and a historic milestone for us. Previously, getting to an exaflop required building a custom supercomputer. We'll soon have dozens of TPU v4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. They’ll be available to our Cloud customers later this year.
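As a rough sanity check on the laptop comparison (assuming a typical laptop sustains on the order of 100 gigaflops, our assumption rather than a figure from the announcement): 10,000,000 laptops × 10^11 FLOP/s = 10^18 FLOP/s, i.e., one exaflop.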

(Left) TPU v4 chip tray; (Right) TPU v4 pods at our Oklahoma data center 

It’s tremendously exciting to see this pace of innovation. As we look further into the future, there are types of problems that classical computing will not be able to solve in reasonable time. Quantum computing can help. Achieving our quantum milestone was a tremendous accomplishment, but we’re still at the beginning of a multiyear journey. We continue to work to get to our next big milestone in quantum computing: building an error-corrected quantum computer, which could help us increase battery efficiency, create more sustainable energy, and improve drug discovery. To help us get there, we’ve opened a new state-of-the-art Quantum AI campus with our first quantum data center and quantum processor chip fabrication facilities.

Inside our new Quantum AI campus.

Safer with Google

At Google we know that our products can only be as helpful as they are safe. And advances in computer science and AI are how we continue to make them better. We keep more users safe by blocking malware, phishing attempts, spam messages, and potential cyber attacks than anyone else in the world.

Our focus on data minimization pushes us to do more, with less data. Two years ago at I/O, I announced Auto-Delete, which encourages users to have their activity data automatically and continuously deleted. We’ve since made Auto-Delete the default for all new Google Accounts: we now automatically delete your activity data after 18 months, unless you tell us to delete it sooner. It’s now active for over 2 billion accounts.

All of our products are guided by three important principles: With one of the world’s most advanced security infrastructures, our products are secure by default. We strictly uphold responsible data practices so every product we build is private by design. And we create easy-to-use privacy and security settings so you’re in control.

Long term research: Project Starline

We were all grateful to have video conferencing over the last year to stay in touch with family and friends, and keep schools and businesses going. But there is no substitute for being together in the room with someone. 

Several years ago we kicked off Project Starline, using technology to explore what’s possible. Using high-resolution cameras and custom-built depth sensors, it captures your shape and appearance from multiple perspectives, and then fuses them together to create an extremely detailed, real-time 3D model. The resulting data is many gigabits per second, so to send an image this size over existing networks, we developed novel compression and streaming algorithms that reduce the data by a factor of more than 100. We also developed a breakthrough light-field display that shows you the realistic representation of someone sitting in front of you. As sophisticated as the technology is, it vanishes, so you can focus on what’s most important.

We’ve spent thousands of hours testing it at our own offices, and the results are promising. There’s also excitement from our lead enterprise partners, and we’re working with partners in health care and media to get early feedback. In pushing the boundaries of remote collaboration, we've made technical advances that will improve our entire suite of communications products. We look forward to sharing more in the months ahead.

A person having a conversation with someone over Project Starline.

Solving complex sustainability challenges

Another area of research is our work to drive forward sustainability. Sustainability has been a core value for us for more than 20 years. We were the first major company to become carbon neutral in 2007. We were the first to match our operations with 100% renewable energy in 2017, and we’ve been doing it ever since. Last year we eliminated our entire carbon legacy. 

Our next ambition is our biggest yet: operating on carbon-free energy by the year 2030. This represents a significant step change from current approaches and is a moonshot on the same scale as quantum computing. It presents equally hard problems to solve, from sourcing carbon-free energy in every place we operate to ensuring it can run every hour of every day.

Building on the first carbon-intelligent computing platform that we rolled out last year, we’ll soon be the first company to implement carbon-intelligent load shifting across both time and place within our data center network. By this time next year we’ll be shifting more than a third of non-production compute to times and places with greater availability of carbon-free energy. And we are working to apply our Cloud AI with novel drilling techniques and fiber optic sensing to deliver geothermal power in more places, starting in our Nevada data centers next year.

Investments like these are needed to get to 24/7 carbon-free energy, and it’s happening in Mountain View, California, too. We’re building our new campus to the highest sustainability standards. When completed, these buildings will feature a first-of-its-kind dragonscale solar skin, equipped with 90,000 silver solar panels and the capacity to generate nearly 7 megawatts. They will house the largest geothermal pile system in North America to help heat buildings in the winter and cool them in the summer. It’s been amazing to see it come to life.

(Left) Rendering of the new Charleston East campus in Mountain View, California; (Right) Model view with dragon scale solar skin.

A celebration of technology

I/O isn’t just a celebration of technology but of the people who use it, and build it — including the millions of developers around the world who joined us virtually today. Over the past year we’ve seen people use technology in profound ways: to keep themselves healthy and safe, to learn and grow, to connect, and to help one another through really difficult times. It’s been inspiring to see and has made us more committed than ever to being helpful in the moments that matter. 

I look forward to seeing everyone at next year’s I/O — in person, I hope. Until then, be safe and well.

Posted by Sundar Pichai, CEO of Google and Alphabet

Search, explore and shop the world’s information, powered by AI

AI advancements push the boundaries of what Google products can do. Nowhere is this clearer than at the core of our mission to make information more accessible and useful for everyone.


We've spent more than two decades developing not just a better understanding of information on the web, but a better understanding of the world. Because when we understand information, we can make it more helpful — whether you’re a remote student learning a complex new subject, a caregiver looking for trusted information on COVID vaccines or a parent searching for the best route home.


Deeper understanding with MUM

One of the hardest problems for search engines today is helping you with complex tasks — like planning what to do on a family outing. These often require multiple searches to get the information you need. In fact, we find that it takes people eight searches on average to complete complex tasks.


With a new technology called Multitask Unified Model, or MUM, we're able to better understand much more complex questions and needs, so in the future, it will require fewer searches to get things done. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful and can multitask in order to unlock information in new ways. MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models. And MUM is multimodal, so it understands information across text and images and in the future, can expand to more modalities like video and audio.


Imagine a question like: “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” This would stump search engines today, but in the future, MUM could understand this complex task and generate a response, pointing to highly relevant results to dive deeper. We’ve already started internal pilots with MUM and are excited about its potential for improving Google products.

 

Information comes to life with Lens and AR

People come to Google to learn new things, and visuals can make all the difference. Google Lens lets you search what you see — from your camera, your photos or even your search bar. Today we’re seeing more than 3 billion searches with Lens every month, and an increasingly popular use case is learning. For example, many students might have schoolwork in a language they aren't very familiar with. That’s why we’re updating the Translate filter in Lens so it’s easy to copy, listen to or search translated text, helping students access education content from the web in over 100 languages.

 

Google Lens’s Translate filter applied to homework.

AR is also a powerful tool for visual learning. With the new AR athletes in Search, you can see signature moves from some of your favorite athletes in AR — like Simone Biles’s famous balance beam routine.

Simone Biles’s balance beam routine surfaced by the AR athletes in Search feature.

Evaluate information with About This Result 

Helpful information should be credible and reliable, and especially during moments like the pandemic or elections, people turn to Google for trustworthy information. 

 

Our ranking systems are designed to prioritize high-quality information, but we also help you evaluate the credibility of sources, right in Google Search. Our About This Result feature provides details about a website before you visit it, including its description, when it was first indexed and whether your connection to the site is secure. 

 

 

This month, we’ll start rolling out About This Result to all English results worldwide, with more languages to come. Later this year, we’ll add even more detail, like how a site describes itself, what other sources are saying about it and related articles to check out. 

 

Exploring the real world with Maps

Google Maps transformed how people navigate, explore and get things done in the world — and we continue to push the boundaries of what a map can be with industry-first features like AR navigation in Live View at scale. We recently announced we’re on track to launch over 100 AI-powered improvements to Google Maps by the end of the year, and today, we’re introducing a few of the newest ones. Our new routing updates are designed to reduce the likelihood of hard-braking on your drive using machine learning and historical navigation information — which we believe could eliminate over 100 million hard-braking events in routes driven with Google Maps each year.

 

If you’re looking for things to do, our more tailored map will spotlight relevant places based on time of day and whether or not you’re traveling. Enhancements to Live View and detailed street maps will help you explore and get a deep understanding of an area as quickly as possible. And if you want to see how busy neighborhoods and parts of town are, you’ll be able to do this at a glance as soon as you open Maps.


More ways to shop with Google 

People are shopping across Google more than a billion times per day, and our AI-enhanced Shopping Graph — our deep understanding of products, sellers, brands, reviews, product information and inventory data — powers many features that help you find exactly what you’re looking for.


Because shopping isn’t always a linear experience, we’re introducing new ways to explore and keep track of products. Now, when you take a screenshot, Google Photos will prompt you to search the photo with Lens, so you can immediately shop for that item if you want. And on Chrome, we’ll help you keep track of shopping carts you’ve begun to fill, so you can easily resume your virtual shopping trip. We're also working with retailers to surface loyalty benefits for customers earlier, to help inform their decisions.


Last year we made it free for merchants to sell their products on Google. Now, we’re introducing a new, simplified process that helps Shopify’s 1.7 million merchants make their products discoverable across Google in just a few clicks.  


Whether we’re understanding the world’s information, or helping you understand it too, we’re dedicated to making our products more useful every day. And with the power of AI, no matter how complex your task, we’ll be able to bring you the highest quality, most relevant results.  


Posted by Prabhakar Raghavan, Senior Vice President

An update on our COVID response priorities

 Our teams at Google continue to support the tireless work of hospitals, nonprofits, and public health service providers across the country. Right now, we’re focused on three priority areas: ensuring people can access the latest and most authoritative information; amplifying vital safety and vaccination messages; and providing financial backing for affected communities, health authorities and other organizations.

Providing critical and authoritative information

On all our platforms, we’re taking steps to surface the critical information families and communities need to care for their own health and look after others.

Searches on the COVID-19 vaccine display key information around side effects, effectiveness, and registration details, while treatment-related queries surface guidance from ministry resources

When people ask questions about vaccines on Google Search, they see information panels that display the latest updates on vaccine safety, efficacy and side-effects, plus registration information that directs users to the CoWIN website. You will also find information about prevention, self-care, and treatment under the Prevention and Treatment tab, in easy-to-understand language sourced from authorised medical sources and the Ministry of Health and Family Welfare.

On YouTube, we’re surfacing authoritative information in a set of playlists covering vaccines, how to prevent the spread of COVID-19, and facts from experts on COVID-19 care.

Our YouTube India channel features a set of playlists to share tips and information on COVID-19 care 

Testing and vaccination center locations

In addition to showing 2,500 testing centers on Search and Maps, we’re now sharing the locations of over 23,000 vaccination centers nationwide, in English and eight Indian languages. And we’re continuing to work closely with the Ministry of Health and Family Welfare to make more vaccination center information available to users throughout India.

Searching for vaccines in Maps and Search now shows over 23,000 vaccination centers across the country, in English and eight Indian languages

Pilot on hospital beds and medical oxygen availability

We know that some of the most crucial information people are searching for is the availability of hospital beds and access to medical oxygen. To help them find answers more easily, we’re testing a new feature using the Q&A function in Maps that enables people to ask about and share local information on the availability of beds and medical oxygen in select locations. Because this is user-generated content rather than information from authorised sources, you may want to verify its accuracy and freshness before relying on it.

Amplifying vital safety and vaccination messages

As well as providing authoritative answers to queries, we’re using our channels to help extend the reach of health information campaigns. That includes the ‘Get the Facts’ vaccine campaign, which encourages people to seek out authoritative information and content about vaccines. We’re also surfacing important safety messages through promotions on the Google homepage, Doodles and reminders within our apps and services.

Via the Google Search homepage and reminders within our apps and services, we are reminding people to stay safe and stay masked, and get authoritative information on vaccines

Supporting health authorities, organizations, and affected communities

Since the second wave began, we’ve been running an internal donation campaign to raise funds for nonprofit organizations helping those most in need, including GiveIndia, Charities Aid Foundation India, GOONJ, and United Way of Mumbai. This campaign has raised over $4.6 million (INR 33 crore) to date, and continues to generate much-needed support for relief efforts. 

We recognize that many more nonprofits need donations, and that Indians are eager to help where they can—so we’ve rolled out a COVID Aid campaign on Google Pay, featuring non-profit organizations like GiveIndia, Charities Aid Foundation, Goonj, Save the Children, Seeds, UNICEF India (National NGOs) and United Way. We want to thank all our Google Pay users who have contributed to these organisations, and we hope this effort will make a difference where it matters most.

On Google Pay people can contribute funds to non-profit organizations involved in COVID response

As India battles this devastating wave, we’ll keep doing all we can to support the selfless individuals and committed organizations on the front lines of the response. There’s a long way to go—but standing together in solidarity, working together with determination, we can and will turn the tide.  

Posted by the Covid Response team, Google India


Helping people find credible information as India gets into vaccination overdrive

Even as our country gradually returns to regular work and life, COVID-19 continues to be a reality for many. The commencement of vaccinations is a source of hope, especially with the second phase now underway, which could potentially reach 100 million people.

As the government continues to manage the logistics of the vaccine rollout -- one of the largest in the world -- it has taken proactive steps to provide timely, accurate, and science-based information about the vaccines to the public. This is crucial because misinformation and disinformation about the vaccine, its need, and its efficacy can seriously undermine this public health intervention.

As the government activates the processes involved in implementing these large-scale vaccinations, our teams have been hard at work surfacing authoritative and timely information for people asking vaccine-related questions. We have worked with the Ministry of Health & Family Welfare (MoHFW) and the Bill & Melinda Gates Foundation to amplify this science-based narrative around the vaccination drive. We have also been working closely with the Rapid Risk Response team at the MoHFW, which tracks misinformation across regions and languages using social media listening tools and counters it with science-based messaging on vaccines and the overall pandemic response.

To help people find credible information, shortly after the first phase of vaccinations commenced we rolled out knowledge panels in Google Search that show up for queries relating to the COVID vaccine. These panels provide consolidated information -- such as details on the two vaccines, their effectiveness, safety, distribution, and side effects -- and are available in English and eight Indian languages (Tamil, Telugu, Malayalam, Kannada, Marathi, Gujarati, Bengali, and Hindi). This information is sourced from the MoHFW, provides answers to commonly asked questions, displays real-time statistics on vaccinations completed, and links to the MoHFW website for additional local resources.

Search queries on the COVID-19 vaccine display organized information on the subject including top news stories and resources from MoHFW on side effects, where to get it and more.

Our teams also supported the MoHFW in helping optimize their website for mobile viewers by improving the website’s page load times, enabling users to find information swiftly. We also helped localize their various vaccination resource pages into the eight Indian languages listed above.

On YouTube we launched information panels that show up when searching for COVID-related queries and also have a banner on the YouTube homepage, both of which redirect to key vaccine resources on the MoHFW website. We also featured FAQ videos from the MoHFW on the YouTube homepage.

With vaccinations for the vulnerable population having commenced on 1st March in thousands of hospitals across the country, we are also working with the MoHFW and the Bill & Melinda Gates Foundation to accurately surface information on vaccination centers on Google Search, Maps and Google Assistant, and expect to roll this out in the coming weeks.

To support government officials as they make critical decisions during these vaccination rollouts, we also deliver regular Google Trends reports on COVID vaccine queries, reflecting month-to-month interest in vaccination across regions.

As COVID-19 continues to challenge our communities, we remain committed to doing all we can to assist the country’s health agencies at this key juncture of the pandemic, where the successful rollout of these large-scale vaccinations can help us collectively turn a corner and see a much-needed return to normalcy.

Posted by the Google India team

Addressing Range Anxiety with Smart Electric Vehicle Routing

Mapping algorithms used for navigation often rely on Dijkstra’s algorithm, a fundamental textbook solution for finding shortest paths in graphs. Dijkstra’s algorithm is simple and elegant -- rather than considering all possible routes (an exponential number), it iteratively improves an initial solution, and works in polynomial time. The original algorithm and practical extensions of it (such as the A* algorithm) are used millions of times per day for routing vehicles on the global road network. However, because most vehicles are gas-powered, these algorithms ignore refueling considerations: a) gas stations are usually available everywhere at the cost of a small detour, and b) the time needed to refuel is typically only a few minutes and is negligible compared to the total travel time.
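To make that baseline concrete, here is a minimal Python sketch of textbook Dijkstra over an adjacency-list graph; the graph format and names are our own illustration, not Google's production code.

    import heapq

    def dijkstra(graph, source):
        """Shortest travel times from `source`, where `graph` maps each
        node to a list of (neighbor, edge_cost) pairs."""
        dist = {source: 0.0}
        frontier = [(0.0, source)]            # min-heap of (cost, node)
        while frontier:
            d, u = heapq.heappop(frontier)
            if d > dist.get(u, float("inf")):
                continue                      # stale entry, already improved
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(frontier, (d + w, v))
        return dist

    # Example: travel times between intersections a, b, c.
    roads = {"a": [("b", 4.0), ("c", 2.0)], "c": [("b", 1.0)], "b": []}
    print(dijkstra(roads, "a"))   # {'a': 0.0, 'b': 3.0, 'c': 2.0}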

This situation is different for electric vehicles (EVs). First, EV charging stations are not as commonly available as gas stations, which can cause range anxiety, the fear that the car will run out of power before reaching a charging station. This concern is common enough that it is considered one of the barriers to the widespread adoption of EVs. Second, charging an EV’s battery is a more decision-demanding task, because the charging time can be a significant fraction of the total travel time and can vary widely by station, vehicle model, and battery level. In addition, the charging time is non-linear — e.g., it takes longer to charge a battery from 90% to 100% than from 20% to 30%.

The EV can only travel a distance up to the illustrated range before needing to recharge. Different roads and different stations have different time costs. The goal is to optimize for the total trip time.

Today, we present a new approach to EV routing, integrated into the latest release of Google Maps built into your car for participating EVs, that reduces range anxiety by integrating recharging stations into the navigational route. Based on the battery level and the destination, Maps will recommend the charging stops and the corresponding charging levels that will minimize the total duration of the trip. To accomplish this we engineered a highly scalable solution for recommending efficient routes through charging stations, which optimizes the sum of the driving time and the charging time together.

The fastest route from Berlin to Paris for a gas fueled car is shown in the top figure. The middle figure shows the optimal route for a 400 km range EV (travel time indicated - charging time excluded), where the larger white circles along the route indicate charging stops. The bottom figure shows the optimal route for a 200 km range EV.

Routing Through Charging Stations
A fundamental constraint on route selection is that the distance between recharging stops cannot be higher than what the vehicle can reach on a full charge. Consequently, the route selection model emphasizes the graph of charging stations, as opposed to the graph of road segments of the road network, where each charging station is a node and each trip between charging stations is an edge. Taking into consideration the various characteristics of each EV (such as the weight, maximum battery level, plug type, etc.) the algorithm identifies which of the edges are feasible for the EV under consideration and which are not. Once the routing request comes in, Maps EV routing augments the feasible graph with two new nodes, the origin and the destination, and with multiple new (feasible) edges that outline the potential trips from the origin to its nearby charging stations and to the destination from each of its nearby charging stations.
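A minimal sketch of this feasibility filtering, assuming a hypothetical travel(a, b, ev) helper that returns the driving time and the fraction of a full battery used on the best road route between two stations:

    def feasible_station_edges(stations, ev, travel):
        """Keep only station-to-station trips the EV can finish on a full
        charge. `travel(a, b, ev)` is an assumed helper returning
        (driving_time, battery_fraction_used) for the best road route,
        taking the EV's weight, plug type, etc. into account."""
        edges = []
        for a in stations:
            for b in stations:
                if a == b:
                    continue
                time, used = travel(a, b, ev)
                if used <= 1.0:               # reachable on a full battery
                    edges.append((a, b, time, used))
        return edges

    # The trip origin and destination are handled the same way,
    # appended to `stations` as two extra nodes.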

Routing using Dijkstra’s algorithm or A* on this graph is sufficient to give a feasible solution that optimizes the travel time for drivers who do not care at all about the charging time (i.e., drivers who always fully charge their batteries at each charging station). However, such algorithms are not sufficient to account for charging times. In this case, the algorithm constructs a new graph by replicating each charging station node multiple times. Half of the copies correspond to entering the station with a partially charged battery, with a charge, x, ranging from 0%-100%. The other half correspond to exiting the station with a fractional charge, y (again from 0%-100%). We add an edge from the entry node at charge x to the exit node at charge y (constrained by y > x), with a corresponding charging time to get from x to y. When the trip from Station A to Station B spends some fraction (z) of the battery charge, we introduce an edge from every exit node of Station A to the corresponding entry node of Station B (at charge x-z). After performing this transformation, using Dijkstra or A* recovers the solution.

An example of our node/edge replication. In this instance the algorithm opts to pass through the first station without charging and charges at the second station from 20% to 80% battery.
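The replication step can be sketched as follows, discretizing charge to a fixed grid and using a stand-in charge_time function for the non-linear charging curve (both are assumptions for illustration):

    LEVELS = [i / 10 for i in range(11)]      # 0.0, 0.1, ..., 1.0

    def station_charging_edges(station, charge_time):
        """Within one station: an edge from every entry node (arriving at
        charge x) to every exit node (leaving at charge y > x), weighted
        by the time to charge from x to y at this station."""
        return [((station, "in", x), (station, "out", y),
                 charge_time(station, x, y))
                for x in LEVELS for y in LEVELS if y > x]

    def trip_edges(a, b, driving_time, z):
        """Between stations: a trip burning fraction z of the battery links
        each exit node of A at charge x to the entry node of B at x - z."""
        return [((a, "out", x), (b, "in", round(x - z, 1)), driving_time)
                for x in LEVELS if round(x - z, 1) >= 0]

Running Dijkstra or A* over the union of these within-station and between-station edges then jointly chooses both the route and how much to charge at each stop.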

Graph Sparsification
To perform the above operations while addressing range anxiety with confidence, the algorithm must compute the battery consumption of each trip between stations with good precision. For this reason, Maps maintains detailed information about the road characteristics along the trip between any two stations (e.g., the length, elevation, and slope, for each segment of the trip), taking into consideration the properties of each type of EV.

Due to the volume of information required for each segment, maintaining a large number of edges can become a memory intensive task. While this is not a problem for areas where EV charging stations are sparse, there exist locations in the world (such as Northern Europe) where the density of stations is very high. In such locations, adding an edge for every pair of stations between which an EV can travel quickly grows to billions of possible edges.

The figure on the left illustrates the high density of charging stations in Northern Europe. Different colors correspond to different plug types. The figure on the right illustrates why the routing graph scales up very quickly in size as the density of stations increases. When there are many stations within range of each other, the induced routing graph is a complete graph that stores detailed information for each edge.

However, this high density implies that a trip between two stations that are relatively far apart will undoubtedly pass through multiple other stations. In this case, maintaining information about the long edge is redundant, making it possible to simply include the smaller edges (spanners) in the graph, resulting in sparser, more computationally feasible graphs.

The spanner construction algorithm is a direct generalization of the greedy geometric spanner. The trips between charging stations are sorted from fastest to slowest and are processed in that order. For each trip between points a and b, the algorithm examines whether smaller subtrips already included in the spanner subsume the direct trip. To do so it compares the trip time and battery consumption that can be achieved using subtrips already in the spanner, against the same quantities for the direct a-b route. If they are found to be within a tiny error threshold, the direct trip from a to b is not added to the spanner, otherwise it is. Applying this sparsification algorithm has a notable impact and allows the graph to be served efficiently in responding to users’ routing requests.

On the left is the original road network (EV stations in light red). The station graph in the middle has edges for all feasible trips between stations. The sparse graph on the right maintains the distances with much fewer edges.
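A simplified, single-criterion sketch of that greedy construction (the production version compares both trip time and battery consumption; eps is an assumed error threshold):

    import heapq
    from collections import defaultdict

    def greedy_spanner(trips, eps=0.01):
        """`trips` is a list of (a, b, time) tuples between stations.
        Process trips fastest-first; keep a direct trip only if the best
        path through edges already in the spanner is more than a factor
        (1 + eps) slower than the direct trip."""
        adj = defaultdict(list)               # spanner accepted so far

        def time_in_spanner(src, dst):
            # Plain Dijkstra restricted to already-accepted edges.
            dist, heap = {src: 0.0}, [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    return d
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj[u]:
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return float("inf")

        kept = []
        for a, b, t in sorted(trips, key=lambda trip: trip[2]):
            if time_in_spanner(a, b) > (1 + eps) * t:   # not yet subsumed
                adj[a].append((b, t))
                kept.append((a, b, t))
        return kept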

Summary
In this work we engineer a scalable solution for routing EVs on long trips to include access to charging stations through the use of graph sparsification and novel framing of standard routing algorithms. We are excited to put algorithmic ideas and techniques in the hands of Maps users and look forward to serving stress-free routes for EV drivers across the globe!

Acknowledgements
We thank our collaborators Dixie Wang, Xin Wei Chow, Navin Gunatillaka, Stephen Broadfoot, Alex Donaldson, and Ivan Kuznetsov.

Source: Google AI Blog


Improving Indian Language Transliterations in Google Maps

Nearly 75% of the population of India — the country with the second highest number of internet users in the world — interacts with the web primarily using Indian languages, rather than English. Over the next five years, that number is expected to rise to 90%. To make Google Maps as accessible as possible to the next billion users, it must allow people to use it in their preferred language, enabling them to explore anywhere in the world.

However, the names of most Indian places of interest (POIs) in Google Maps are not generally available in the native scripts of the languages of India. These names are often in English and may be combined with acronyms based on the Latin script, as well as Indian language words and names. Addressing such mixed-language representations requires a transliteration system that maps characters from one script to another, based on the source and target languages, while accounting for the phonetic properties of the words as well.

For example, consider a user in Ahmedabad, Gujarat, who is looking for a nearby hospital, KD Hospital. They issue the search query, કેડી હોસ્પિટલ, in the native script of Gujarati, the 6th most widely spoken language in India. Here, કેડી (“kay-dee”) is the sounding out of the acronym KD, and હોસ્પિટલ is “hospital”. In this search, Google Maps knows to look for hospitals, but it doesn't understand that કેડી is KD, hence it finds another hospital, CIMS. As a consequence of the relative sparsity of names available in the Gujarati script for places of interest (POIs) in India, instead of their desired result, the user is shown a result that is further away.

To address this challenge, we have built an ensemble of learned models to transliterate names of Latin script POIs into 10 languages prominent in India: Hindi, Bangla, Marathi, Telugu, Tamil, Gujarati, Kannada, Malayalam, Punjabi, and Odia. Using this ensemble, we have added names in these languages to millions of POIs in India, increasing the coverage nearly twenty-fold in some languages. This will immediately benefit millions of existing Indian users who don't speak English, enabling them to find doctors, hospitals, grocery stores, banks, bus stops, train stations and other essential services in their own language.

Transliteration vs. Transcription vs. Translation

Our goal was to design a system that will transliterate from a reference Latin script name into the scripts and orthographies native to the above-mentioned languages. For example, the Devanagari script is the native script for both Hindi and Marathi (the language native to Nagpur, Maharashtra). Transliterating the Latin script names for NIT Garden and Chandramani Garden, both POIs in Nagpur, results in एनआईटी गार्डन and चंद्रमणी गार्डन, respectively, depending on the specific language’s orthography in that script.

It is important to note that the transliterated POI names are not translations. Transliteration is only concerned with writing the same words in a different script, much like an English language newspaper might choose to write the name Горбачёв from the Cyrillic script as “Gorbachev” for their readers who do not read the Cyrillic script. For example, the second word in both of the transliterated POI names above is still pronounced “garden”, and the second word of the Gujarati example earlier is still “hospital” — they remain the English words “garden” and “hospital”, just written in the other script. Indeed, common English words are frequently used in POI names in India, even when written in the native script. How the name is written in these scripts is largely driven by its pronunciation; so एनआईटी from the acronym NIT is pronounced “en-aye-tee”, not as the English word “nit”. Knowing that NIT is a common acronym from the region is one piece of evidence that can be used when deriving the correct transliteration.

Note also that, while we use the term transliteration, following convention in the NLP community for mapping directly between writing systems, romanization in South Asian languages regardless of the script is generally pronunciation-driven, and hence one could call these methods transcription rather than transliteration. The task remains, however, mapping between scripts, since pronunciation is only relatively coarsely captured in the Latin script for these languages, and there remain many script-specific correspondences that must be accounted for. This, coupled with the lack of standard spelling in the Latin script and the resulting variability, is what makes the task challenging.

Transliteration Ensemble

We use an ensemble of models to automatically transliterate from the reference Latin script name (such as NIT Garden or Chandramani Garden) into the scripts and orthographies native to the above-mentioned languages. Candidate transliterations are derived from a pair of sequence-to-sequence (seq2seq) models. One is a finite-state model for general text transliteration, trained in a manner similar to models used by Gboard on-device for transliteration keyboards. The other is a neural long short-term memory (LSTM) model trained, in part, on the publicly released Dakshina dataset. This dataset contains Latin and native script data drawn from Wikipedia in 12 South Asian languages, including all but one of the languages mentioned above, and permits training and evaluation of various transliteration methods. Because the two models have such different characteristics, together they produce a greater variety of transliteration candidates.

To deal with the tricky phenomena of acronyms (such as the “NIT” and “KD” examples above), we developed a specialized transliteration module that generates additional candidate transliterations for these cases.

For each native language script, the ensemble makes use of specialized romanization dictionaries of varying provenance that are tailored for place names, proper names, or common words. Examples of such romanization dictionaries are found in the Dakshina dataset.
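Conceptually, candidate generation is a union over these complementary sources. In the sketch below the generators are passed in as callables, and the toy lambdas merely stand in for the real finite-state and LSTM models:

    def candidate_transliterations(latin_name, lang, sources):
        """Pool native-script candidates from several generators: the
        finite-state model, the LSTM model, the acronym module, and
        romanization-dictionary lookups."""
        pool = set()
        for generate in sources:
            pool |= set(generate(latin_name, lang))
        return pool

    # Toy stand-ins for the real components (illustration only):
    toy_sources = [
        lambda name, lang: {"एनआईटी गार्डन"},    # pretend FST output
        lambda name, lang: {"एनआइटी गार्डन"},    # pretend LSTM variant
    ]
    print(candidate_transliterations("NIT Garden", "hi", toy_sources))
    # prints the pooled candidate set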

Scoring in the Ensemble

The ensemble combines scores for the possible transliterations in a weighted mixture, the parameters of which are tuned specifically for POI name accuracy using small targeted development sets for such names.

For each native script token in candidate transliterations, the ensemble also weights the result according to its frequency in a very large sample of online text. Additional candidate scoring is based on a deterministic romanization approach derived from the ISO 15919 romanization standard, which maps each native script token to a unique Latin script string. This string allows the ensemble to track certain key correspondences when compared to the original Latin script token being transliterated, even though the ISO-derived mapping itself does not always perfectly correspond to how the given native script word is typically written in the Latin script.

In aggregate, these many moving parts provide substantially higher quality transliterations than possible for any of the individual methods alone.
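One plausible shape for this scoring, where the functional form, weight names, and frequency prior are our illustrative assumptions rather than the production formula:

    import math

    def ensemble_score(candidate, source_scores, weights, token_counts):
        """Combine per-source log-scores with mixture weights tuned on
        small POI development sets, plus a prior favoring native-script
        tokens that appear frequently in a large online text sample."""
        score = sum(weights[src] * s for src, s in source_scores.items())
        for token in candidate.split():
            score += weights["freq"] * math.log1p(token_counts.get(token, 0))
        return score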

Coverage

The following table provides the per-language quality and coverage improvements due to the ensemble over existing automatic transliterations of POI names. The coverage improvement measures the increase in items for which an automatic transliteration has been made available. Quality improvement measures the ratio of updated transliterations that were judged to be improvements versus those that were judged to be inferior to existing automatic transliterations.

Language      Coverage Improvement      Quality Improvement
Hindi         3.2x                      1.8x
Bengali       19x                       3.3x
Marathi       19x                       2.9x
Telugu        3.9x                      2.6x
Tamil         19x                       3.6x
Gujarati      19x                       2.5x
Kannada       24x                       2.3x
Malayalam     24x                       1.7x
Odia          960x                      *
Punjabi       24x                       *

* Unknown / No Baseline.


Conclusion

As with any machine learned system, the resulting automatic transliterations may contain a few errors or infelicities, but the large increase in coverage in these widely spoken languages marks a substantial expansion of the accessibility of information within Google Maps in India. Future work will include using the ensemble for transliteration of other classes of entities within Maps and its extension to other languages and scripts, including Perso-Arabic scripts, which are also commonly used in the region.


Acknowledgments: This work was a collaboration between the authors and Jacob Farner, Jonathan Herbert, Anna Katanova, Andre Lebedev, Chris Miles, Brian Roark, Anurag Sharma, Kevin Wang, Andy Wildenberg, and many others.

Posted by Cibu Johny, Software Engineer, Google Research and Saumya Dalal, Product Manager, Google Geo


Improving Indian Language Transliterations in Google Maps

Nearly 75% of India’s population — which possesses the second highest number of internet users in the world — interacts with the web primarily using Indian languages, rather than English. Over the next five years, that number is expected to rise to 90%. In order to make Google Maps as accessible as possible to the next billion users, it must allow people to use it in their preferred language, enabling them to explore anywhere in the world.

However, the names of most Indian places of interest (POIs) in Google Maps are not generally available in the native scripts of the languages of India. These names are often in English and may be combined with acronyms based on the Latin script, as well as Indian language words and names. Addressing such mixed-language representations requires a transliteration system that maps characters from one script to another, based on the source and target languages, while accounting for the phonetic properties of the words as well.

For example, consider a user in Ahmedabad, Gujarat, who is looking for a nearby hospital, KD Hospital. They issue the search query, કેડી હોસ્પિટલ, in the native script of Gujarati, the 6th most widely spoken language in India. Here, કેડી (“kay-dee”) is the sounding out of the acronym KD, and હોસ્પિટલ is “hospital”. In this search, Google Maps knows to look for hospitals, but it doesn't understand that કેડી is KD, hence it finds another hospital, CIMS. As a consequence of the relative sparsity of names available in the Gujarati script for places of interest (POIs) in India, instead of their desired result, the user is shown a result that is further away.


To address this challenge, we have built an ensemble of learned models to transliterate names of Latin script POIs into 10 languages prominent in India: Hindi, Bangla, Marathi, Telugu, Tamil, Gujarati, Kannada, Malayalam, Punjabi, and Odia. Using this ensemble, we have added names in these languages to millions of POIs in India, increasing the coverage nearly twenty-fold in some languages. This will immediately benefit millions of existing Indian users who don't speak English, enabling them to find doctors, hospitals, grocery stores, banks, bus stops, train stations and other essential services in their own language.

Transliteration vs. Transcription vs. Translation
Our goal was to design a system that will transliterate from a reference Latin script name into the scripts and orthographies native to the above-mentioned languages. For example, the Devanagari script is the native script for both Hindi and Marathi (the language native to Nagpur, Maharashtra). Transliterating the Latin script names for NIT Garden and Chandramani Garden, both POIs in Nagpur, results in एनआईटी गार्डन and चंद्रमणी गार्डन, respectively, depending on the specific language’s orthography in that script.

It is important to note that the transliterated POI names are not translations. Transliteration is only concerned with writing the same words in a different script, much like an English language newspaper might choose to write the name Горбачёв from the Cyrillic script as “Gorbachev” for their readers who do not read the Cyrillic script. For example, the second word in both of the transliterated POI names above is still pronounced “garden”, and the second word of the Gujarati example earlier is still “hospital” — they remain the English words “garden” and “hospital”, just written in the other script. Indeed, common English words are frequently used in POI names in India, even when written in the native script. How the name is written in these scripts is largely driven by its pronunciation; so एनआईटी from the acronym NIT is pronounced “en-aye-tee”, not as the English word “nit”. Knowing that NIT is a common acronym from the region is one piece of evidence that can be used when deriving the correct transliteration.

Note also that, while we use the term transliteration, following convention in the NLP community for mapping directly between writing systems, romanization in South Asian languages regardless of the script is generally pronunciation-driven, and hence one could call these methods transcription rather than transliteration. The task remains, however, mapping between scripts, since pronunciation is only relatively coarsely captured in the Latin script for these languages, and there remain many script-specific correspondences that must be accounted for. This, coupled with the lack of standard spelling in the Latin script and the resulting variability, is what makes the task challenging.

Transliteration Ensemble
We use an ensemble of models to automatically transliterate from the reference Latin script name (such as NIT Garden or Chandramani Garden) into the scripts and orthographies native to the above-mentioned languages. Candidate transliterations are derived from a pair of sequence-to-sequence (seq2seq) models. One is a finite-state model for general text transliteration, trained in a manner similar to models used by Gboard on-device for transliteration keyboards. The other is a neural long short-term memory (LSTM) model trained, in part, on the publicly released Dakshina dataset. This dataset contains Latin and native script data drawn from Wikipedia in 12 South Asian languages, including all but one of the languages mentioned above, and permits training and evaluation of various transliteration methods. Because the two models have such different characteristics, together they produce a greater variety of transliteration candidates.
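To make the candidate-generation step concrete, here is a minimal Python sketch of pooling hypotheses from the two models. The fst_model and lstm_model objects and their top_k method are hypothetical stand-ins for the finite-state and LSTM transliterators described above, not the production interfaces.

    # Hypothetical sketch: pool transliteration candidates from two models.
    # `fst_model` and `lstm_model` stand in for the finite-state and LSTM
    # seq2seq transliterators; each is assumed to return (candidate, score)
    # pairs for a Latin-script input token.
    def generate_candidates(latin_token, fst_model, lstm_model, k=5):
        candidates = {}
        for candidate, score in fst_model.top_k(latin_token, k):
            candidates.setdefault(candidate, []).append(("fst", score))
        for candidate, score in lstm_model.top_k(latin_token, k):
            candidates.setdefault(candidate, []).append(("lstm", score))
        # Keeping the union of both models' hypotheses preserves the
        # diversity the ensemble relies on: each model proposes strings
        # that the other may miss.
        return candidates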

To deal with the tricky phenomena of acronyms (such as the “NIT” and “KD” examples above), we developed a specialized transliteration module that generates additional candidate transliterations for these cases.
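As an illustration of what such an acronym module might do, the sketch below spells out a Latin-script acronym letter by letter using native-script renderings of the English letter names. The small Devanagari table here is a hand-written assumption for illustration only; the production module is learned and far more general.

    # Illustrative acronym handling: spell an acronym letter by letter using
    # native-script renderings of English letter names (Devanagari shown).
    # This tiny lookup table is an assumption covering only a few letters.
    DEVANAGARI_LETTER_NAMES = {
        "N": "एन",   # "en"
        "I": "आई",   # "aye"
        "T": "टी",   # "tee"
        "K": "के",   # "kay"
        "D": "डी",   # "dee"
    }

    def transliterate_acronym(acronym):
        # Treat all-caps alphabetic tokens as acronym candidates.
        if not (acronym.isalpha() and acronym.isupper()):
            return None
        try:
            return "".join(DEVANAGARI_LETTER_NAMES[c] for c in acronym)
        except KeyError:
            return None  # A letter we have no rendering for.

    print(transliterate_acronym("NIT"))  # एनआईटी ("en-aye-tee"), as above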

For each native language script, the ensemble makes use of specialized romanization dictionaries of varying provenance that are tailored for place names, proper names, or common words. Examples of such romanization dictionaries are found in the Dakshina dataset.

Scoring in the Ensemble
The ensemble combines scores for the possible transliterations in a weighted mixture, the parameters of which are tuned specifically for POI name accuracy using small targeted development sets for such names.

For each native script token in candidate transliterations, the ensemble also weights the result according to its frequency in a very large sample of on-line text. Additional candidate scoring is based on a deterministic romanization approach derived from the ISO 15919 romanization standard, which maps each native script token to a unique Latin script string. This string allows the ensemble to track certain key correspondences when compared to the original Latin script token being transliterated, even though the ISO-derived mapping itself does not always perfectly correspond to how the given native script word is typically written in the Latin script.

In aggregate, these many moving parts provide substantially higher quality transliterations than possible for any of the individual methods alone.
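A minimal sketch of what such a weighted combination might look like is below. The weights are made up, and token_log_frequency and iso15919_romanize are hypothetical helpers standing in for the corpus statistics and the ISO 15919-derived mapping described above.

    import difflib

    # Made-up weights; the real system tunes these on small development
    # sets of POI names.
    WEIGHTS = {"model": 1.0, "frequency": 0.3, "iso_similarity": 0.5}

    def iso_similarity(latin_source, native_candidate, iso15919_romanize):
        # Compare the original Latin token with the ISO-derived romanization
        # of the candidate; high similarity suggests the key script
        # correspondences have been preserved.
        romanized = iso15919_romanize(native_candidate)
        return difflib.SequenceMatcher(
            None, latin_source.lower(), romanized.lower()).ratio()

    def ensemble_score(latin_source, candidate, model_score,
                       token_log_frequency, iso15919_romanize):
        # Weighted mixture of model score, corpus frequency, and the
        # ISO 15919-based similarity signal.
        return (WEIGHTS["model"] * model_score
                + WEIGHTS["frequency"] * token_log_frequency(candidate)
                + WEIGHTS["iso_similarity"]
                * iso_similarity(latin_source, candidate, iso15919_romanize))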

Coverage
The following table provides the per-language quality and coverage improvements due to the ensemble over existing automatic transliterations of POI names. The coverage improvement measures the increase in items for which an automatic transliteration has been made available. Quality improvement measures the ratio of updated transliterations that were judged to be improvements versus those that were judged to be inferior to existing automatic transliterations.

Language      Coverage Improvement      Quality Improvement
Hindi         3.2x                      1.8x
Bengali       19x                       3.3x
Marathi       19x                       2.9x
Telugu        3.9x                      2.6x
Tamil         19x                       3.6x
Gujarati      19x                       2.5x
Kannada       24x                       2.3x
Malayalam     24x                       1.7x
Odia          960x                      *
Punjabi       24x                       *

* Unknown / no baseline.

Conclusion
As with any machine learned system, the resulting automatic transliterations may contain a few errors or infelicities, but the large increase in coverage in these widely spoken languages marks a substantial expansion of the accessibility of information within Google Maps in India. Future work will include using the ensemble for transliteration of other classes of entities within Maps and its extension to other languages and scripts, including Perso-Arabic scripts, which are also commonly used in the region.

Acknowledgments
This work was a collaboration between the authors and Jacob Farner, Jonathan Herbert, Anna Katanova, Andre Lebedev, Chris Miles, Brian Roark, Anurag Sharma, Kevin Wang, Andy Wildenberg, and many others.

Source: Google AI Blog


“L10n” – Localisation: Breaking down language barriers to unleash the benefits of the internet for all Indians

In July, at the Google for India event, we outlined our vision to make the Internet helpful for a billion Indians, and power the growth of India’s digital economy. One critical challenge we need to overcome is India’s vast linguistic diversity, with dialects changing every hundred kilometres. More often than not, one language doesn’t map seamlessly to another: a word in Bengali roughly translates to a full sentence in Tamil, and there are expressions in Urdu that have no adequately evocative equivalent in Hindi.


This poses a formidable challenge for technology developers, who rely on commonly understood visual and spoken idioms to make tech products work universally. 


We realised early on that there was no way to simplify this challenge: there wasn’t any one common minimum that could address the needs of every potential user in this country. If we hoped to bring the potential of the internet within reach of every user in India, we had to invest in building products, content and tools in every popularly spoken Indian language.


India’s digital transformation will be incomplete if English proficiency continues to be the entry barrier for basic and potent uses of the Internet such as buying and selling online, finding jobs, using net banking and digital payments or getting access to information and registering for government schemes.


The work, though underway, is far from done. We are driving a 3-point strategy to truly digitize India:


  1. Invest in ML & AI efforts at Google’s research center in India, to make advances in machine learning and AI models accessible to everyone across the ecosystem.

  2. Partner with innovative local startups who are building solutions that cater to the needs of Indians in local languages.

  3. Drastically improve the experience of Google products and services for Indian language users.


And so today, we are happy to announce a range of features to help deliver an even richer language experience to millions across India.

Easily toggling between English and Indian language results

Four years ago we made it easier for people in states with a significant Hindi-speaking population to flip between English and Hindi results for a search query, by introducing a simple ‘chip’ or tab they could tap to see results in their preferred language. In fact, since the launch of this Hindi chip and other language features, we have seen more than a 10X increase in Hindi queries in India.

We are now making it easier to toggle Search results between English and four additional Indian languages: Tamil, Telugu, Bangla and Marathi.

People can now tap a chip to see Search results in their local language

Understanding which language content to surface, when

Typing in an Indian language in its native script is typically more difficult than typing in English, and can often take three times as long. As a result, many people search in English even if they would really prefer to see results in a local language they understand.

Search will show relevant results in more Indian languages

Over the next month, Search will start to show relevant content in supported Indian languages where appropriate, even when the query is typed in English. This functionality will also better serve bilingual people who are comfortable reading both English and an Indian language. It will roll out in five Indian languages: Hindi, Bangla, Marathi, Tamil, and Telugu.

Enabling people to use apps in the language of their choice

Just like you use different tools for different tasks, we know (because we do it ourselves) that people often select a specific language for a particular situation. Rather than guessing preferences, we launched the ability to easily change the language of Google Assistant and Discover to be different from the phone language. Today in India, more than 50 percent of the content viewed on Google Discover is in Indian languages. A third of Google Assistant users in India use it in an Indian language, and since the launch of the Assistant language picker, queries in Indian languages have doubled.

Maps will now enable people to select from nine Indian languages

We are now extending this ability to Google Maps, where users can quickly and easily change their Maps experience into one of nine Indian languages, by simply opening the app, going to Settings, and tapping ‘App language’. This will allow anyone to search for places, get directions and navigation, and interact with the Map in their preferred local language.

Homework help in Hindi (and English)

Meaning is also communicated with images, and this is where Google Lens can help. From street signs to restaurant menus, shop names to signboards, Google Lens lets you search what you see, get things done faster, and understand the world around you -- using just your camera or a photo. In fact, more people use Google Lens in India every month than in any other country. As an example of its popularity, over 3 billion words have been translated in India with Lens in 2020.

Lens is particularly helpful for students wanting to learn about the world. If you’re a parent, you’ll be familiar with your kids asking you questions about homework. About stuff you never thought you’d need to remember, like... quadratic equations.

Google Lens can now help you solve math problems by simply pointing your camera 

Now, right from the Search bar in the Google app, you can use Lens to snap a photo of a math problem and learn how to solve it on your own, in Hindi (or English). To do this, Lens first turns an image of a homework question into a query. Based on the query, we will show step-by-step guides and videos to help explain the problem.

Helping computer systems understand Indian languages at scale

At Google Research India, we have spent a lot of time helping computer systems understand human language. As you can imagine, this is quite an exciting challenge. The new approach we developed in India is called Multilingual Representations for Indian Languages (or ‘MuRIL’). Among the many benefits of this powerful multilingual model that scales across languages, MuRIL also supports transliterated text, such as Hindi written in the Roman script, which was missing from previous models of its kind.

One of the many tasks MuRIL is good at is determining the sentiment of a sentence. For example, “Achha hua account bandh nahi hua” (“good thing the account wasn’t closed”) would previously be interpreted as having a negative meaning, but MuRIL correctly identifies it as a positive statement. Or take the ability to classify a person versus a place: ‘Shirdi ke sai baba’ (Sai Baba of Shirdi) would previously be interpreted as a place, which is wrong, but MuRIL correctly interprets it as a person.

MuRIL currently supports 16 Indian languages as well as English -- the highest coverage for Indian languages of any publicly available model of its kind.

MuRIL is free & open source, available on TensorFlow Hub: https://tfhub.dev/google/MuRIL/1

We are thrilled to announce that we have made MuRIL open source, and it is currently available to download from the TensorFlow Hub, for free. We hope MuRIL will be the next big evolution for Indian language understanding, forming a better foundation for researchers, students, startups, and anyone else interested in building Indian language technologies, and we can’t wait to see the many ways the ecosystem puts it to use.
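As a quick taste of how to try it, here is a minimal TensorFlow 2 sketch that loads MuRIL from the Hub URL above. The three-tensor, BERT-style input and the "pooled_output" key are assumptions based on comparable TF Hub encoders, and the token ids are placeholders (real use requires MuRIL’s own WordPiece tokenizer); check the model page for the exact signature.

    # Minimal sketch: load MuRIL from TensorFlow Hub and embed one sequence.
    import tensorflow as tf
    import tensorflow_hub as hub

    muril = hub.KerasLayer("https://tfhub.dev/google/MuRIL/1",
                           trainable=False)

    # Placeholder ids for a toy 8-token sequence -- NOT real MuRIL tokens;
    # real inputs come from MuRIL's WordPiece vocabulary.
    inputs = {
        "input_word_ids": tf.constant([[101, 7592, 2088, 102, 0, 0, 0, 0]]),
        "input_mask":     tf.constant([[1, 1, 1, 1, 0, 0, 0, 0]]),
        "input_type_ids": tf.zeros((1, 8), dtype=tf.int32),
    }

    outputs = muril(inputs)
    # "pooled_output" (assumed key) is a fixed-size sentence representation.
    print(outputs["pooled_output"].shape)  # e.g. (1, 768)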

We’re sharing this to provide a flavor of the depth of work underway -- and still required -- to really make a universally potent and accessible Internet a reality. That said, the Internet in India is the sum of the work of millions of developers, content creators, news media and online businesses, and it is only when this effort is undertaken at scale by the entire ecosystem that we will help fulfil the truly meaningful promise of the billionth Indian coming online.

Posted by the Google India team