Search your world, any way and anywhere

People have always gathered information in a variety of ways — from talking to others, to observing the world around them, to, of course, searching online. Though typing words into a search box has become second nature for many of us, it’s far from the most natural way to express what we need. For example, if I’m walking down the street and see an interesting tree, I might point to it and ask a friend what species it is and if they know of any nearby nurseries that might sell seeds. If I had tried to express that question to a search engine just a few years ago… well, it would have taken a lot of queries.

But we’ve been working hard to change that. We've already started on a journey to make searching more natural. Whether you're humming the tune that's been stuck in your head, or using Google Lens to search visually (which now happens more than 8 billion times per month!), there are more ways to search and explore information than ever before.

Today, we're redefining Google Search yet again, combining our understanding of all types of information — text, voice, visual and more — so you can find helpful information about whatever you see, hear and experience, in whichever ways are most intuitive to you. We envision a future where you can search your whole world, any way and anywhere.

Find local information with multisearch

The recent launch of multisearch, one of our most significant updates to Search in several years, is a milestone on this path. In the Google app, you can search with images and text at the same time — similar to how you might point at something and ask a friend about it.

Now we’re adding a way to find local information with multisearch, so you can uncover what you need from the millions of local businesses on Google. You’ll be able to use a picture or screenshot and add “near me” to see options for local restaurants or retailers that have the apparel, home goods and food you’re looking for.

An animation of a phone showing a search. A photo is taken of Korean cuisine, then Search scans it for restaurants near the user that serve it.

Later this year, you’ll be able to find local information with multisearch.

For example, say you see a colorful dish online you’d like to try – but you don’t know what’s in it, or what it’s called. When you use multisearch to find it near you, Google scans millions of images and reviews posted on web pages, and from our community of Maps contributors, to find results about nearby spots that offer the dish so you can go enjoy it for yourself.

Local information in multisearch will be available globally later this year in English, and will expand to more languages over time.

Get a more complete picture with scene exploration

Today, when you search visually with Google, we’re able to recognize objects captured in a single frame. But sometimes, you might want information about a whole scene in front of you.

In the future, with an advancement called “scene exploration,” you’ll be able to use multisearch to pan your camera and instantly glean insights about multiple objects in a wider scene.

In the future, “scene exploration” will help you uncover insights across multiple objects in a scene at the same time.

Imagine you’re trying to pick out the perfect candy bar for your friend who's a bit of a chocolate connoisseur. You know they love dark chocolate but dislike nuts, and you want to get them something of quality. With scene exploration, you’ll be able to scan the entire shelf with your phone’s camera and see helpful insights overlaid in front of you. Scene exploration is a powerful breakthrough in our devices’ ability to understand the world the way we do, so you can easily find what you’re looking for. We look forward to bringing it to multisearch in the future.

These are some of the latest steps we’re taking to help you search any way and anywhere. But there’s more we’re doing, beyond Search. AI advancements are helping bridge the physical and digital worlds in Google Maps, and making it possible to interact with the Google Assistant more naturally and intuitively. To ensure information is truly useful for people from all communities, it’s also critical for people to see themselves represented in the results they find. Underpinning all these efforts is our commitment to helping you search safely, with new ways to control your online presence and information.

How we make every day safer with Google

Every day, we work to create a safer internet by making our products secure by default, private by design, and putting you in control of your data. This is how we keep more people safe online than anyone else in the world.

Secure by default in the face of cyber threats

Today, more cyberattacks than ever are happening on a broader, global scale. The targets of these attacks are not just major companies or government agencies, but hospitals, energy providers, banks, schools and individuals. Every day, we keep people’s data safe and secure through industry-leading security technology, automatic, built-in protections, and ongoing vulnerability research and detection.

Our specialized teams work around the clock to combat current and emerging cyber threats. Google’s Threat Analysis Group (TAG), for example, has been tracking critical cyber activity to help inform Ukraine, neighboring countries in Europe, and others of active threat campaigns in relation to the war. We’ve also expanded our support for Project Shield to protect the websites of 200+ Ukrainian government entities, news outlets and more.

Cybersecurity concerns are not limited to war zones — more than 80% of Americans say they’re concerned about the safety and privacy of their online data. That’s why we built one of the world’s most advanced security infrastructures to ensure that our products are secure by default. Now, that infrastructure helps keep people safer at scale:

  • Account Safety Status: We’re adding your safety status to your apps so you never have to worry about the security of your Google Account. These updates will feature a simple yellow alert icon on your profile picture that will flag actions you should take to secure your account.
GIF showing account safety status feature
  • Phishing protections in Google Workspace: We’re now scaling the phishing and malware protections that guard Gmail to Google Docs, Sheets, and Slides.
  • Automatic 2-Step Verification: We’re also continuing our journey towards a more secure, passwordless future with 2-Step Verification (2SV) auto enrollment to help people instantly boost the security of their Google Accounts and reduce their risk of getting phished. This builds on our work last year to auto enroll 150+ million accounts in 2SV and successfully reduce account takeovers.
  • Virtual Cards: As people do more shopping online, keeping payment information safe and secure is critically important. We’re launching virtual cards on Chrome and Android. When you use autofill to enter your payment details at checkout, virtual cards will add an additional layer of security by replacing your actual card number with a distinct, virtual number (see the sketch after this list). This eliminates the need to manually enter card details like the CVV at checkout, and they’re easy to manage at pay.google.com — where you can enable the feature for eligible cards, access your virtual card number, and see recent virtual card transactions. Virtual cards will be rolling out in the US for Visa, American Express, Mastercard and all Capital One cards starting this summer.
GIF of virtual card feature
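The core idea behind virtual cards is tokenization: the merchant receives a stand-in number instead of your real card number. Below is a minimal, hypothetical sketch of that idea in Python; the `VirtualCardVault` class and its methods are invented for illustration and are not Google's or any card network's implementation.

```python
import secrets

class VirtualCardVault:
    """Hypothetical sketch of card tokenization: each merchant gets a
    distinct virtual number that maps back to the real card only inside
    the vault, so the real number is never shared at checkout."""

    def __init__(self):
        self._mappings = {}  # virtual number -> (real number, merchant)

    def issue_virtual_card(self, real_card_number: str, merchant: str) -> str:
        # Generate a distinct 16-digit virtual number for this merchant.
        virtual_number = "".join(str(secrets.randbelow(10)) for _ in range(16))
        self._mappings[virtual_number] = (real_card_number, merchant)
        return virtual_number

    def authorize(self, virtual_number: str, merchant: str) -> bool:
        # The issuer resolves the virtual number back to the real card;
        # the merchant only ever handled the stand-in.
        entry = self._mappings.get(virtual_number)
        return entry is not None and entry[1] == merchant

vault = VirtualCardVault()
token = vault.issue_virtual_card("4111111111111111", merchant="example-store")
print(token, vault.authorize(token, "example-store"))  # distinct number, True
```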

Helpful products that are private by design

We’re committed to designing products that are helpful and protect people’s privacy. Our engineers have pioneered and open-sourced numerous privacy-preserving technologies, including Federated Learning and Differential Privacy, which we made more widely available earlier this year when we started offering our Differential Privacy library in Python as a free open-source tool — reaching almost half of developers worldwide.
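As a rough illustration of the idea behind differential privacy (this sketch uses plain NumPy rather than the open-source library mentioned above, and the epsilon value and example data are arbitrary), adding calibrated Laplace noise to an aggregate statistic lets you publish something useful while limiting how much any one person's data can change the result:

```python
import numpy as np

def private_count(values, epsilon=0.5):
    """Differentially private count: one person joining or leaving the
    dataset changes the true count by at most 1 (the sensitivity), so
    Laplace noise with scale 1/epsilon masks any individual's presence."""
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

visits = ["user_a", "user_b", "user_c", "user_d"]
print(round(private_count(visits)))  # close to 4, but not exactly 4 on every run
```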

Now, we’re expanding this work with the introduction of Protected Computing, a growing toolkit of technologies that transform how, when, and where data is processed to technically ensure the privacy and safety of your data. We do this by:

  • Minimizing your data footprint: Leveraging techniques like edge processing and ephemerality, we shrink the amount of your personally identifiable data.
  • De-identifying data: From blurring and randomizing identifiable signals to adding statistical noise, we use a range of anonymization techniques to strip your identity from your data.
  • Restricting access: Through technologies like end-to-end encryption and secure enclaves, we make it technically impossible for anyone, including Google, to access your sensitive data.

Today, Protected Computing enables helpful features like Smart Reply in Messages by Google and Live Translation on Pixel. And while we’re continuing to innovate new applications across our products, we’re equally focused on using Protected Computing to unlock the potential of data to benefit society more broadly — for example, by enabling even more robust aggregated and anonymized datasets so we can safely do everything from helping cities reduce their carbon footprint to accelerating new medical breakthroughs.

You’re in control of your personal information

Privacy is personal, and safety is a bit different for each individual. That’s why our privacy and security protections are easy to access, monitor and control. Today, we’re introducing two new tools that give you even more control over your data:

  • Results about you in Search: When you’re using the internet, it’s important to have control over how your personal information can be found. With our new tool to accompany updated removal policies, people can more easily request the removal of Google Search results containing their contact details — such as phone numbers, home addresses, and email addresses. This feature will be available in the coming months in the Google App, and you can also access it by clicking the three dots next to individual Google Search results.
"Take control of results about you" GIF
  • My Ad Center: We want to make it even easier for you to control the ads you see. Towards the end of this year, we’ll launch more controls for your ads privacy settings: a way of choosing which brands to see more or less of, and an easier way to choose whether to personalize your ads. My Ad Center gives you even more control over the ads you see on YouTube, Search, and your Discover feed, and you’ll still be able to block and report ads. You’ll be able to choose the types of ads you want to see — such as fitness, vacation rentals or skincare — and learn more about the information we use to show them to you.
GIF of new features in My Ad Center

To learn more about how you're safer with Google every day, visit our Safety Center.

Understanding the world through language

Language is at the heart of how people communicate with each other. It’s also proving to be powerful in advancing AI and building helpful experiences for people worldwide.

From the beginning, we set out to connect words in your search to words on a page so we could make the web’s information more accessible and useful. More than 20 years later, as the web changes and the ways people consume information expand from text to images, videos and more, the one constant is that language remains a surprisingly powerful tool for understanding information.

In recent years, we’ve seen an incredible acceleration in the field of natural language understanding. While our systems still don’t understand language the way people do, they’re increasingly able to spot patterns in information, identify complex concepts and even draw implicit connections between them. We’re even finding that many of our advanced models can understand information across languages or in non-language-based formats like images and videos.

Building the next generation of language models

In 2017, Google researchers developed the Transformer, the neural network architecture that underlies major advancements like MUM and LaMDA. Last year, we shared our thinking on a new architecture called Pathways, which is loosely inspired by the sparse patterns of neural activity in the brain. When you read a blog post like this one, only the parts of your brain needed to process this information fire up — not every single neuron. With Pathways, we’re now able to train AI models to be similarly effective.

Using this system, we recently introduced PaLM, a new model that achieves state-of-the-art performance on challenging language modeling tasks. It can solve complex math word problems, and answer questions in new languages with very little additional training data.

PaLM also shows improvements in understanding and expressing logic, which is significant because it means the model can explain its reasoning in words. Remember your algebra problem sets? It wasn’t enough to just get the right answer — you had to explain how you got there. With “chain of thought” prompting, PaLM can walk through its thought process step by step. This emerging capability helps improve accuracy and our understanding of how a model arrives at answers.

Flow chart for the difference between "Standard Prompting" and "Chain of Thought Prompting"
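To make the difference concrete, here is a minimal sketch of the two prompting styles. The word problem and the exact prompt wording are illustrative only; in practice these strings would be sent to a large language model such as PaLM.

```python
question = (
    "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?"
)

# Standard prompting: the model is shown only question -> answer pairs,
# so it is expected to jump straight to a final answer.
standard_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: the example answer in the prompt walks
# through intermediate steps, nudging the model to reason step by step
# before giving its final answer.
chain_of_thought_prompt = (
    "Q: A juggler has 16 balls. Half of them are golf balls. How many golf balls are there?\n"
    "A: The juggler has 16 balls. Half of 16 is 8. The answer is 8.\n\n"
    f"Q: {question}\nA: Let's think step by step."
)

print(standard_prompt)
print(chain_of_thought_prompt)
```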

Translating the languages of the world

Pathways-related models are enabling us to break down language barriers in a way never before possible. Nowhere is this clearer than in our recently added support for 24 new languages in Google Translate, spoken by over 300 million people worldwide — including the first indigenous languages of the Americas. The amazing part is that the neural model did this using only monolingual text with no translation pairs — which allows us to help communities and languages underrepresented by technology. Machine translation at this level helps the world feel a bit smaller, while allowing us to dream bigger.

Unlocking knowledge about the world across modalities

Today, people consume information through webpages, images, videos, and more. Our advanced language and Pathways-related models are learning to make sense of information stemming from these different modalities through language. With these multimodal capabilities, we’re expanding multisearch in the Google app so you can search more naturally than ever before. As the saying goes — “a picture is worth a thousand words” — it turns out, words are really the key to sharing information about the world.

"Scene exploration" GIF of a store shelf demonstrating multisearch

Improving conversational AI

Despite these advancements, human language continues to be one of the most complex undertakings for computers.

In everyday conversation, we all naturally say “um,” pause to find the right words, or correct ourselves — and yet other people have no trouble understanding what we’re saying. That’s because people can react to conversational cues in as little as 200 milliseconds. Moving our speech model from data centers to run on the device made things faster, but we wanted to push the envelope even more.

Computers aren’t there yet — so we’re introducing improvements to responsiveness on the Assistant with unified neural networks, combining many models into smarter ones capable of understanding more — like when someone pauses but is not finished speaking. Getting closer to the fluidity of real-time conversation is finally possible with Google's Tensor chip, which is custom-engineered to handle on-device machine learning tasks super fast.

We’re also investing in building models that are capable of carrying more natural, sensible and specific conversations. Since introducing LaMDA to the world last year, we’ve made great progress, improving the model in key areas of quality, safety and groundedness — areas where we know conversational AI models can struggle. We’ll be releasing the next iteration, LaMDA 2, as a part of the AI Test Kitchen, which we’ll be opening up to small groups of people gradually. Our goal with AI Test Kitchen is to learn, improve, and innovate responsibly on this technology together. It’s still early days for LaMDA, but we want to continue to make progress and do so responsibly with feedback from the community.

GIF showing LaMDA 2 on device

Responsible development of AI models

While language is a remarkably powerful and versatile tool for understanding the world around us, we also know it comes with its limitations and challenges. In 2018, we published our AI Principles as guidelines to help us avoid bias, test rigorously for safety, design with privacy top of mind and make technology accountable to people. We’re investing in research across disciplines to understand the types of harms language models can cause, and to develop the frameworks and methods to ensure we bring in a diversity of perspectives and make meaningful improvements. We also build and use tools that can help us better understand our models (e.g., identifying how different words affect a prediction, tracing an error back to training data and even measuring correlations within a model). And while we work to improve underlying models, we also test rigorously before and after any kind of product deployment.

We’ve come a long way since introducing the world to the Transformer. We’re proud of the tremendous value that it and its successors have brought not only to everyday Google products like Search and Translate, but also to the breakthroughs they’ve powered in natural language understanding. Our work advancing the future of AI is driven by something as old as time: the power language has to bring people together.

Require email verification to book appointments in Google Calendar

Quick summary 

When using appointment scheduling in Google Calendar, you can now opt to have users verify their email before booking an appointment. When enabled, the user must be signed into a Google account or validate their email address using a PIN code to complete the booking. 


This setting is off by default and is “sticky”. This means that if you turn it on or off, the configuration will be saved for any new appointment scheduling series. We hope this feature helps ensure you’re protected against potentially malicious actors. 


Appointment scheduling user interface with email verification option unchecked.
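For illustration, email verification with a PIN generally follows a flow like the sketch below. This is a generic example, not Google's implementation; the function names, the 6-digit PIN length and the in-memory store are assumptions.

```python
import secrets

_pending = {}  # email -> PIN awaiting confirmation

def start_verification(email: str) -> None:
    """Generate a one-time PIN and send it to the booker's email address."""
    pin = f"{secrets.randbelow(10**6):06d}"
    _pending[email] = pin
    print(f"(pretend we emailed {email} the code {pin})")

def confirm_booking(email: str, entered_pin: str) -> bool:
    """Only complete the appointment booking if the PIN matches."""
    return _pending.pop(email, None) == entered_pin

start_verification("guest@example.com")
# The guest reads the PIN from their inbox and enters it on the booking page:
# confirm_booking("guest@example.com", "<the code>")  -> True completes the booking
```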


Getting started 


Rollout pace 


Availability 

  • Available to Google Workspace Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Standard, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers 
  • Not available to Google Workspace Essentials, Business Starter, Frontline, as well as legacy G Suite Basic and Business customers 

Resources 

Have more natural conversations with Google Assistant

Like any other busy parent, I’m always looking for ways to make daily life a little bit easier. And Google Assistant helps me do that — from giving me cooking instructions as I’m making dinner for my family to sharing how much traffic there is on the way to the office. Assistant allows me to get more done at home and on the go, so I can make time for what really matters.

Every month, over 700 million people around the world get everyday tasks done with their Assistant. Voice has become one of the main ways we communicate with our devices. But we know it can feel unnatural to say “Hey Google” or touch your device every time you want to ask for help. So today, we’re introducing new ways to interact with your Assistant more naturally — just as if you were talking to a friend.

Get the conversation going

Our first new feature, Look and Talk, is beginning to roll out today in the U.S. on Nest Hub Max. Once you opt in, you can simply look at the screen and ask for what you need. From the beginning, we’ve built Look and Talk with your privacy in mind. It’s designed to activate when you opt in and both Face Match and Voice Match recognize it’s you. And video from these interactions is processed entirely on-device, so it isn’t shared with Google or anyone else.

Let’s say I need to fix my leaky kitchen sink. As I walk into the room, I can just look at my Nest Hub Max and say “Show plumbers near me” — without having to say “Hey Google” first.

There’s a lot going on behind the scenes to recognize whether you’re actually making eye contact with your device rather than just giving it a passing glance. In fact, it takes six machine learning models to process more than 100 signals from both the camera and microphone — like proximity, head orientation, gaze direction, lip movement, context awareness and intent classification — all in real time.
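As a purely hypothetical sketch of what fusing such signals could look like (the signal names, weights and threshold below are invented for illustration and do not reflect Google's models), the device would only activate when several cues agree that you are intentionally addressing it:

```python
def looks_like_intentional_engagement(signals: dict) -> bool:
    """Toy fusion of camera/microphone cues. Real systems use several ML
    models over 100+ signals; this just shows the idea of requiring
    multiple cues to agree before activating."""
    score = (
        0.3 * signals["gaze_on_device"]        # 0..1: is the user looking at the screen?
        + 0.2 * signals["head_facing_device"]
        + 0.2 * signals["lip_movement"]        # 0..1: does the user appear to be speaking?
        + 0.2 * signals["proximity"]           # 0..1: close enough to be addressing it?
        + 0.1 * signals["speech_directedness"]
    )
    return score > 0.7

print(looks_like_intentional_engagement({
    "gaze_on_device": 0.9, "head_facing_device": 0.8,
    "lip_movement": 0.9, "proximity": 1.0, "speech_directedness": 0.7,
}))  # True: several cues agree this is a deliberate request
```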

Last year, we announced Real Tone, an effort to improve Google’s camera and imagery products across skin tones. Continuing in that spirit, we’ve tested and refined Look and Talk to work across a range of skin tones so it works well for people with diverse backgrounds. We’ll continue to drive this work forward using the Monk Skin Tone Scale, released today.

GIF of a man baking cookies with a speech bubble saying “Set a timer for 10 minutes.” His Google Nest Hub Max responds with a speech bubble saying “OK, 10 min. And that’s starting…now.”

We’re also expanding quick phrases to Nest Hub Max, which let you skip saying “Hey Google” for some of your most common daily tasks. So as soon as you walk through the door, you can just say “Turn on the hallway lights” or “Set a timer for 10 minutes.” Quick phrases are also designed with privacy in mind. If you opt in, you decide which phrases to enable, and they’ll work when Voice Match recognizes it’s you.

Looking ahead: more natural conversation

In everyday conversation, we all naturally say “um,” correct ourselves and pause occasionally to find the right words. But others can still understand us, because people are active listeners and can react to conversational cues in under 200 milliseconds. We believe your Google Assistant should be able to listen and understand you just as well.

To make this happen, we're building new, more powerful speech and language models that can understand the nuances of human speech — like when someone is pausing, but not finished speaking. And we’re getting closer to the fluidity of real-time conversation with the Tensor chip, which is custom-engineered to handle on-device machine learning tasks super fast. Looking ahead, Assistant will be able to better understand the imperfections of human speech without getting tripped up — including the pauses, “umms” and interruptions — making your interactions feel much closer to a natural conversation.

We're working hard to make Google Assistant the easiest way to get everyday tasks done at home, in the car and on the go. And with these latest improvements, we’re getting closer to a world where you can spend less time thinking about technology — and more time staying present in the moment.

Immersive view coming soon to Maps — plus more updates

Google Maps helps over one billion people navigate and explore. And over the past few years, our investments in AI have supercharged the ability to bring you the most helpful information about the real world, including when a business is open and how crowded your bus is. Today at Google I/O, we announced new ways the latest advancements in AI are transforming Google Maps — helping you explore with an all-new immersive view of the world, find the most fuel-efficient route, and use the magic of Live View in your favorite third-party apps.

A more immersive, intuitive map

Google Maps first launched to help people navigate to their destinations. Since then, it’s evolved to become much more — it’s a handy companion when you need to find the perfect restaurant or get information about a local business. Today — thanks to advances in computer vision and AI that allow us to fuse together billions of Street View and aerial images to create a rich, digital model of the world — we’re introducing a whole new way to explore with Maps. With our new immersive view, you’ll be able to experience what a neighborhood, landmark, restaurant or popular venue is like — and even feel like you’re right there before you ever set foot inside. So whether you’re traveling somewhere new or scoping out hidden local gems, immersive view will help you make the most informed decisions before you go.

Say you’re planning a trip to London and want to figure out the best sights to see and places to eat. With a quick search, you can virtually soar over Westminster to see the neighborhood and stunning architecture of places, like Big Ben, up close. With Google Maps’ helpful information layered on top, you can use the time slider to check out what the area looks like at different times of day and in various weather conditions, and see where the busy spots are. Looking for a spot for lunch? Glide down to street level to explore nearby restaurants and see helpful information, like live busyness and nearby traffic. You can even look inside them to quickly get a feel for the vibe of the place before you book your reservation.

The best part? Immersive view will work on just about any phone and device. It starts rolling out in Los Angeles, London, New York, San Francisco and Tokyo later this year with more cities coming soon.

Immersive view lets you explore and understand the vibe of a place before you go

An update on eco-friendly routing

In addition to making places easier to explore, we want to help you get there more sustainably. We recently launched eco-friendly routing in the U.S. and Canada, which lets you see and choose the most fuel-efficient route when looking for driving directions — helping you save money on gas. Since then, people have used it to travel 86 billion miles, saving more than an estimated half a million metric tons of carbon emissions — equivalent to taking 100,000 cars off the road. We’re on track to double this amount as we expand to more places, like Europe.

Still image of eco-friendly routing on Google Maps

Eco-friendly routing has helped save more than an estimated half a million metric tons of carbon emissions
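A quick back-of-the-envelope check of the equivalence above, using the commonly cited figure of roughly 4.6 metric tons of CO2 per typical passenger car per year (the calculation below is illustrative, not an official methodology):

```python
co2_saved_metric_tons = 500_000       # "more than an estimated half a million metric tons"
cars_equivalent = 100_000             # "equivalent to taking 100,000 cars off the road"
miles_driven_with_eco_routing = 86e9  # "86 billion miles"

# Implied annual emissions per car, vs. the ~4.6 t/year often cited for a typical passenger car.
print(co2_saved_metric_tons / cars_equivalent)                       # 5.0 metric tons per car
# Average savings per mile driven with eco-friendly routing, in grams of CO2.
print(co2_saved_metric_tons * 1e6 / miles_driven_with_eco_routing)   # ~5.8 g per mile
```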

The magic of Live View — now in your favorite apps

Live View helps you find your way when walking around, using AR to display helpful arrows and directions right on top of your world. It's especially helpful when navigating tricky indoor areas, like airports, malls and train stations. Thanks to our AI-based technology called global localization, Google Maps can point you where you need to go in a matter of seconds. As part of our efforts to bring the helpfulness of Google Maps to more places, we’re now making this technology available to developers at no cost with the new ARCore Geospatial API.

Developers are already using the API to make apps that are even more useful and provide an easy way to interact with both the digital and physical worlds at once. Shared electric vehicle company Lime is piloting the API in London, Paris, Tel Aviv, Madrid, San Diego, and Bordeaux to help riders park their e-bikes and e-scooters responsibly and out of pedestrians’ right of way. Telstra and Accenture are using it to help sports fans and concertgoers find their seats, concession stands and restrooms at Marvel Stadium in Melbourne. DOCOMO and Curiosity are building a new game that lets you fend off virtual dragons with robot companions in front of iconic Tokyo landmarks, like the Tokyo Tower. The new Geospatial API is available now to ARCore developers, wherever Street View is available.

DOCOMO and Curiosity game showing an AR dragon, alien and spaceship interacting on top of a real-world image, powered by the ARCore Geospatial API.

Live View technology is now available to ARCore developers around the world

AI will continue to play a critical role in making Google Maps the most comprehensive and helpful map possible for people everywhere.

7 ways AI is making Google Workspace better

Hybrid work life is…well, one of our many “new normals.” Over the last two years, many of us have gone through various versions of what the office looks like, and these changes have been a significant motivation behind some of our recent updates to Google Workspace.

With some people in the office and others at home, the amount of emails, chats, and meetings in our inboxes and on our calendars has increased — so we’ve been working on finding more ways to use machine learning to fight information overload and keep you feeling productive. Here are seven upcoming features — most made possible by AI — on their way to Google Workspace:

1. Portrait restore uses Google AI technology to improve video quality, so even if you’re using Google Meet in a dimly lit room using an old webcam — or maybe you’ve got a bad WiFi connection — your video will be automatically enhanced.
Animated GIF showing a person in a Google Meet call who is backlit, and their image is very dark in the call. Portrait restore is applied, and their face is then better lit and more visible.

Portrait restore improves video quality using Google AI.

2. We’re also introducing portrait light: This feature uses machine learning to simulate studio-quality lighting in your video feed, and you can even adjust the lighting position and brightness.

Animated GIF showing a person in a Google Meet call. The cursor is moving around selecting areas where it can apply portrait lighting, brightening up various areas of the image.

Portrait light brings studio-quality lighting effects to Google Meet.

3. We’re adding de-reverberation, which filters out echoes in spaces with hard surfaces, so it sounds like you’re in a mic-ed up conference room…even if you’re in your basement.

4. Live sharing will sync content that’s being shared in a Google Meet call and allow participants to control the media. Whether you’re at the office or at home, and whether you’re the person sharing the content or viewing it, everyone will see and hear what’s going on at the same time. Our partners and developers can use our live sharing APIs today to start integrating Meet into their apps.

5. Earlier this year, we introduced automated built-in summaries for Google Docs. Now we’re extending auto-summaries to Spaces so you get a helpful digest of conversations you missed.

An animated GIF demonstrating how summaries in Spaces works.

Summaries in Spaces help you catch up quickly on conversations.

6. Later this year, we're bringing automated transcriptions of Google Meet meetings to Google Workspace, so people can catch up quickly on meetings they couldn't attend.

7. Many of the security protections that we use for Gmail are coming to Google Slides, Docs and Sheets. For example, if a Doc you’re about to open contains phishing links or malware, you’ll get an automatic alert.

For a deeper dive into all the new AI capabilities coming to Google Workspace, head over to the Cloud blog for more details.

Google Translate learns 24 new languages

For years, Google Translate has helped break down language barriers and connect communities all over the world. And we want to make this possible for even more people — especially those whose languages aren’t represented in most technology. So today we’ve added 24 languages to Translate, which now supports a total of 133 languages used around the globe.

Over 300 million people speak these newly added languages — like Mizo, used by around 800,000 people in the far northeast of India, and Lingala, used by over 45 million people across Central Africa. As part of this update, Indigenous languages of the Americas (Quechua, Guarani and Aymara) and an English dialect (Sierra Leonean Krio) have also been added to Translate for the first time.

The Google Translate bar translates the phrase "Our mission: to enable everyone, everywhere to understand the world and express themselves across languages" into different languages.

Translate's mission translated into some of our newly added languages

Here’s a complete list of the new languages now available in Google Translate:

  • Assamese, used by about 25 million people in Northeast India
  • Aymara, used by about two million people in Bolivia, Chile and Peru
  • Bambara, used by about 14 million people in Mali
  • Bhojpuri, used by about 50 million people in northern India, Nepal and Fiji
  • Dhivehi, used by about 300,000 people in the Maldives
  • Dogri, used by about three million people in northern India
  • Ewe, used by about seven million people in Ghana and Togo
  • Guarani, used by about seven million people in Paraguay, Bolivia, Argentina and Brazil
  • Ilocano, used by about 10 million people in the northern Philippines
  • Konkani, used by about two million people in Central India
  • Krio, used by about four million people in Sierra Leone
  • Kurdish (Sorani), used by about eight million people, mostly in Iraq
  • Lingala, used by about 45 million people in the Democratic Republic of the Congo, Republic of the Congo, Central African Republic, Angola and the Republic of South Sudan
  • Luganda, used by about 20 million people in Uganda and Rwanda
  • Maithili, used by about 34 million people in northern India
  • Meiteilon (Manipuri), used by about two million people in Northeast India
  • Mizo, used by about 830,000 people in Northeast India
  • Oromo, used by about 37 million people in Ethiopia and Kenya
  • Quechua, used by about 10 million people in Peru, Bolivia, Ecuador and surrounding countries
  • Sanskrit, used by about 20,000 people in India
  • Sepedi, used by about 14 million people in South Africa
  • Tigrinya, used by about eight million people in Eritrea and Ethiopia
  • Tsonga, used by about seven million people in Eswatini, Mozambique, South Africa and Zimbabwe
  • Twi, used by about 11 million people in Ghana

This is also a technical milestone for Google Translate. These are the first languages we’ve added using Zero-Shot Machine Translation, where a machine learning model only sees monolingual text — meaning, it learns to translate into another language without ever seeing an example. While this technology is impressive, it isn't perfect. And we’ll keep improving these models to deliver the same experience you’re used to with a Spanish or German translation, for example. If you want to dig into the technical details, check out our Google AI blog post and research paper.
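As a simplified sketch of how training data can be mixed for this kind of setup (an illustration of the general idea, not the exact recipe behind Translate; the language tags and masking scheme are placeholders): high-resource language pairs contribute supervised translation examples, while a language with only monolingual text contributes self-supervised fill-in-the-blank examples, and a single shared multilingual model learns from both.

```python
import random

def translation_example(source, target, target_lang):
    # Supervised example from parallel data: translate source into target_lang.
    return {"input": f"<2{target_lang}> {source}", "output": target}

def denoising_example(sentence, lang):
    # Self-supervised example from monolingual text only: mask a word and
    # train the model to reconstruct the original sentence in that language.
    words = sentence.split()
    words[random.randrange(len(words))] = "<mask>"
    return {"input": f"<2{lang}> {' '.join(words)}", "output": sentence}

# A mixed training batch: the model sees translation for high-resource pairs
# and only reconstruction for the low-resource language, yet shares one set
# of parameters across all of them.
batch = [
    translation_example("How are you?", "¿Cómo estás?", "es"),
    denoising_example("a sentence written in a low-resource language", "xx"),
]
print(batch)
```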

We’re grateful to the many native speakers, professors and linguists who worked with us on this latest update and kept us inspired with their passion and enthusiasm. If you want to help us support your language in a future update, contribute evaluations or translations through Translate Contribute.

Living in a multi-device world with Android

Android has grown into the most popular OS in the world, delivering access, connectivity and information to people everywhere on their smartphones. There are over three billion active monthly Android devices around the world, and in the last year alone, more than a billion new Android phones have been activated. While the phone is still the most popular form of computing, people are adding more connected technologies to their lives like TVs, cars, watches and more.

As we build for a multi-device future, we’re introducing new ways to get more done. Whether it's your phone or your other devices, our updates help them all work better together.

Do more with your Android phone

With Android 13, we’re making updates to privacy and security, personalization and large screen devices. You’ve already seen a preview of this in the Developer Previews and first beta. Across the Android ecosystem, we’re also bringing more ways to keep your conversations private and secure, store your digital identity and get you help in the physical world.

We have been working with carriers and phone makers around the world to upgrade SMS text messaging to a new standard called Rich Communication Services (RCS). With RCS, you can share high-quality photos, see typing indicators, message over Wi-Fi and get a better group messaging experience.

This is a huge step forward for the mobile ecosystem and we are really excited about the progress! In fact, Google's Messages app already has half a billion monthly active users with RCS and is growing fast. And Messages already offers end-to-end encryption for your one-to-one conversations. Later this year, we’ll also bring end-to-end encryption to your group conversations in open beta.

Three messages are shown from a group message between friends who are excited for a baking class they will take together.

Your phone can also help provide secure access to your everyday essentials. Recently, we’ve witnessed the rapid digitization of things like car keys and vaccine records. The new Google Wallet on Android will standardize the way you save and access these important items, plus things like payment cards, transit and event tickets, boarding and loyalty passes and student IDs. We’ll be launching Google Wallet on Wear OS, starting with support for payment cards.

Soon, you’ll be able to save and access hotel keys and office badges from your Android phone. And we know you can’t leave home without your ID, so we're collaborating with states across the U.S. and international partners to bring digital driver's licenses and IDs to Google Wallet later this year.

We’re developing smooth integrations with other Google apps and services while providing granular privacy controls. For example, when you add a transit card to Wallet, your card and balance will automatically show up in Google Maps when you search for directions. If your balance is running low, you can quickly tap and add fare before you arrive at the station.

A user looks at their phone for directions from the San Francisco airport on Google Maps. Since they are looking for public transportation routes, they are prompted on their phone to add fare to their Clipper card, a transit card used throughout the San Francisco Bay Area. With a tap, they add their desired amount of money to the card.

Beyond helping keep your communication and digital identities safe, your devices can be even more essential in critical moments like medical emergencies or natural disasters. In these times, chances are you’ll have either your phone or watch on you. We built critical infrastructure into Android like Emergency Location Services (ELS) to help first responders locate you when you call for help. We recently launched ELS in Bulgaria, Paraguay, Spain and Saudi Arabia, and it is now available to more than one billion people worldwide.

Early Earthquake Warnings are already in place in 25 countries, and this year we’ll launch them in many of the remaining high-risk regions around the world. This year, we’ll also start working with partners to bring Emergency SOS to Wear OS, so you can instantly contact a trusted friend or family member or call emergency services from your watch.

A watch screen depicts the Emergency SOS feature. The watch face has an outline of a red circle that counts down the time before an emergency call is made directly from the watch. In this example 911 is called.

Apps and services that extend beyond the phone

Along with your phone, two of the most important and personal devices in our lives are watches and tablets.

With the launch of our unified platform with Samsung last year, there are now over three times as many active Wear OS devices as there were last year. Later this year, you’ll start to see more devices powered with Wear OS from Samsung, Fossil Group, Montblanc, Mobvoi and others. And for the first time ever, Google Assistant is coming to Samsung Galaxy watches, starting soon with the Watch4 series. The Google Assistant experience for Wear OS has been improved with faster, more natural voice interactions, so you can access useful features like voice-controlled navigation or setting reminders.

We’re also bringing more of your favorite apps to Wear OS. Check out experiences built for your wrist by Spotify, adidas Running, LINE and KakaoTalk. And you’ll see many more from apps like SoundCloud and Deezer later this year.

Various app logos including Spotify, adidas Running, LINE, and more are spread out in a circle outside of a watch.

We’re investing in tablets in a big way and have made updates to the interface in 12L and Android 13 that optimize information for the larger screen. We’ve also introduced new features that help you multitask — for example, you can tap the toolbar to view the app tray, then drag and drop apps to use them side by side.

To support these system-level updates, we’ve also been working to improve the app experiences on Android tablets. Over the next few weeks, we’ll be updating more than 20 Google apps to take full advantage of the extra space, including YouTube Music, Google Maps, Messages and more.

A collage of colorful tablets are shown, each tablet with a different app running on its screen such as Google Translate, Google Maps, Google TV, Google Photos, Gmail, and more. The Android logo is in the center of the image with the text “20+ optimized Google tablet apps” written in large lettering.

We’re working with other apps to revamp their experiences this year as well, including TikTok, Zoom, Facebook and many others. You’ll soon be able to easily search for all tablet-optimized apps thanks to updates to Google Play.

The Google Play app is open on a tablet. Apps like TikTok, Instagram, WhatsApp, and Zoom are listed under the “Top Free” section of the app charts, each with an Install button beside it.

Simple ways for your devices to work better together

Getting things done can be much easier if your connected devices all communicate and work together. The openness and flexibility of Android powers phones, watches, tablets, TVs and cars — and it works well with devices like headphones, speakers, laptops and more. Across all these devices, we’re building on our efforts and introducing even more simple and helpful features to move throughout your day.

With Chromecast built-in, you can watch videos, listen to music and more on the device that makes sense depending on where you are and what you’re doing. This means that after your daily commute, you can pick up the movie you were watching on your phone and easily finish it on your TV at home. To help you stay entertained, we’re working to extend casting capabilities to new partners and products, such as Chromebook, or even your car.

An interior of a car with YouTube video being cast from a phone to the in-car display.

Your media should just move with you, so you can automatically switch audio from your headphones while watching a movie on your tablet to your phone when answering an incoming call.

And when you need to get more done across devices, you’ll soon be able to copy a URL or picture from your phone, and paste it on your tablet.

This graphic begins with a user copying an image from the web on their phone. They select the Nearby Share icon and the image from the phone is now in the clipboard of their tablet. The user then clicks paste within a slide in Google Slides on their tablet and the image from the phone appears.

Earlier this year, we previewed multi-device experiences, like expanding Phone Hub on your Chromebook to allow you to access all your phone’s messaging apps. By streaming from your phone to the laptop, you’ll be able to send and reply to messages, view your conversation history and launch your messaging apps from your laptop. We’re also making it easier to set up and pair your devices with the expansion of Fast Pair support to more devices, including built-in support for Matter on Android.

Whether Android brings new possibilities to your phone or the many devices in your life, we’re looking forward to helping you in this multi-device world.