Tag Archives: developers

100 things we announced at I/O

And that’s a wrap on I/O 2022! We returned to our live keynote event, packed in more than a few product surprises, showed off some experimental projects and… actually, let’s just dive right in. Here are 100 things we announced at I/O 2022.

Gear news galore

Pixel products grouped together on a white background. Products include Pixel Bud Pro, Google Pixel Watch and Pixel phones.
  1. Let’s start at the very beginning — with some previews. We showed off a first look at the upcoming Pixel 7 and Pixel 7 Pro, powered by the next version of Google Tensor.
  2. We showed off an early look at Google Pixel Watch! It’s our first-ever all-Google built watch: 80% recycled stainless steel, Wear OS, Fitbit integration, Assistant access…and it’s coming this fall.
  3. Fitbit is coming to Google Pixel Watch. More experiences built for your wrist are coming later this year from apps like Deezer and SoundCloud.
  4. Later this year, you’ll start to see more devices powered with Wear OS from Samsung, Fossil Group, Montblanc and others.
  5. Google Assistant is coming soon to the Samsung Galaxy Watch 4 series.
  6. The new Pixel Buds Pro use Active Noise Cancellation (ANC), a feature powered by a custom 6-core audio chip and Google algorithms to put the focus on your music — and nothing else.
  7. Silent Seal™ helps Pixel Buds Pro adapt to the shape of your ear, for better sound. Later this year, Pixel Buds Pro will also support spatial audio to put you in the middle of the action when watching a movie or TV show with a compatible device and supported content.
  8. They also come in new colors: Charcoal, Fog, Coral and Lemongrass. Ahem, multiple colors — the Pixel Buds Pro have a two-tone design.
  9. With Multipoint connectivity, Pixel Buds Pro can automatically switch between your previously paired Bluetooth devices — including compatible laptops, tablets, TVs, and Android and iOS phones.
  10. Plus, the earbuds and their case are water-resistant.
  11. …And you can preorder them on July 21.
  12. Then there’s the brand new Pixel 6a, which comes with the full Material You experience.
  13. The new Pixel 6a has the same Google Tensor processor and hardware security architecture with Titan M2 as the Pixel 6 and Pixel 6 Pro.
  14. It also has dual rear cameras — main and ultrawide lenses.
  15. You’ve got three Pixel 6a color options: Chalk, Charcoal and Sage. The options keep going if you pair it with one of the new translucent cases.
  16. It costs $449 and will be available for pre-order on July 21.
  17. We also showed off an early look at the upcoming Pixel tablet, which we’re aiming to make available next year.

Android updates

18. In the last year, over 1 billion new Android phones have been activated.

19. You’ll no longer need to grant location access to apps to enable Wi-Fi scanning in Android 13.

20. Android 13 will automatically delete your clipboard history after a short time to preemptively block apps from seeing old copied information.

21. Android 13’s new photo picker lets you select the exact photos or videos you want to grant access to, without needing to share your entire media library with an app.

22. You’ll soon be able to copy a URL or picture from your phone, and paste it on your tablet in Android 13.

23. Android 13 allows you to select different language preferences for different apps.

24. The latest Android OS will also require apps to get your permission before sending you notifications.

25. And later this year, you’ll see a new Security & Privacy settings page with Android 13.

26. Google’s Messages app already has half a billion monthly active users with RCS, a new standard that enables you to share high-quality photos, see typing indicators, message over Wi-Fi and get a better group messaging experience.

27. Messages is getting a public beta of end-to-end encryption for group conversations.

28. Early earthquake warnings are coming to more high-risk regions around the world.

29. On select headphones, you’ll soon be able to automatically switch audio between the devices you’re listening on with Android.

30. Stream and use messaging apps from your Android phone to laptop with Chromebook’s Phone Hub, and you won’t even have to install any apps.

31. Google Wallet is here! It’s a new home for things like your student ID, transit tickets, vaccine card, credit cards and debit cards.

32. You can even use Google Wallet to hold your Walt Disney World park pass.

33. Google Wallet is coming to Wear OS, too.

34. Improved app experiences are coming for Android tablets: YouTube Music, Google Maps and Messages will take advantage of the extra screen space, and more apps coming soon include TikTok, Zoom, Facebook, Canva and many others.

Developer deep dive

Illustration depicting a smart home, with lights, thermostat, television, screen and mobile device.

35. The Google Home and Google Home Mobile software developer kit (SDK) for Matter will be launching in June as developer previews.

36. The Google Home SDK introduces Intelligence Clusters, which make intelligence features, like Home and Away, available to developers.

37. Developers can even create QR codes for Google Wallet to create their own passes for any use case they’d like.

38. Matter support is coming to the Nest Thermostat.

39. The Google Home Developer Center has lots of updates to check out.

40. There’s now built-in support for Matter on Android, so you can use Fast Pair to quickly connect Matter-enabled smart home devices to your network, Google Home and other accompanying apps in just a few taps.

41. The ARCore Geospatial API makes Google Maps’ Live View technology available to developers for free. Companies like Lime are using it to help people find parking spots for their scooters and save time.

42. DOCOMO and Curiosity are using the ARCore Geospatial API to build a new game that lets you fend off virtual dragons with robot companions in front of iconic Tokyo landmarks, like the Tokyo Tower.

43. AlloyDB is a new, fully-managed PostgreSQL-compatible database service designed to help developers manage enterprise database workloads — in our performance tests, it’s more than four times faster for transactional workloads and up to 100 times faster for analytical queries than standard PostgreSQL.

44. AlloyDB uses the same infrastructure building blocks that power large-scale products like YouTube, Search, Maps and Gmail.

45. Google Cloud’s machine learning cluster powered by Cloud TPU v4 Pods is super powerful — in fact, we believe it’s the world’s largest publicly available machine learning hub in terms of compute power…

46. …and it operates at 90% carbon-free energy.

47. We also announced a preview of Cloud Run jobs, which reduces the time developers spend running administrative tasks like database migration or batch data transformation.

48. We announced Flutter 3.0, which will enable developers to publish production-ready apps to six platforms at once, from one code base (Android, iOS, web, Windows, macOS and Linux).

49. To help developers build beautiful Wear apps, we announced the beta of Jetpack Compose for Wear OS.

50. We’re making it faster and easier for developers to build modern, high-quality apps with new Live Edit features in Android Studio.

Help for the home

GIF of a man baking cookies with a speech bubble saying “Set a timer for 10 minutes.” His Google Nest Hub Max responds with a speech bubble saying “OK, 10 min. And that’s starting…now.”

51. Many Nest Devices will become Matter controllers, which means they can serve as central hubs to control Matter-enabled devices both locally and remotely from the Google Home app.

52. Works with Hey Google is now Works with Google Home.

53. The new home.google is your hub for finding out everything you can do with your Google Home system.

54. Nest Hub Max is getting Look and Talk, where you can simply look at your device to ask a question without saying “Hey Google.”

55. Look and Talk works when Voice Match and Face Match recognize that it’s you.

56. And video from Look and Talk interactions is processed entirely on-device, so it isn’t shared with Google or anyone else.

57. Look and Talk is opt-in. Oh, and FYI, you can still say “Hey Google” whenever you want!

58. Want to learn more about it? Just say “Hey Google, what is Look and Talk?” or “Hey Google, how do you enable Look and Talk?”

59. We’re also expanding quick phrases to Nest Hub Max, so you can skip saying “Hey Google” for some of your most common daily tasks – things like “set a timer for 10 minutes” or “turn off the living room lights.”

60. You can choose the quick phrases you want to turn on.

61. Your quick phrases will work when Voice Match recognizes it’s you.

62. And looking ahead, Assistant will be able to better understand the imperfections of human speech without getting tripped up — including the pauses, “umms” and interruptions — making your interactions feel much closer to a natural conversation.

Taking care of business

Animated GIF demonstrating portrait light, bringing studio-quality lighting effects to Google Meet.

63. Google Meet video calls will now look better thanks to portrait restore and portrait light, which use AI and machine learning to improve quality and lighting on video calls.

64. Later this year we’re scaling the phishing and malware protections that guard Gmail to Google Docs, Sheets and Slides.

65. Live sharing is coming to Google Meet, meaning users will be able to share controls and interact directly within the meeting, whether it’s watching an icebreaker video from YouTube or sharing a playlist.

66. Automated built-in summaries are coming to Spaces so you can get a helpful digest of conversations to catch up quickly.

67. De-reverberation for Google Meet will filter out echoes in spaces with hard surfaces, giving you conference-room audio quality whether you’re in a basement, a kitchen, or a big empty room.

68. Later this year, we're bringing automated transcriptions of Google Meet meetings to Google Workspace, so people can catch up quickly on meetings they couldn't attend.

Apps for on-the-go

A picture of London in immersive view.

69. Google Wallet users will be able to check the balance of transit passes and top up within Google Maps.

70. Google Translate added 24 new languages.

71. As part of this update, Indigenous languages of the Americas (Quechua, Guarani and Aymara) and an English dialect (Sierra Leonean Krio) have also been added to Translate for the first time.

72. Google Translate now supports a total of 133 languages used around the globe.

73. These are the first languages we’ve added using Zero-resource Machine Translation, where a machine learning model only sees monolingual text — meaning, it learns to translate into another language without ever seeing an example.

74. Google Maps’ new immersive view is a whole new way to explore so you can see what an area truly looks and feels like.

75. Immersive view will work on nearly any phone or tablet; you don’t need the fanciest or newest device.

76. Immersive view will first be available in L.A., London, New York, San Francisco and Tokyo — with more places coming soon.

77. Last year we launched eco-friendly routing in the U.S. and Canada. Since then, people have used it to travel 86 billion miles, which saved more than half a million metric tons of carbon emissions — that’s like taking 100,000 cars off the road.

78. And we’re expanding eco-friendly routing to more places, like Europe.

All in on AI

Ten circles in a row, ranging from dark to light.

The 10 shades of the Monk Skin Tone Scale.

79. A team at Google Research partnered with Harvard’s Dr. Ellis Monk to openly release the Monk Skin Tone Scale, a new tool for measuring skin tone that can help build more inclusive products.

80. Google Search will use the Monk Skin Tone Scale to make it easier to find more relevant results — for instance, if you search for “bridal makeup,” you’ll see an option to filter by skin tone so you can refine to results that meet your needs.

81. Oh, and the Monk Skin Tone Scale was used to evaluate a new set of Real Tone filters for Photos that are designed to work well across skin tones. These filters were created and tested in partnership with artists like Kennedi Carter and Joshua Kissi.

82. We’re releasing LaMDA 2 as a part of the AI Test Kitchen, a new space to learn, improve, and innovate responsibly on this technology together.

83. PaLM is a new language model that can solve complex math word problems, and even explain its thought process, step-by-step.

84. Nest Hub Max’s new Look and Talk feature uses six machine learning models to process more than 100 signals in real time to detect whether you’re intending to make eye contact with your device to talk to Google Assistant, rather than just giving it a passing glance.

85. We recently launched multisearch in the Google app, which lets you search by taking a photo and asking a question at the same time. At I/O, we announced that later this year, you'll be able to take a picture or screenshot and add "near me" to get local results from restaurants, retailers and more.

86. We introduced you to an advancement called “scene exploration,” where in the future, you’ll be able to use multisearch to pan your camera and instantly glean insights about multiple objects in a wider scene.

Privacy, security and information

A GIF that shows someone’s Google account with a yellow alert icon, flagging recommended actions they should take to secure their account.

87. We’ve expanded our support for Project Shield to protect the websites of 200+ Ukrainian government agencies, news outlets and more.

88. Account Safety Status will add a simple yellow alert icon to flag actions you should take to secure your Google Account.

89. Phishing protections in Google Workspace are expanding to Docs, Slides and Sheets.

90. My Ad Center is now giving you even more control over the ads you see on YouTube, Search, and your Discover feed.

91. Virtual cards are coming to Chrome and Android this summer, adding an additional layer of security and eliminating the need to enter certain card details at checkout.

92. In the coming months, you’ll be able to request removal of Google Search results that have your contact info with an easy-to-use tool.

93. We introduced Protected Computing, a toolkit that helps minimize your data footprint, de-identify your data and restrict access to your sensitive data.

94. On-device encryption is now available for Google Password Manager.

95. We’re continuing to auto-enroll people in 2-Step Verification to reduce phishing risks.

What else?!

Illustration of a black one-story building with large windows. Inside are people walking around wooden tables and white walls containing Google hardware products. There is a Google Store logo on top of the building.

96. A new Google Store is opening in Williamsburg.

97. This is our first “neighborhood store” — it’s in a more intimate setting that highlights the community. You can find it at 134 N 6th St., opening on June 16.

98. The store will feature an installation by Brooklyn-based artist Olalekan Jeyifous.

99. Visitors there can picture everyday life with Google products through interactive displays that show how our hardware and services work together, and even get hands-on help with devices from Google experts.

100. We showed a prototype of what happens when we bring technologies like transcription and translation to your line of sight.

Google I/O 2022: Advancing knowledge and computing

[TL;DR]

Nearly 24 years ago, Google started with two graduate students, one product, and a big mission: to organize the world’s information and make it universally accessible and useful. In the decades since, we’ve been developing our technology to deliver on that mission.

The progress we've made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all. And once a year — on my favorite day of the year :) — we share an update on how it’s going at Google I/O.

Today, I talked about how we’re advancing two fundamental aspects of our mission — knowledge and computing — to create products that are built to help. It’s exciting to build these products; it’s even more exciting to see what people do with them.

Thank you to everyone who helps us do this work, and most especially our Googlers. We are grateful for the opportunity.

- Sundar


Editor’s note: Below is an edited transcript of Sundar Pichai's keynote address during the opening of today's Google I/O Developers Conference.

Hi, everyone, and welcome. Actually, let’s make that welcome back! It’s great to return to Shoreline Amphitheatre after three years away. To the thousands of developers, partners and Googlers here with us, it’s great to see all of you. And to the millions more joining us around the world — we’re so happy you’re here, too.

Last year, we shared how new breakthroughs in some of the most technically challenging areas of computer science are making Google products more helpful in the moments that matter. All this work is in service of our timeless mission: to organize the world's information and make it universally accessible and useful.

I'm excited to show you how we’re driving that mission forward in two key ways: by deepening our understanding of information so that we can turn it into knowledge; and advancing the state of computing, so that knowledge is easier to access, no matter who or where you are.

Today, you'll see how progress on these two parts of our mission ensures Google products are built to help. I’ll start with a few quick examples. Throughout the pandemic, Google has focused on delivering accurate information to help people stay healthy. Over the last year, people used Google Search and Maps to find where they could get a COVID vaccine nearly two billion times.

A visualization of Google’s flood forecasting system, with three 3D maps stacked on top of one another, showing landscapes and weather patterns in green and brown colors. The maps are floating against a gray background.

Google’s flood forecasting technology sent flood alerts to 23 million people in India and Bangladesh last year.

We’ve also expanded our flood forecasting technology to help people stay safe in the face of natural disasters. During last year’s monsoon season, our flood alerts notified more than 23 million people in India and Bangladesh. And we estimate this supported the timely evacuation of hundreds of thousands of people.

In Ukraine, we worked with the government to rapidly deploy air raid alerts. To date, we’ve delivered hundreds of millions of alerts to help people get to safety. In March I was in Poland, where millions of Ukrainians have sought refuge. Warsaw’s population has increased by nearly 20% as families host refugees in their homes, and schools welcome thousands of new students. Nearly every Google employee I spoke with there was hosting someone.

Adding 24 more languages to Google Translate

In countries around the world, Google Translate has been a crucial tool for newcomers and residents trying to communicate with one another. We’re proud of how it’s helping Ukrainians find a bit of hope and connection until they are able to return home again.

Two boxes, one showing a question in English — “What’s the weather like today?” — the other showing its translation in Quechua. There is a microphone symbol below the English question and a loudspeaker symbol below the Quechua answer.

With machine learning advances, we're able to add languages like Quechua to Google Translate.

Real-time translation is a testament to how knowledge and computing come together to make people's lives better. More people are using Google Translate than ever before, but we still have work to do to make it universally accessible. There’s a long tail of languages that are underrepresented on the web today, and translating them is a hard technical problem. That’s because translation models are usually trained with bilingual text — for example, the same phrase in both English and Spanish. However, there's not enough publicly available bilingual text for every language.

So with advances in machine learning, we’ve developed a monolingual approach where the model learns to translate a new language without ever seeing a direct translation of it. By collaborating with native speakers and institutions, we found these translations were of sufficient quality to be useful, and we'll continue to improve them.

A list of the 24 new languages Google Translate now has available.

We’re adding 24 new languages to Google Translate.

Today, I’m excited to announce that we’re adding 24 new languages to Google Translate, including the first indigenous languages of the Americas. Together, these languages are spoken by more than 300 million people. Breakthroughs like this are powering a radical shift in how we access knowledge and use computers.

Taking Google Maps to the next level

So much of what’s knowable about our world goes beyond language — it’s in the physical and geospatial information all around us. For more than 15 years, Google Maps has worked to create rich and useful representations of this information to help us navigate. Advances in AI are taking this work to the next level, whether it’s expanding our coverage to remote areas, or reimagining how to explore the world in more intuitive ways.

An overhead image of a map of a dense urban area, showing gray roads cutting through clusters of buildings outlined in blue.

Advances in AI are helping to map remote and rural areas.

Around the world, we’ve mapped around 1.6 billion buildings and over 60 million kilometers of roads to date. Some remote and rural areas have previously been difficult to map, due to scarcity of high-quality imagery and distinct building types and terrain. To address this, we’re using computer vision and neural networks to detect buildings at scale from satellite images. As a result, we have increased the number of buildings on Google Maps in Africa by 5X since July 2020, from 60 million to nearly 300 million.

We’ve also doubled the number of buildings mapped in India and Indonesia this year. Globally, over 20% of the buildings on Google Maps have been detected using these new techniques. We’ve gone a step further, and made the dataset of buildings in Africa publicly available. International organizations like the United Nations and the World Bank are already using it to better understand population density, and to provide support and emergency assistance.

Immersive view in Google Maps fuses together aerial and street level images.

We’re also bringing new capabilities into Maps. Using advances in 3D mapping and machine learning, we’re fusing billions of aerial and street level images to create a new, high-fidelity representation of a place. These breakthrough technologies are coming together to power a new experience in Maps called immersive view: it allows you to explore a place like never before.

Let’s go to London and take a look. Say you’re planning to visit Westminster with your family. You can get into this immersive view straight from Maps on your phone, and you can pan around the sights… here’s Westminster Abbey. If you’re thinking of heading to Big Ben, you can check if there's traffic, how busy it is, and even see the weather forecast. And if you’re looking to grab a bite during your visit, you can check out restaurants nearby and get a glimpse inside.

What's amazing is that this isn't a drone flying in the restaurant — we use neural rendering to create the experience from images alone. And Google Cloud Immersive Stream allows this experience to run on just about any smartphone. This feature will start rolling out in Google Maps for select cities globally later this year.

Another big improvement to Maps is eco-friendly routing. Launched last year, it shows you the most fuel-efficient route, giving you the choice to save money on gas and reduce carbon emissions. Eco-friendly routes have already rolled out in the U.S. and Canada — and people have used them to travel approximately 86 billion miles, helping save an estimated half million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road.

Still image of eco-friendly routing on Google Maps — a 53-minute driving route in Berlin is pictured, with text below the map showing it will add three minutes but save 18% more fuel.

Eco-friendly routes will expand to Europe later this year.

I’m happy to share that we’re expanding this feature to more places, including Europe later this year. In this Berlin example, you could reduce your fuel consumption by 18% taking a route that’s just three minutes slower. These small decisions have a big impact at scale. With the expansion into Europe and beyond, we estimate carbon emission savings will double by the end of the year.

And we’ve added a similar feature to Google Flights. When you search for flights between two cities, we also show you carbon emission estimates alongside other information like price and schedule, making it easy to choose a greener option. These eco-friendly features in Maps and Flights are part of our goal to empower 1 billion people to make more sustainable choices through our products, and we’re excited about the progress here.

New YouTube features to help people easily access video content

Beyond Maps, video is becoming an even more fundamental part of how we share information, communicate, and learn. Often when you come to YouTube, you are looking for a specific moment in a video and we want to help you get there faster.

Last year we launched auto-generated chapters to make it easier to jump to the part you’re most interested in.

This is also great for creators because it saves them time making chapters. We’re now applying multimodal technology from DeepMind. It simultaneously uses text, audio and video to auto-generate chapters with greater accuracy and speed. With this, we now have a goal to 10X the number of videos with auto-generated chapters, from eight million today, to 80 million over the next year.

Often the fastest way to get a sense of a video’s content is to read its transcript, so we’re also using speech recognition models to transcribe videos. Video transcripts are now available to all Android and iOS users.

Animation showing a video being automatically translated. Then text reads "Now available in sixteen languages."

Auto-translated captions on YouTube.

Next up, we’re bringing auto-translated captions on YouTube to mobile, which means viewers can now auto-translate video captions in 16 languages, and creators can grow their global audience. We’ll also be expanding auto-translated captions to Ukrainian YouTube content next month, part of our larger effort to increase access to accurate information about the war.

Helping people be more efficient with Google Workspace

Just as we’re using AI to improve features in YouTube, we’re building it into our Workspace products to help people be more efficient. Whether you work for a small business or a large institution, chances are you spend a lot of time reading documents. Maybe you’ve felt that wave of panic when you realize you have a 25-page document to read ahead of a meeting that starts in five minutes.

At Google, whenever I get a long document or email, I look for a TL;DR at the top — TL;DR is short for “Too Long, Didn’t Read.” And it got us thinking, wouldn’t life be better if more things had a TL;DR?

That’s why we’ve introduced automated summarization for Google Docs. Using one of our machine learning models for text summarization, Google Docs will automatically parse the words and pull out the main points.

This marks a big leap forward for natural language processing. Summarization requires understanding of long passages, information compression and language generation, which used to be outside of the capabilities of even the best machine learning models.

And Docs is only the beginning. We’re launching summarization for other products in Workspace. It will come to Google Chat in the next few months, providing a helpful digest of chat conversations, so you can jump right into a group chat or look back at the key highlights.

Animation showing summary in Google Chat

We’re bringing summarization to Google Chat in the coming months.

And we’re working to bring transcription and summarization to Google Meet as well so you can catch up on some important meetings you missed.

Visual improvements on Google Meet

Of course there are many moments where you really want to be in a virtual room with someone. And that’s why we continue to improve audio and video quality, inspired by Project Starline. We introduced Project Starline at I/O last year. And we’ve been testing it across Google offices to get feedback and improve the technology for the future. And in the process, we’ve learned some things that we can apply right now to Google Meet.

Starline inspired machine learning-powered image processing to automatically improve your image quality in Google Meet. And it works on all types of devices so you look your best wherever you are.

An animation of a man looking directly at the camera then waving and smiling. A white line sweeps across the screen, adjusting the image quality to make it brighter and clearer.

Machine learning-powered image processing automatically improves image quality in Google Meet.

We’re also bringing studio quality virtual lighting to Meet. You can adjust the light position and brightness, so you’ll still be visible in a dark room or sitting in front of a window. We’re testing this feature to ensure everyone looks like their true selves, continuing the work we’ve done with Real Tone on Pixel phones and the Monk Scale.

These are just some of the ways AI is improving our products: making them more helpful, more accessible, and delivering innovative new features for everyone.

Gif shows a phone camera pointed towards a rack of shelves, generating helpful information about food items. Text on the screen shows the words ‘dark’, ‘nut-free’ and ‘highly-rated’.

Today at I/O Prabhakar Raghavan shared how we’re helping people find helpful information in more intuitive ways on Search.

Making knowledge accessible through computing

We’ve talked about how we’re advancing access to knowledge as part of our mission: from better language translation to improved Search experiences across images and video, to richer explorations of the world using Maps.

Now we’re going to focus on how we make that knowledge even more accessible through computing. The journey we’ve been on with computing is an exciting one. Every shift, from desktop to the web to mobile to wearables and ambient computing has made knowledge more useful in our daily lives.

As helpful as our devices are, we’ve had to work pretty hard to adapt to them. I’ve always thought computers should be adapting to people, not the other way around. We continue to push ourselves to make progress here.

Here’s how we’re making computing more natural and intuitive with the Google Assistant.

Introducing LaMDA 2 and AI Test Kitchen

Animation shows demos of how LaMDA can converse on any topic and how AI Test Kitchen can help create lists.

A demo of LaMDA, our generative language model for dialogue application, and the AI Test Kitchen.

We're continually working to advance our conversational capabilities. Conversation and natural language processing are powerful ways to make computers more accessible to everyone. And large language models are key to this.

Last year, we introduced LaMDA, our generative language model for dialogue applications that can converse on any topic. Today, we are excited to announce LaMDA 2, our most advanced conversational AI yet.

We are at the beginning of a journey to make models like these useful to people, and we feel a deep responsibility to get it right. To make progress, we need people to experience the technology and provide feedback. We opened LaMDA up to thousands of Googlers, who enjoyed testing it and seeing its capabilities. This yielded significant quality improvements, and led to a reduction in inaccurate or offensive responses.

That’s why we’ve made AI Test Kitchen. It’s a new way to explore AI features with a broader audience. Inside the AI Test Kitchen, there are a few different experiences. Each is meant to give you a sense of what it might be like to have LaMDA in your hands and use it for things you care about.

The first is called “Imagine it.” This demo tests whether the model can take a creative idea you give it and generate imaginative and relevant descriptions. These are not products; they are quick sketches that allow us to explore what LaMDA can do with you. The user interfaces are very simple.

Say you’re writing a story and need some inspirational ideas. Maybe one of your characters is exploring the deep ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You can ask LaMDA to imagine what kinds of creatures might live there. Remember, we didn’t hand-program the model for specific topics like submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost any topic: Saturn’s rings or even being on a planet made of ice cream.

Staying on topic is a challenge for language models. Say you’re building a learning experience — you want it to be open-ended enough to allow people to explore where curiosity takes them, but stay safely on topic. Our second demo tests how LaMDA does with that.

In this demo, we’ve primed the model to focus on the topic of dogs. It starts by generating a question to spark conversation, “Have you ever wondered why dogs love to play fetch so much?” And if you ask a follow-up question, you get an answer with some relevant details: it’s interesting, it thinks it might have something to do with the sense of smell and treasure hunting.

You can take the conversation anywhere you want. Maybe you’re curious about how smell works and you want to dive deeper. You’ll get a unique response for that too. No matter what you ask, it will try to keep the conversation on the topic of dogs. If I start asking about cricket, which I probably would, the model brings the topic back to dogs in a fun way.

This challenge of staying on-topic is a tricky one, and it’s an important area of research for building useful applications with language models.

These experiences show the potential of language models to one day help us with things like planning, learning about the world, and more.

Of course, there are significant challenges to solve before these models can truly be useful. While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses. That’s why we are inviting feedback in the app, so people can help report problems.

We will be doing all of this work in accordance with our AI Principles. Our process will be iterative, opening up access over the coming months, and carefully assessing feedback with a broad range of stakeholders — from AI researchers and social scientists to human rights experts. We’ll incorporate this feedback into future versions of LaMDA, and share our findings as we go.

Over time, we intend to continue adding other emerging areas of AI into AI Test Kitchen. You can learn more at: g.co/AITestKitchen.

Advancing AI language models

LaMDA 2 has incredible conversational capabilities. To explore other aspects of natural language processing and AI, we recently announced a new model. It’s called Pathways Language Model, or PaLM for short. It’s our largest model to date, with 540 billion parameters.

PaLM demonstrates breakthrough performance on many natural language processing tasks, such as generating code from text, answering a math word problem, or even explaining a joke.

It achieves this through greater scale. And when we combine that scale with a new technique called chain-of-thought prompting, the results are promising. Chain-of-thought prompting allows us to describe multi-step problems as a series of intermediate steps.

Let’s take an example of a math word problem that requires reasoning. Normally, you prompt the model with a question-and-answer pair and then start asking it questions. In this case: how many hours are in the month of May? As you can see, the model didn’t quite get it right.

In chain-of-thought prompting, we give the model a question-answer pair, but this time, an explanation of how the answer was derived. Kind of like when your teacher gives you a step-by-step example to help you understand how to solve a problem. Now, if we ask the model again — how many hours are in the month of May — or other related questions, it actually answers correctly and even shows its work.

There are two boxes below a heading saying ‘chain-of-thought prompting’. A box headed ‘input’ guides the model through answering a question about how many tennis balls a person called Roger has. The output box shows the model correctly reasoning through and answering a separate question (‘how many hours are in the month of May?’)

Chain-of-thought prompting leads to better reasoning and more accurate answers.

Chain-of-thought prompting increases accuracy by a large margin. This leads to state-of-the-art performance across several reasoning benchmarks, including math word problems. And we can do it all without ever changing how the model is trained.
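To make the difference concrete, here is a minimal sketch of the two prompt styles described above. The wording is illustrative only — it isn’t the exact prompt from the demo, and the call to a model is left out.

```python
# Illustrative only -- the exact prompts used in the I/O demo aren't public.

# A standard prompt gives the model a bare question/answer pair before
# asking the new question...
standard_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.

Q: How many hours are in the month of May?
A:"""

# ...while a chain-of-thought prompt also spells out the intermediate
# reasoning, nudging the model to work step by step before answering.
chain_of_thought_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: How many hours are in the month of May?
A:"""

# With the second prompt, a large model tends to answer along the lines of
# "May has 31 days. 31 days x 24 hours = 744 hours. The answer is 744,"
# showing its work as described above.
print(chain_of_thought_prompt)
```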

PaLM is highly capable and can do so much more. For example, you might be someone who speaks a language that’s not well-represented on the web today — which makes it hard to find information. It’s even more frustrating because the answer you are looking for is probably out there. PaLM offers a new approach that holds enormous promise for making knowledge more accessible for everyone.

Let me show you an example in which we can help answer questions in a language like Bengali — spoken by a quarter billion people. Just like before, we prompt the model with two examples of questions in Bengali, with both Bengali and English answers.

That’s it, now we can start asking questions in Bengali: “What is the national song of Bangladesh?” The answer, by the way, is “Amar Sonar Bangla” — and PaLM got it right, too. This is not that surprising because you would expect that content to exist in Bengali.

You can also try something that is less likely to have related information in Bengali such as: “What are popular pizza toppings in New York City?” The model again answers correctly in Bengali. Though it probably just stirred up a debate amongst New Yorkers about how “correct” that answer really is.

What’s so impressive is that PaLM has never seen parallel sentences between Bengali and English. Nor was it ever explicitly taught to answer questions or translate at all! The model brought all of its capabilities together to answer questions correctly in Bengali. And we can extend the techniques to more languages and other complex tasks.
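As a rough sketch of the few-shot setup described above — the demo’s actual prompt isn’t public, so the Bengali text is left as placeholders rather than invented here:

```python
# Illustrative sketch of the few-shot setup described above. The demo's real
# prompt isn't public, so the Bengali text is left as placeholders rather
# than invented here.
FEW_SHOT_EXAMPLES = [
    {"question_bn": "<Bengali question 1>",
     "answer_bn": "<Bengali answer 1>",
     "answer_en": "<English answer 1>"},
    {"question_bn": "<Bengali question 2>",
     "answer_bn": "<Bengali answer 2>",
     "answer_en": "<English answer 2>"},
]


def build_prompt(new_question_bn: str) -> str:
    """Two worked Bengali Q&A examples, then the new Bengali question."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Q: {ex['question_bn']}\n"
            f"A (Bengali): {ex['answer_bn']}\n"
            f"A (English): {ex['answer_en']}\n"
        )
    parts.append(f"Q: {new_question_bn}\nA (Bengali):")
    return "\n".join(parts)


# The assembled prompt is then sent to the model as-is; no fine-tuning or
# parallel Bengali-English training data is involved.
print(build_prompt("<Bengali question, e.g. about the national song of Bangladesh>"))
```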

We're so optimistic about the potential for language models. One day, we hope we can answer questions on more topics in any language you speak, making knowledge even more accessible, in Search and across all of Google.

Introducing the world’s largest, publicly available machine learning hub

The advances we’ve shared today are possible only because of our continued innovation in our infrastructure. Recently we announced plans to invest $9.5 billion in data centers and offices across the U.S.

One of our state-of-the-art data centers is in Mayes County, Oklahoma. I’m excited to announce that we are launching the world’s largest publicly available machine learning hub there for our Google Cloud customers.

Still image of a data center with Oklahoma map pin on bottom left corner.

One of our state-of-the-art data centers in Mayes County, Oklahoma.

This machine learning hub has eight Cloud TPU v4 pods, custom-built on the same networking infrastructure that powers Google’s largest neural models. They provide nearly nine exaflops of computing power in aggregate — bringing our customers an unprecedented ability to run complex models and workloads. We hope this will fuel innovation across many fields, from medicine to logistics, sustainability and more.

And speaking of sustainability, this machine learning hub is already operating at 90% carbon-free energy. This is helping us make progress on our goal to become the first major company to operate all of our data centers and campuses globally on 24/7 carbon-free energy by 2030.

Even as we invest in our data centers, we are working to innovate on our mobile platforms so more processing can happen locally on device. Google Tensor, our custom system on a chip, was an important step in this direction. It’s already running on Pixel 6 and Pixel 6 Pro, and it brings our AI capabilities — including the best speech recognition we’ve ever deployed — right to your phone. It’s also a big step forward in making those devices more secure. Combined with Android’s Private Compute Core, it can run data-powered features directly on device so that it’s private to you.

People turn to our products every day for help in moments big and small. Core to making this possible is protecting your private information each step of the way. Even as technology grows increasingly complex, we keep more people safe online than anyone else in the world, with products that are secure by default, private by design and that put you in control.

We also spent time today sharing updates to platforms like Android. They’re delivering access, connectivity, and information to billions of people through their smartphones and other connected devices like TVs, cars and watches.

And we shared our new Pixel portfolio, including the Pixel 6a, Pixel Buds Pro, Google Pixel Watch, Pixel 7 and Pixel tablet, all built with ambient computing in mind. We’re excited to share a family of devices that work better together — for you.

The next frontier of computing: augmented reality

Today we talked about all the technologies that are changing how we use computers and access knowledge. We see devices working seamlessly together, exactly when and where you need them and with conversational interfaces that make it easier to get things done.

Looking ahead, there's a new frontier of computing, which has the potential to extend all of this even further, and that is augmented reality. At Google, we have been heavily invested in this area. We’ve been building augmented reality into many Google products, from Google Lens to multisearch, scene exploration, and Live and immersive views in Maps.

These AR capabilities are already useful on phones and the magic will really come alive when you can use them in the real world without the technology getting in the way.

That potential is what gets us most excited about AR: the ability to spend time focusing on what matters in the real world, in our real lives. Because the real world is pretty amazing!

It’s important we design in a way that is built for the real world — and doesn’t take you away from it. And AR gives us new ways to accomplish this.

Let’s take language as an example. Language is just so fundamental to connecting with one another. And yet, understanding someone who speaks a different language, or trying to follow a conversation if you are deaf or hard of hearing can be a real challenge. Let's see what happens when we take our advancements in translation and transcription and deliver them in your line of sight in one of the early prototypes we’ve been testing.

You can see it in their faces: the joy that comes with speaking naturally to someone. That moment of connection. To understand and be understood. That’s what our focus on knowledge and computing is all about. And it’s what we strive for every day, with products that are built to help.

Each year we get a little closer to delivering on our timeless mission. And we still have so much further to go. At Google, we genuinely feel a sense of excitement about that. And we are optimistic that the breakthroughs you just saw will help us get there. Thank you to all of the developers, partners and customers who joined us today. We look forward to building the future with all of you.

Introducing the Google Meet Live Sharing SDK

Posted by Mai Lowe, Product Manager & Ken Cenerelli, Technical Writer


The Google Meet Live Sharing SDK is in preview. To use the SDK, developers can apply for access through our Early Access Program.

Today at Google I/O 2022, we announced new functionality for app developers to leverage the Google Meet video conferencing product through our new Meet Live Sharing SDK. Users can now come together and share experiences with each other inside an app, such as streaming a TV show, queuing up videos to watch on YouTube, collaborating on a music playlist, joining in a dance party, or working out together through Google Meet. This SDK joins the large set of offerings available to developers under the Google Workspace Platform.

Partners like YouTube, Heads Up!, UNO!™ Mobile, and Kahoot! are already integrating our SDK into their applications so that their users can participate in these new, shared interactive experiences later this year.

Supports multiple use cases


The Live Sharing SDK allows developers to sync content across devices in real time and incorporate Meet into their apps, enabling them to bring new, fun, and genuinely connecting experiences to their users. It’s also a great way to reach new audiences as current users can introduce your app to friends and family.

The SDK supports two key use cases:
  • Co-Watching—Syncs streaming app content across devices in real time and lets users take turns sharing videos and playing the latest hits from their favorite artist. Users can share controls such as starting and pausing a video, or selecting new content in the app (see the sketch below).
  • Co-Doing—Syncs arbitrary app content, allowing users to get together to perform an activity like playing video games or following the same workout regimen.


The co-watching and co-doing APIs are independent but can be used in parallel with each other.
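The SDK’s concrete interfaces aren’t shown here, but as a purely hypothetical sketch of the kind of playback state a co-watching integration keeps in sync across participants — all names below are invented for illustration and are not the Live Sharing SDK’s actual API:

```python
# Purely hypothetical illustration -- these names are NOT the Live Sharing
# SDK's actual API. The point is the shape of the playback state a
# co-watching integration keeps in sync so every participant stays in step.
from dataclasses import dataclass


@dataclass
class CoWatchingState:
    media_id: str            # which video or track the group is watching
    position_seconds: float  # current playback position
    playback_rate: float     # e.g. 1.0 for normal speed
    is_playing: bool         # paused vs. playing


def broadcast_to_meeting(state: CoWatchingState) -> None:
    """Stand-in for the SDK call that shares local state with the meeting."""
    print("send to meeting:", state)


def apply_to_local_player(state: CoWatchingState) -> None:
    """Stand-in for your app's own player controls."""
    print("apply locally:", state)


def on_local_player_change(state: CoWatchingState) -> None:
    # When the local user pauses, seeks, or picks new content,
    # publish the new state so other participants' players can follow.
    broadcast_to_meeting(state)


def on_remote_state_received(state: CoWatchingState) -> None:
    # When another participant changes the shared state, mirror it locally.
    apply_to_local_player(state)


# Example: the local user starts a video two minutes in.
on_local_player_change(
    CoWatchingState(media_id="video-123", position_seconds=120.0,
                    playback_rate=1.0, is_playing=True)
)
```

Your app remains responsible for the playback itself; the sync layer only carries that shared state between participants in the meeting.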


Example workflow illustration of a user starting live sharing within an app using the Live Sharing SDK.


Get started


To learn more, watch our I/O 2022 session on the Google Meet Live Sharing SDK and check out the documentation for the Android version.

If you want to try out the SDK, developers can apply for access through our Early Access Program.


What’s next?


We’re also continuing to improve features by working to build the video-content experience you want to bring to your users. For more announcements like this and for info about the Google Workspace Platform and APIs, subscribe to our developer newsletter.

Now in Developer Preview: Create Spaces and Add Members with the Google Chat API

Posted by Mike Rhemtulla, Product Manager & Charles Maxson, Developer Advocate

The Google Chat API updates are in developer preview. To use the API, developers can apply for access through our Google Workspace Developer Preview Program.

In Google Chat, Spaces serve as a central place for team collaboration—instead of starting an email chain or scheduling a meeting, teams can move conversations and collaboration into a space, giving everybody the ability to stay connected, reference team or project info and revisit work asynchronously.

Programmatically create and populate Google Chat spaces

We are pleased to announce that you can programmatically create new Spaces and add members on behalf of users, through the Google Workspace Developer Preview Program via the Google Chat API.

These latest additions to the Chat API unlock some sought-after scenarios for developers looking to add new dimensions to how they can leverage Chat. For example, organizations that need to create Spaces based on various business needs will now be able to do so programmatically. This will open the door for Chat solutions that build out Spaces modeled to represent new teams, projects, working groups, or whatever specific use case can benefit from automatically creating new Spaces.
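As a rough sketch of what that could look like with the google-api-python-client library — assuming the developer preview exposes the spaces.create and spaces.members.create methods described above; the field names and OAuth scopes below are illustrative and may differ from the preview docs:

```python
# A minimal sketch, assuming the developer-preview Chat API exposes the
# spaces.create and spaces.members.create methods described above. Field
# names and OAuth scopes here are illustrative and may differ from the docs.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/chat.spaces.create",  # assumed scope
    "https://www.googleapis.com/auth/chat.memberships",    # assumed scope
]

# Assumes you've already completed an OAuth flow and saved user credentials.
creds = Credentials.from_authorized_user_file("token.json", SCOPES)
chat = build("chat", "v1", credentials=creds)

# Create a named space on behalf of the authorized user.
space = chat.spaces().create(
    body={"displayName": "Incident #1234 war room", "spaceType": "SPACE"}
).execute()

# Add a member to the newly created space.
chat.spaces().members().create(
    parent=space["name"],  # e.g. "spaces/AAAA1234"
    body={"member": {"name": "users/incident-lead@example.com", "type": "HUMAN"}},
).execute()
```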

Coming soon, example from an early developer preview partner

One of our developer preview partners, PagerDuty, is already leveraging the API as part of their upcoming release of PagerDuty for Google Chat. The app will allow users of their incident management solution to take quick action on an incident with just the team members needed. PagerDuty for Chat will let the incident team isolate and focus on the problem at hand without having to set up a new space, and without distracting people in the current space who aren’t part of the resolution team for a specific incident. All of this will happen seamlessly through PagerDuty for Chat as part of the natural flow of working with Google Chat.

Example of how a Chat app with the new APIs can enable users to easily create new Spaces and add members to an incident.

Learn more and get started

As you can imagine, there are many use cases that show off the potential of what you can build with the Chat API and the new Create methods. Whether it’s creating Spaces with specified members or extending Chat apps that spawn off new collaboration Spaces for use with help desk, HR, sales, customer support or any endless number of scenarios, we encourage you to explore what you can do today.

How to get started:



Building better products for new internet users

Since the launch of Google’s Next Billion Users (NBU) initiative in 2015, nearly 3 billion people worldwide have come online for the very first time. In the next four years, we expect another 1.2 billion new internet users, and building for and with these users allows us to build better for the rest of the world.

For this year’s I/O, the NBU team has created sessions that will showcase how organizations can address representation bias in data, learn how new users experience the web, and understand Africa’s fast-growing developer ecosystem to drive digital inclusion and equity in the world around us.

We invite you to join these developers sessions and hear perspectives on how to build for the next billion users. Together, we can make technology helpful, relevant, and inclusive for people new to the internet.

Session: Building for everyone: the importance of representative data

Mike Knapp, Hannah Highfill and Emila Yang from Google’s Next Billion Users team, in partnership with Ben Hutchinson from Google’s Responsible AI team, will be leading a session on how to crowdsource data to build more inclusive products.

Data gathering is often the most overlooked aspect of AI, yet the data used for machine learning directly impacts a project’s success and lasting potential. Many organizations—Google included—struggle to gather the right datasets required to build inclusively and equitably for the next billion users. “We are going to talk about a very experimental product and solution to building more inclusive technology,” says Knapp of his session. “Google is testing a paid crowdsourcing app [Task Mate] to better serve underrepresented communities. This tool enables developers to reach ‘crowds’ in previously underrepresented regions. It is an incredible step forward in the mission to create more inclusive technology.”

Bookmark this session to your I/O developer profile.

Session: What we can learn from the internet’s newest users

“The first impression that your product makes matters,” says Nicole Naurath, Sr. UX Researcher - Next Billion Users at Google. “It can either spark curiosity and engagement, or confuse your audience.”

Every day, thousands of people are coming online for the first time. Their experience can be directly impacted by how familiar they are with technology. People with limited digital experience, or novice internet users, experience the web differently, and developers are often not used to building for them. Design elements such as images, icons, and colors play a key role in digital experience. If images are not relatable, icons are irrelevant, and colors are not grounded in cultural context, the experience can confuse anyone, especially someone new to the internet.

Nicole Naurath and Neha Malhotra, from Google’s Next Billion Users team, will lead a session on what we can learn from the internet’s newest users and how they experience the web, and will share a framework for evaluating products that work for novice internet users.

Bookmark this session to your I/O developer profile.

Session: Africa’s booming developer ecosystem

Software developers are the catalyst for digital transformation in Africa. They empower local communities, spark growth for businesses, and drive innovation in a continent which more than 1.3 billion people call home. Demand for African developers reached an all-time high last year, driven by both local and remote opportunities, and is growing even faster than the continent's developer population.

Andy Volk and John Kimani from the Developer and Startup Ecosystem team in Sub-Saharan Africa will share findings from the Africa Developer Ecosystem 2021 report.

In their words, “This session is for anyone who wants to find out more about how African developers are building for the world or who is curious to find out more about this fast-growing opportunity on the continent. We are presenting trends, case studies and new research from Google and its partners to illustrate how people and organizations are coming together to support the rapid growth of the developer ecosystem.”

Bookmark this session to your I/O developer profile.

To learn more about Google’s Next Billion Users initiative, visit nextbillionusers.google.

Women Techmakers expands online safety education

Online violence against women goes beyond the internet. It impacts society and the economy at large. It leads to damaging economic repercussions, due to increased medical costs and lost income for victims. It impacts the offline world, with seven percent of women changing jobs due to online violence, and one in ten experiencing physical harm due to online threats, according to Google-supported research conducted by the Economist Intelligence Unit in 2020.

That’s why the Women Techmakers program, which provides visibility, community and resources for women in technology, supports online safety education for women and allies. Google community manager Merve Isler, who lives in Turkey and leads Women Techmakers efforts in Turkey, Central Asia and the Caucasus region, organized the first-ever women’s online safety hackathon in Turkey in 2020, which expanded to a full week of trainings and ideathons in 2021. Google community manager and Women Techmakers manager Hufsa Manawar brought online safety training to Pakistan in early 2022.

Now, Women Techmakers is providing a more structured way for women around the world to learn about online safety, in the form of a free online learning module, launched in April 2022, in honor of International Women’s Day. To create this module, I worked with my co-host Alana Fromm from Jigsaw and our teams to produce a series of videos covering different topics related to women’s online safety. Jigsaw is a unit within Google that explores threats to open society and builds technological solutions.

In the online training, we begin by defining online violence and walking through the ways negative actors threaten women online, which include misinformation and defamation, cyberharassment and hate speech. Regardless of the tactic, the goal remains the same: to threaten and harass women into silence. We break down the groups of people involved in online harassment and the importance of surrounding oneself with allies.

In one of the videos in the series, Women Techmakers Ambassador Esrae Abdelnaby Hassan shares her story of online abuse. When she began learning cybersecurity, a mentor she trusted gave her USB drives with courses and reading material; the drives were infected with viruses that allowed him to take control of her computer and record videos. He then blackmailed her, using the videos he had taken as threats. She felt afraid and isolated, and relied on her family for support as she addressed the harassment.

The learning module provides two codelabs: one on steps you can take to protect yourself online, and one on Perspective API, a free, open-source product built by Jigsaw and the Counter Abuse Technology team at Google. The first codelab provides practical guidance, and the second walks viewers through setting up Perspective API, which uses machine learning to identify toxic comments.
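For readers who want a feel for what the Perspective codelab covers before opening it, here is a minimal sketch of scoring a comment for toxicity from Python. It assumes you have enabled the Comment Analyzer API in a Google Cloud project and hold an API key; the sample text and the use of the `requests` library are illustrative choices, not taken from the codelab itself.

```python
# Minimal sketch: send one comment to Perspective API and read back its
# TOXICITY summary score. Assumes a valid API key from your own
# Google Cloud project (placeholder below).
import requests

API_KEY = "YOUR_API_KEY"  # assumption: replace with your own key
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0-1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(f"Toxicity: {toxicity_score('You are a wonderful person.'):.2f}")
```

A score close to 1.0 indicates the model considers the comment likely to be toxic; the codelab goes into how to choose thresholds and handle results responsibly.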

We look forward to seeing the impact of our new, easy-to-access online training, as well as what our ambassadors are able to accomplish offline as the year progresses.

How GDSC students are using their skills to support communities in Ukraine

Posted by Laura Cincera, Program Manager Google Developer Student Clubs, Europe

Revealing character in moments of crisis

The conflict in Ukraine is a humanitarian crisis that presents complex challenges. During this time of uncertainty, communities of student developers are demonstrating extraordinary leadership skills and empathy as they come together to support those affected by the ongoing situation. Student Patricijia Čerkaitė and her Google Developer Student Club (GDSC) community at the Eindhoven University of Technology in the Netherlands organized Code4Ukraine, an international hackathon that brought diverse groups of over 80 student developers together on March 3-4, 2022, to develop technology solutions to support people affected by the conflict in Ukraine.

Even from the Netherlands, far from the conflict, the team felt compelled to make an impact. “I have relatives in Ukraine; they live in Crimea,” says Patricijia. “In my childhood, I used to spend summer holidays there, eating ice cream and swimming in the Black Sea.”

Patricijia sitting at desk in black chair looking back and smiling

Patricijia working on the details for Code4Ukraine.

Rushing to help others in need with technology

Time was of the essence. The organizing team in Eindhoven contacted other students, connected with communities near and far, and sprang into action. The team invited Ukrainian Google Developer Expert Artem Nikulchenko to share his technology knowledge and first-hand experience of what is happening in his country. Students discussed the problems Ukrainian citizens were facing and ideated around technology-centric solutions. Feelings of exasperation, frustration and, most importantly, hope became lines of code as students built solutions to answer the call: Code4Ukraine.

Blue and yellow emblem that says Code 4 Ukraine

Then, gradually, through collaborative effort, problem solving, and hours of hard work, the winning project of the Code4Ukraine Hackathon emerged: Medicine Warriors, built by a diverse, cross-cultural group of undergraduate students and IT professionals from Ukraine, Poland, and Georgia to address the insulin shortage in Ukraine. The project gathers publicly available data from Ukrainian government notices on insulin availability across Ukraine and presents it in an easily readable way.

Photograph of the Medicine Warriors application design
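As a purely illustrative aside, a pipeline like the one Medicine Warriors describes could be sketched in a few lines of Python: fetch a public notice feed, group it by region, and print a readable summary. The URL, CSV field names, and output format below are hypothetical placeholders, not the team’s actual implementation.

```python
# Hypothetical sketch of gathering public availability notices and
# summarizing them by region; the data source and columns are invented.
import csv
import io
import requests

DATA_URL = "https://example.gov.ua/insulin-availability.csv"  # hypothetical endpoint

def fetch_availability():
    """Download a public CSV of insulin stock notices and group rows by region."""
    resp = requests.get(DATA_URL, timeout=30)
    resp.raise_for_status()
    rows = csv.DictReader(io.StringIO(resp.text))
    by_region = {}
    for row in rows:
        by_region.setdefault(row["region"], []).append(
            {"pharmacy": row["pharmacy"], "in_stock": row["in_stock"] == "true"}
        )
    return by_region

if __name__ == "__main__":
    for region, entries in sorted(fetch_availability().items()):
        available = sum(e["in_stock"] for e in entries)
        print(f"{region}: {available}/{len(entries)} pharmacies report insulin in stock")
```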

Helping: at the heart of their community

One member of the winning team is the GDSC chapter lead at the National Technical University of Ukraine Kyiv Polytechnic Institute, Ekaterina Gricaenko. “In Ukraine, there is a saying: ‘друг пізнається в біді,’ which translates to, ‘you will know who your friends are when the rough times arrive,’” says Ekaterina. “And now, I can say that the GDSC community is definitely on my family list.”

Photograph of Ekaterina Gricaenko, GDSC Lead

Ekaterina Gricaenko, GDSC Lead, Kyiv Polytechnic Institute

The Code4Ukraine initiative's goal of bringing others together to make an impact offers a prime example of what the Google Developer Student Clubs (GDSC) program aims to achieve: empowering student developers in universities to impact their communities through technology.

Reflecting on her experience leading the Kyiv GDSC chapter, Ekaterina says, “I started my journey with GDSC as a Core Team member, and during that time, I fell in love with our community, goals, and key concepts. Then, I decided to become a lead, to share my enthusiasm and support students as they pursue their professional dreams.”

The Kyiv GDSC has organized over 18 workshops, written over 200 articles, run multiple study groups, and reached over a thousand followers on social media. “It’s incredible to realize how far we have come,” Ekaterina says.

A visual collage displays multiple activities organized by GDSC KPI

A visual collage displays multiple activities organized by GDSC KPI, led by Ekaterina Gricaenko.

Getting involved in your community

Through efforts like Code4Ukraine and other inspiring solutions like the 2022 Solution Challenge, students globally are giving communities hope as they tackle challenges and propose technical solutions. By joining a GDSC, students can grow their knowledge in a peer-to-peer learning environment and put theory into practice by building projects that solve community problems and make a significant impact.

Photo of students in class in the upper right hand corner with a sign in the center that says Become a leader at your university

Learn more about Google Developer Student Clubs

If you feel inspired to make a positive change through technology, applications for GDSC leads for the upcoming 2022-2023 academic year are now open. Students can apply at goo.gle/gdsc-leads. If you’re passionate about technology and are ready to use your skills to help your student developer community, then you should consider becoming a Google Developer Student Clubs Lead!

We encourage all interested students to apply at goo.gle/gdsc-leads and submit their applications as soon as possible. Applications in Europe are open until 31 May 2022.

Meet 11 startups working to combat climate change

We believe that technology and entrepreneurship can help avert the world’s climate crisis. Startup founders are using tools — from machine learning to mobile platforms to large scale data processing — to accelerate the change to a low-carbon economy. As part of Google’s commitment to address climate change, we’ll continue to invest in the technologists and entrepreneurs who are working to build climate solutions.

So this Earth Day, we’re announcing the second Google for Startups Accelerator: Climate Change cohort. This ten-week program consists of intensive workshops and expert mentorship designed to help growth-stage, sustainability-focused startups learn technical, product and leadership best practices. Meet the 11 selected startups using technology to better our planet:

  • AmpUp in Cupertino, California: AmpUp is an electric vehicle (EV) software company and network provider that helps drivers, hosts, and fleets to charge stress-free.
  • Carbon Limit in Boca Raton, Florida: Carbon Limit transforms concrete into a CO2 sponge with green cement nanotechnology, turning roads and buildings into permanent CO2 solutions.
  • ChargeNet Stations in Los Angeles, California: ChargeNet Stations aims to make charging accessible and convenient in all communities, preventing greenhouse gas emissions through use of PV + storage.
  • ChargerHelp! in Los Angeles, California: ChargerHelp! provides on-demand repair of electric vehicle charging stations, while also building out local workforces, removing barriers and creating economic mobility within all communities.
  • CO-Z in Boulder, Colorado: CO-Z accelerates electricity decarbonization and empowers renters, homeowners and businesses with advanced control, automated savings and power failure protection.
  • Community Energy Labs in Portland, Oregon: Community Energy Labs uses artificial intelligence to make smart energy management and decarbonization both accessible and affordable for community building owners.
  • Moment Energy in Vancouver, British Columbia: Moment Energy repurposes retired electric vehicle (EV) batteries to provide clean, affordable and reliable energy storage.
  • Mi Terro in City of Industry, California: Mi Terro is a synthetic biology and advanced material company that creates home compostable, plastic-alternative biomaterials made from plant-based agricultural waste.
  • Nithio in Washington, DC: Nithio is an AI-driven platform for clean energy investment that standardizes credit risk to catalyze capital to address climate change and achieve universal energy access.
  • Re Company in New York City, New York: Re Company is a reusable packaging subscription service that supplies reuse systems with optimally designed containers and cycles them back into the supply chain at end of life.
  • Understory in Pacific Grove, California: Understory rapidly monitors and quantifies discrete landscape changes to mitigate the effects of environmental change and deliver actionable information for land management, habitat conservation and climate risk assessment.

When the program kicks off this summer, startups will receive mentoring and technical support tailored to their business through a mix of one-to-one and one-to-many learning sessions, both remotely and in-person, from Google engineers and external experts. Stay tuned on Google for Startups social channels to see their experience unfold over the next three months.

Learn more about Google for Startups Accelerator here, and the latest on Google’s commitment to sustainability here.