Tag Archives: developers

Finding courage and inspiration in the developer community

Posted by Monika Janota

How do we empower women in tech and equip them with the skills to help them become true leaders? One way is learning from others' successes and failures. Web GDEs—Debbie O'Brien, Julia Miocene, and Glafira Zhur—discuss the value of one-to-one mentoring and the impact it has made on their own professional and personal development.

A 2019 study showed that only 25% of keynote speakers at tech events are women, while 70% of female speakers reported being the only woman on a conference panel. One way of changing that is by running programs and workshops with the aim of empowering women and providing them with the relevant soft skills training, including public speaking, content creation, and leadership. Among such programs are the Women Developer Academy (WDA) and the Road to GDE, both run by Google's developer communities.

With more than 1000 graduates around the world, WDA is a program run by Women Techmakers for professional IT practitioners. To equip women in tech with speaking and presentation skills, along with confidence and courage, training sessions, workshops, and mentoring meetings are organized. Road to GDE, on the other hand, is a three-month mentoring program created to support people from historically underrepresented groups in tech on their path to becoming experts. What makes both programs special is the fact that they're based on a unique connection between mentor and mentee, direct knowledge sharing, and an individualized approach.

Photo of Julia Miocene speaking at a conference

Julia Miocene

Some Web GDE community members have had the chance to take part in the mentoring programs for women, as both mentors and mentees. Frontend developers Julia Miocene and Glafira Zhur are relatively new to the GDE program: they became Google Developers Experts in October 2021 and January 2022 respectively, after graduating from the first edition of both the Women Developer Academy and the Road to GDE. Debbie O'Brien, meanwhile, has been a member of the community and an active mentor for both programs for several years. All three have shared their experiences with the programs in order to encourage other women in tech to believe in themselves, take a chance, and become true leaders.

Different paths, one goal

Although all three share an interest in frontend development, each has followed a very different path. Glafira Zhur, now a team leader with 12 years of professional experience, originally planned to become a musician, but decided to follow her other passion instead. A technology fan thanks to her father, she was able to reinstall Windows at the age of 11. Julia Miocene, after more than ten years in product design, was really passionate about CSS. She became a GDE because she wanted to work with Chrome and DevTools. Debbie is a Developer Advocate working in the frontend area, with a strong passion for user experience and performance. For her, mentoring is a way of giving back to the community, helping other people achieve their dreams, and become the programmers they want to be. At one point while learning JavaScript, she was so discouraged she wanted to give it up, but her mentor convinced her she could be successful. Now she's returning the favor.

Photo of Debbie O'Brien and another woman in a room smiling at the camera

Debbie O'Brien

As GDEs, Debbie, Glafira, and Julia all mention that the most valuable part of becoming experts is the chance to meet people with similar interests in technology, to network, and to provide early feedback for the web team. Mentoring, on the other hand, enables them to create, it boosts their confidence and empowers them to share their skills and knowledge—regardless of whether they're a mentor or a mentee.

Sharing knowledge

A huge part of being a mentee in Google's programs is learning how to share knowledge with other developers and help them in the most effective way. Many WDA and Road to GDE participants become mentors themselves. According to Julia, it's important to remember that a mentor is not a teacher—they are much more. The aim of mentoring, she says, is to create something together, whether it's an idea, a lasting connection, a piece of knowledge, or a plan for the future.

Glafira mentioned that she learned to perceive social media in a new way—as a hub for sharing knowledge, no matter how small the piece of advice might seem. It's because, she says, even the shortest Tweet may help someone who's stuck on a technical issue that they might not be able to resolve without such content being available online. Every piece of knowledge is valuable. Glafira adds that, "Social media is now my tool, I can use it to inspire people, invite them to join the activities I organize. It's not only about sharing rough knowledge, but also my energy."

Working with mentors who have successfully built an audience for their own channels allows the participants to learn more about the technical aspects of content creation—how to choose topics that might be interesting for readers, set up the lighting in the studio, or prepare an engaging conference speech.

Learning while teaching

From the other side of the mentor-mentee relationship, Debbie O'Brien says the best thing about mentoring is seeing the mentees grow and succeed: "We see in them something they can't see in themselves, we believe in them, and help guide them to achieve their goals. The funny thing is that sometimes the advice we give them is also useful for ourselves, so as mentors we end up learning a lot from the experience too."

TV screen in a room showing an image of Glafira Zhur

Glafira Zhur

Both Glafira and Julia state that they're willing to mentor other women on their way to success. Asked about the most important takeaway from a mentorship program, they both mention confidence—believing in yourself is something they want for every female developer out there.

Growing as a part of the community

Both Glafira and Julia mentioned that during the programs they met many inspiring people from their local developer communities. Being able to ask others for help, share insights and doubts, and get feedback was a valuable lesson for both women.

Mentors may become role models for the programs' participants. Julia mentioned how important it was for her to see someone else succeed and follow in their footsteps: to map out exactly where she wanted to be professionally and how to get there. This means learning not just from someone else's failures, but also from their victories and achievements.

Networking within the developer community is also a great opportunity to grow your audience by appearing on other contributors' podcasts and YouTube channels. Glafira recalls that during the Academy she received multiple invitations and had the opportunity to share her knowledge on different channels.

Overall, what's even more important than growing your audience is finding your own voice. As Debbie states: "We need more women speaking at conferences, sharing knowledge online, and being part of the community. So I encourage you all to be brave and follow your dreams. I believe in you, so now it's time to start believing in yourself."

How to use App Engine Memcache in Flask apps (Module 12)

Posted by Wesley Chun


In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers learn how to move away from App Engine legacy APIs (now called "bundled services") to Cloud standalone equivalent services. Once this has been accomplished, apps become much more portable and flexible.

In today's Module 12 video, we're going to start our journey by implementing App Engine's Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps typically rely on some database, and in many situations, they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We'll cover both the 3.x and Cloud Memorystore ports next in Module 13.

Got an older app needing an update? We can help with that.

Adding use of Memcache

The sample application registers individual web page "visits," storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user's visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the "before" version, you can see how each visit is registered. After the update, the app attempts to fetch these visits from the cache. If cached results are available and "fresh" (within the hour), they're used immediately; if the cache is empty or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The bolded lines represent the new code that manages the cached data.

Adding App Engine Memcache usage to sample app
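The pseudocode itself appears in the video and the sample repo rather than here, but the caching pattern can be sketched in plain Python. In the minimal sketch below, the dict-backed cache and in-memory visit list are illustrative stand-ins invented for this example; a real App Engine app would call `google.appengine.api.memcache` and store visits in the Datastore instead.

```python
import time

HOUR = 3600  # cache lifetime in seconds

# Illustrative stand-ins (NOT the sample app's actual code): a dict plays
# the role of memcache, and a list plays the role of the Datastore.
_cache = {}
_visits = []

def _cache_get(key):
    """Return the cached value, or None if missing or older than an hour."""
    entry = _cache.get(key)
    if entry is None:
        return None
    value, stored_at = entry
    if time.time() - stored_at >= HOUR:
        return None
    return value

def _cache_set(key, value):
    """Cache a value along with the time it was stored."""
    _cache[key] = (value, time.time())

def register_visit(ip, user_agent):
    """Store a new visit and drop the cached list, since a brand-new
    visitor should appear in the next fetch."""
    _visits.append({'ip': ip, 'user_agent': user_agent})
    _cache.pop('visits', None)

def fetch_visits(limit=10):
    """Return the most recent visits, served from cache when fresh."""
    visits = _cache_get('visits')
    if visits is None:
        # Cache miss: query the "datastore" and cache the result for an hour.
        visits = list(reversed(_visits))[:limit]
        _cache_set('visits', visits)
    return visits
```

With `memcache`, the `_cache_get`/`_cache_set` pair collapses to `memcache.get('visits')` and `memcache.set('visits', visits, HOUR)`, but the control flow — try the cache, fall back to the database, repopulate the cache — is the same.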


Today's "migration" began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab doing it by-hand while following the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Celebrating leaders in AAPI communities

Posted by Google Developer Studio

In recognition of Asian American and Pacific Islander Heritage Month, we are speaking with mentors and leaders in tech who identify as part of the AAPI community. Many of the influential figures we feature are involved with and help champion inclusivity programs like Google Developer Experts and Google Developer Student Clubs, while others lead in product areas like TensorFlow and drive impact through their work and communities.

On that note, we are honoring this year’s theme of “Advancing Leaders Through Collaboration” by learning more about the power of mentorship, advice they’ve received from other leaders, and their biggest accomplishments.

Read more about leaders in the AAPI community below.

Ben Hong

Senior Staff Developer Experience Engineer at Netlify

What’s the best piece of advice you can offer new/junior developers looking to grow into leadership roles?

There is a lot of advice out there on how to get the most out of your career by climbing the ladder and getting leadership roles. Before you embark on that journey, first ask yourself the question "Why do I want this?"

Becoming a leader comes with a lot of glitz and glamor, but the reality is that it carries a huge weight of responsibility because the decisions and actions you take as a leader will impact the lives of those around you in significant ways you can't foresee.

As a result, the key to becoming the best leader you can be is to:

  1. Establish what your values and principles are
  2. Align them to the actions you take each and every day

Because at the end of the day, leaders are often faced with difficult decisions that lead to an uncertain future. And without core values and principles to guide you as an individual, you run the risk of being easily swayed by short-term trade-offs that could result in a long-term loss.

This world needs leaders who can stand their ground against the temptations of short-term wins and make the best decisions they can while fighting for those that follow them. If you stand firm in your values and listen to those around you, you'll be able to create profound impact in your community.

Taha Bouhsine

Data Scientist and GDSCUIZ Lead

What’s the best piece of advice you can offer new/junior developers looking to grow into leadership roles?

Create a journey worth taking. You will face many challenges and a new set of problems. You will start asking a lot of questions as everything seems to be unfamiliar.

Things get much lighter if you are guided by a mentor, as you will get guidance on how to act in this new chapter of life. In your early days, invest as much as you can in building and nurturing a team, as it will save you a lot of time down the road. Surround yourself with genuine people who take the initiative, spring into action, and are willing to grow and learn; nurture their skills and guide them toward your common goal. Don't try to be a people pleaser, as it's an impossible mission.

Your actions will offend some people one way or the other. That’s ok as you should believe in your mission, create a clear plan with well-defined tasks and milestones, and be firm with your decision. In the end, responsibility is yours to bear, so at least take it on something you decided, not something that was forced upon you by others.

Finally, when there is fire, look for ways to put it out. Take care of your soul, and enjoy the journey!

Huyen Tue Dao

Android Developer, Trello

What do you love most about being a part of the developer community?

Meeting other developers, learning and sharing knowledge, and getting to know them as human beings has been the most rewarding and critical part of my career.

Development is a job of constant learning, whether it is the latest technology, trends, issues, and challenges or the day-to-day intricacies and nuances of writing specialized code and solving problems in efficient and elegant ways. I don't think I'd have the tools to solve issues large and small without the sharing of knowledge and experience of the developer community. If you're having a problem of any kind, chances are that someone has had the same challenges. You can take comfort that you can probably find the answer or at least find people that can help you. You can also feel confident that if you discovered something new or learned important lessons, someone will want to hear what you have to say.

I love seeing and being part of this cycle and interchange; as we pool our experience, our knowledge, and insights, we become stronger and more skilled as a community. I would not be the engineer or person that I am without the opportunities of this exchange.

Just as important, though, is the camaraderie and support of those who do what I do and love it. I have been so fortunate to have been in communities that have been open and welcoming, ready to make connections and form networks, eager to celebrate victories and commiserate with challenges. Regardless of the technical and personal challenges of the everyday that may get to me, there are people that understand and can support me and provide brilliantly diverse perspectives of different industries, countries, cultures, and ages.

Malak Magdy Ali

Google Developer Student Club Lead at Canadian International College, Egypt

What’s the best piece of advice you can offer new/junior developers looking to grow into leadership roles?

The best piece of advice I can give to new leaders is to have empathy. Having empathy will make you understand people’s actions and respect their feelings. This will make for stronger teams.

Also, give others a space to lead. Involve your team in making decisions; they come up with great ideas that can help you and teammates learn from each other. In this process, trust is also built, resulting in a better quality product.

Finally, don't underestimate yourself. Do your best and involve your team to discuss the overall quality of your work and let them make recommendations.

100 things we announced at I/O

And that’s a wrap on I/O 2022! We returned to our live keynote event, packed in more than a few product surprises, showed off some experimental projects and… actually, let’s just dive right in. Here are 100 things we announced at I/O 2022.

Gear news galore

Pixel products grouped together on a white background. Products include Pixel Bud Pro, Google Pixel Watch and Pixel phones.
  1. Let’s start at the very beginning — with some previews. We showed off a first look at the upcoming Pixel 7 and Pixel 7 Pro, powered by the next version of Google Tensor.
  2. We showed off an early look at Google Pixel Watch! It’s our first-ever all-Google-built watch: 80% recycled stainless steel, Wear OS, Fitbit integration, Assistant access…and it’s coming this fall.
  3. Fitbit is coming to Google Pixel Watch. More experiences built for your wrist are coming later this year from apps like Deezer and SoundCloud.
  4. Later this year, you’ll start to see more devices powered with Wear OS from Samsung, Fossil Group, Montblanc and others.
  5. Google Assistant is coming soon to the Samsung Galaxy Watch 4 series.
  6. The new Pixel Buds Pro use Active Noise Cancellation (ANC), a feature powered by a custom 6-core audio chip and Google algorithms to put the focus on your music — and nothing else.
  7. Silent Seal™ helps Pixel Buds Pro adapt to the shape of your ear, for better sound. Later this year, Pixel Buds Pro will also support spatial audio to put you in the middle of the action when watching a movie or TV show with a compatible device and supported content.
  8. They also come in new colors: Charcoal, Fog, Coral and Lemongrass. Ahem, multiple colors — the Pixel Buds Pro have a two-tone design.
  9. With Multipoint connectivity, Pixel Buds Pro can automatically switch between your previously paired Bluetooth devices — including compatible laptops, tablets, TVs, and Android and iOS phones.
  10. Plus, the earbuds and their case are water-resistant.
  11. …And you can preorder them on July 21.
  12. Then there’s the brand new Pixel 6a, which comes with the full Material You experience.
  13. The new Pixel 6a has the same Google Tensor processor and hardware security architecture with Titan M2 as the Pixel 6 and Pixel 6 Pro.
  14. It also has dual rear cameras — main and ultrawide lenses.
  15. You’ve got three Pixel 6a color options: Chalk, Charcoal and Sage. The options keep going if you pair it with one of the new translucent cases.
  16. It costs $449 and will be available for pre-order on July 21.
  17. We also showed off an early look at the upcoming Pixel tablet, which we’re aiming to make available next year.

Android updates

18. In the last year, over 1 billion new Android phones have been activated.

19. You’ll no longer need to grant location to apps to enable Wi-Fi scanning in Android 13.

20. Android 13 will automatically delete your clipboard history after a short time to preemptively block apps from seeing old copied information.

21. Android 13’s new photo picker lets you select the exact photos or videos you want to grant access to, without needing to share your entire media library with an app.

22. You’ll soon be able to copy a URL or picture from your phone, and paste it on your tablet in Android 13.

23. Android 13 allows you to select different language preferences for different apps.

24. The latest Android OS will also require apps to get your permission before sending you notifications.

25. And later this year, you’ll see a new Security & Privacy settings page with Android 13.

26. Google’s Messages app already has half a billion monthly active users with RCS, a new standard that enables you to share high-quality photos, see typing indicators, message over Wi-Fi and get a better group messaging experience.

27. Messages is getting a public beta of end-to-end encryption for group conversations.

28. Early earthquake warnings are coming to more high-risk regions around the world.

29. On select headphones, you’ll soon be able to automatically switch audio between the devices you’re listening on with Android.

30. Stream and use messaging apps from your Android phone to laptop with Chromebook’s Phone Hub, and you won’t even have to install any apps.

31. Google Wallet is here! It’s a new home for things like your student ID, transit tickets, vaccine card, credit cards and debit cards.

32. You can even use Google Wallet to hold your Walt Disney World park pass.

33. Google Wallet is coming to Wear OS, too.

34. Improved app experiences are coming for Android tablets: YouTube Music, Google Maps and Messages will take advantage of the extra screen space, and more apps coming soon include TikTok, Zoom, Facebook, Canva and many others.

Developer deep dive

Illustration depicting a smart home, with lights, thermostat, television, screen and mobile device.

35. The Google Home and Google Home Mobile software developer kit (SDK) for Matter will be launching in June as developer previews.

36. The Google Home SDK introduces Intelligence Clusters, which make intelligence features like Home and Away, available to developers.

37. Developers can even create QR codes for Google Wallet to create their own passes for any use case they’d like.

38. Matter support is coming to the Nest Thermostat.

39. The Google Home Developer Center has lots of updates to check out.

40. There’s now built-in support for Matter on Android, so you can use Fast Pair to quickly connect Matter-enabled smart home devices to your network, Google Home and other accompanying apps in just a few taps.

41. The ARCore Geospatial API makes Google Maps’ Live View technology available to developers for free. Companies like Lime are using it to help people find parking spots for their scooters and save time.

42. DOCOMO and Curiosity are using the ARCore Geospatial API to build a new game that lets you fend off virtual dragons with robot companions in front of iconic Tokyo landmarks, like the Tokyo Tower.

43. AlloyDB is a new, fully-managed PostgreSQL-compatible database service designed to help developers manage enterprise database workloads — in our performance tests, it’s more than four times faster for transactional workloads and up to 100 times faster for analytical queries than standard PostgreSQL.

44. AlloyDB uses the same infrastructure building blocks that power large-scale products like YouTube, Search, Maps and Gmail.

45. Google Cloud’s machine learning cluster powered by Cloud TPU v4 Pods is super powerful — in fact, we believe it’s the world’s largest publicly available machine learning hub in terms of compute power…

46. …and it operates at 90% carbon-free energy.

47. We also announced a preview of Cloud Run jobs, which reduces the time developers spend running administrative tasks like database migration or batch data transformation.

48. We announced Flutter 3.0, which will enable developers to publish production-ready apps to six platforms at once, from one code base (Android, iOS, web, Linux, Windows and macOS).

49. To help developers build beautiful Wear apps, we announced the beta of Jetpack Compose for Wear OS.

50. We’re making it faster and easier for developers to build modern, high-quality apps with new Live edit features in Android Studio.

Help for the home

GIF of a man baking cookies with a speech bubble saying “Set a timer for 10 minutes.” His Google Nest Hub Max responds with a speech bubble saying “OK, 10 min. And that’s starting…now.”

51. Many Nest Devices will become Matter controllers, which means they can serve as central hubs to control Matter-enabled devices both locally and remotely from the Google Home app.

52. Works with Hey Google is now Works with Google Home.

53. The new home.google is your new hub for finding out everything you can do with your Google Home system.

54. Nest Hub Max is getting Look and Talk, where you can simply look at your device to ask a question without saying “Hey Google.”

55. Look and Talk works when Voice Match and Face Match recognize that it’s you.

56. And video from Look and Talk interactions is processed entirely on-device, so it isn’t shared with Google or anyone else.

57. Look and Talk is opt-in. Oh, and FYI, you can still say “Hey Google” whenever you want!

58. Want to learn more about it? Just say “Hey Google, what is Look and Talk?” or “Hey Google, how do you enable Look and Talk?”

59. We’re also expanding quick phrases to Nest Hub Max, so you can skip saying “Hey Google” for some of your most common daily tasks – things like “set a timer for 10 minutes” or “turn off the living room lights.”

60. You can choose the quick phrases you want to turn on.

61. Your quick phrases will work when Voice Match recognizes it’s you.

62. And looking ahead, Assistant will be able to better understand the imperfections of human speech without getting tripped up — including the pauses, “umms” and interruptions — making your interactions feel much closer to a natural conversation.

Taking care of business

Animated GIF demonstrating portrait light, bringing studio-quality lighting effects to Google Meet.

63. Google Meet video calls will now look better thanks to portrait restore and portrait light, which use AI and machine learning to improve quality and lighting on video calls.

64. Later this year we’re scaling the phishing and malware protections that guard Gmail to Google Docs, Sheets and Slides.

65. Live sharing is coming to Google Meet, meaning users will be able to share controls and interact directly within the meeting, whether it’s watching an icebreaker video from YouTube or sharing a playlist.

66. Automated built-in summaries are coming to Spaces so you can get a helpful digest of conversations to catch up quickly.

67. De-reverberation for Google Meet will filter out echoes in spaces with hard surfaces, giving you conference-room audio quality whether you’re in a basement, a kitchen, or a big empty room.

68. Later this year, we're bringing automated transcriptions of Google Meet meetings to Google Workspace, so people can catch up quickly on meetings they couldn't attend.

Apps for on-the-go

A picture of London in immersive view.

69. Google Wallet users will be able to check the balance of transit passes and top up within Google Maps.

70. Google Translate added 24 new languages.

71. As part of this update, Indigenous languages of the Americas (Quechua, Guarani and Aymara) and an English dialect (Sierra Leonean Krio) have also been added to Translate for the first time.

72. Google Translate now supports a total of 133 languages used around the globe.

73. These are the first languages we’ve added using Zero-resource Machine Translation, where a machine learning model only sees monolingual text — meaning, it learns to translate into another language without ever seeing an example.

74. Google Maps’ new immersive view is a whole new way to explore so you can see what an area truly looks and feels like.

75. Immersive view will work on nearly any phone or tablet; you don’t need the fanciest or newest device.

76. Immersive view will first be available in L.A., London, New York, San Francisco and Tokyo — with more places coming soon.

77. Last year we launched eco-friendly routing in the U.S. and Canada. Since then, people have used it to travel 86 billion miles, which saved more than half a million metric tons of carbon emissions — that’s like taking 100,000 cars off the road.

78. And we’re expanding eco-friendly routing to more places, like Europe.

All in on AI

Ten circles in a row, ranging from dark to light.

The 10 shades of the Monk Skin Tone Scale.

79. A team at Google Research partnered with Harvard’s Dr. Ellis Monk to openly release the Monk Skin Tone Scale, a new tool for measuring skin tone that can help build more inclusive products.

80. Google Search will use the Monk Skin Tone Scale to make it easier to find more relevant results — for instance, if you search for “bridal makeup,” you’ll see an option to filter by skin tone so you can refine to results that meet your needs.

81. Oh, and the Monk Skin Tone Scale was used to evaluate a new set of Real Tone filters for Photos that are designed to work well across skin tones. These filters were created and tested in partnership with artists like Kennedi Carter and Joshua Kissi.

82. We’re releasing LaMDA 2 as part of the AI Test Kitchen, a new space to learn, improve, and innovate responsibly on this technology together.

83. PaLM is a new language model that can solve complex math word problems, and even explain its thought process, step-by-step.

84. Nest Hub Max’s new Look and Talk feature uses six machine learning models to process more than 100 signals in real time to detect whether you’re intending to make eye contact with your device and talk to Google Assistant, rather than just giving it a passing glance.

85. We recently launched multisearch in the Google app, which lets you search by taking a photo and asking a question at the same time. At I/O, we announced that later this year, you'll be able to take a picture or screenshot and add "near me" to get local results from restaurants, retailers and more.

86. We introduced you to an advancement called “scene exploration,” where in the future, you’ll be able to use multisearch to pan your camera and instantly glean insights about multiple objects in a wider scene.

Privacy, security and information

A GIF that shows someone’s Google account with a yellow alert icon, flagging recommended actions they should take to secure their account.

87. We’ve expanded our support for Project Shield to protect the websites of 200+ Ukrainian government agencies, news outlets and more.

88. Account Safety Status will add a simple yellow alert icon to flag actions you should take to secure your Google Account.

89. Phishing protections in Google Workspace are expanding to Docs, Slides and Sheets.

90. My Ad Center is now giving you even more control over the ads you see on YouTube, Search, and your Discover feed.

91. Virtual cards are coming to Chrome and Android this summer, adding an additional layer of security and eliminating the need to enter certain card details at checkout.

92. In the coming months, you’ll be able to request removal of Google Search results that have your contact info with an easy-to-use tool.

93. We announced Protected Computing, a toolkit that helps minimize your data footprint, de-identify your data and restrict access to your sensitive data.

94. On-device encryption is now available for Google Password Manager.

95. We’re continuing to auto-enroll people in 2-Step Verification to reduce phishing risks.

What else?!

Illustration of a black one-story building with large windows. Inside are people walking around wooden tables and white walls containing Google hardware products. There is a Google Store logo on top of the building.

96. A new Google Store is opening in Williamsburg.

97. This is our first “neighborhood store” — it’s in a more intimate setting that highlights the community. You can find it at 134 N 6th St., opening on June 16.

98. The store will feature an installation by Brooklyn-based artist Olalekan Jeyifous.

99. Visitors there can picture everyday life with Google products through interactive displays that show how our hardware and services work together, and even get hands-on help with devices from Google experts.

100. We showed a prototype of what happens when we bring technologies like transcription and translation to your line of sight.

Google I/O 2022: Advancing knowledge and computing


Nearly 24 years ago, Google started with two graduate students, one product, and a big mission: to organize the world’s information and make it universally accessible and useful. In the decades since, we’ve been developing our technology to deliver on that mission.

The progress we've made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all. And once a year — on my favorite day of the year :) — we share an update on how it’s going at Google I/O.

Today, I talked about how we’re advancing two fundamental aspects of our mission — knowledge and computing — to create products that are built to help. It’s exciting to build these products; it’s even more exciting to see what people do with them.

Thank you to everyone who helps us do this work, and most especially our Googlers. We are grateful for the opportunity.

- Sundar

Editor’s note: Below is an edited transcript of Sundar Pichai's keynote address during the opening of today's Google I/O Developers Conference.

Hi, everyone, and welcome. Actually, let’s make that welcome back! It’s great to return to Shoreline Amphitheatre after three years away. To the thousands of developers, partners and Googlers here with us, it’s great to see all of you. And to the millions more joining us around the world — we’re so happy you’re here, too.

Last year, we shared how new breakthroughs in some of the most technically challenging areas of computer science are making Google products more helpful in the moments that matter. All this work is in service of our timeless mission: to organize the world's information and make it universally accessible and useful.

I'm excited to show you how we’re driving that mission forward in two key ways: by deepening our understanding of information so that we can turn it into knowledge; and advancing the state of computing, so that knowledge is easier to access, no matter who or where you are.

Today, you'll see how progress on these two parts of our mission ensures Google products are built to help. I’ll start with a few quick examples. Throughout the pandemic, Google has focused on delivering accurate information to help people stay healthy. Over the last year, people used Google Search and Maps to find where they could get a COVID vaccine nearly two billion times.

A visualization of Google’s flood forecasting system, with three 3D maps stacked on top of one another, showing landscapes and weather patterns in green and brown colors. The maps are floating against a gray background.

Google’s flood forecasting technology sent flood alerts to 23 million people in India and Bangladesh last year.

We’ve also expanded our flood forecasting technology to help people stay safe in the face of natural disasters. During last year’s monsoon season, our flood alerts notified more than 23 million people in India and Bangladesh. And we estimate this supported the timely evacuation of hundreds of thousands of people.

In Ukraine, we worked with the government to rapidly deploy air raid alerts. To date, we’ve delivered hundreds of millions of alerts to help people get to safety. In March I was in Poland, where millions of Ukrainians have sought refuge. Warsaw’s population has increased by nearly 20% as families host refugees in their homes, and schools welcome thousands of new students. Nearly every Google employee I spoke with there was hosting someone.

Adding 24 more languages to Google Translate

In countries around the world, Google Translate has been a crucial tool for newcomers and residents trying to communicate with one another. We’re proud of how it’s helping Ukrainians find a bit of hope and connection until they are able to return home again.

Two boxes, one showing a question in English — “What’s the weather like today?” — the other showing its translation in Quechua. There is a microphone symbol below the English question and a loudspeaker symbol below the Quechua answer.

With machine learning advances, we're able to add languages like Quechua to Google Translate.

Real-time translation is a testament to how knowledge and computing come together to make people's lives better. More people are using Google Translate than ever before, but we still have work to do to make it universally accessible. There’s a long tail of languages that are underrepresented on the web today, and translating them is a hard technical problem. That’s because translation models are usually trained with bilingual text — for example, the same phrase in both English and Spanish. However, there's not enough publicly available bilingual text for every language.

So with advances in machine learning, we’ve developed a monolingual approach where the model learns to translate a new language without ever seeing a direct translation of it. By collaborating with native speakers and institutions, we found these translations were of sufficient quality to be useful, and we'll continue to improve them.

A list of the 24 new languages Google Translate now has available.

We’re adding 24 new languages to Google Translate.

Today, I’m excited to announce that we’re adding 24 new languages to Google Translate, including the first indigenous languages of the Americas. Together, these languages are spoken by more than 300 million people. Breakthroughs like this are powering a radical shift in how we access knowledge and use computers.

Taking Google Maps to the next level

So much of what’s knowable about our world goes beyond language — it’s in the physical and geospatial information all around us. For more than 15 years, Google Maps has worked to create rich and useful representations of this information to help us navigate. Advances in AI are taking this work to the next level, whether it’s expanding our coverage to remote areas, or reimagining how to explore the world in more intuitive ways.

An overhead image of a map of a dense urban area, showing gray roads cutting through clusters of buildings outlined in blue.

Advances in AI are helping to map remote and rural areas.

Around the world, we’ve mapped around 1.6 billion buildings and over 60 million kilometers of roads to date. Some remote and rural areas have previously been difficult to map, due to scarcity of high-quality imagery and distinct building types and terrain. To address this, we’re using computer vision and neural networks to detect buildings at scale from satellite images. As a result, we have increased the number of buildings on Google Maps in Africa by 5X since July 2020, from 60 million to nearly 300 million.

We’ve also doubled the number of buildings mapped in India and Indonesia this year. Globally, over 20% of the buildings on Google Maps have been detected using these new techniques. We’ve gone a step further, and made the dataset of buildings in Africa publicly available. International organizations like the United Nations and the World Bank are already using it to better understand population density, and to provide support and emergency assistance.

Immersive view in Google Maps fuses together aerial and street level images.

We’re also bringing new capabilities into Maps. Using advances in 3D mapping and machine learning, we’re fusing billions of aerial and street level images to create a new, high-fidelity representation of a place. These breakthrough technologies are coming together to power a new experience in Maps called immersive view: it allows you to explore a place like never before.

Let’s go to London and take a look. Say you’re planning to visit Westminster with your family. You can get into this immersive view straight from Maps on your phone, and you can pan around the sights… here’s Westminster Abbey. If you’re thinking of heading to Big Ben, you can check if there's traffic, how busy it is, and even see the weather forecast. And if you’re looking to grab a bite during your visit, you can check out restaurants nearby and get a glimpse inside.

What's amazing is that this isn't a drone flying in the restaurant — we use neural rendering to create the experience from images alone. And Google Cloud Immersive Stream allows this experience to run on just about any smartphone. This feature will start rolling out in Google Maps for select cities globally later this year.

Another big improvement to Maps is eco-friendly routing. Launched last year, it shows you the most fuel-efficient route, giving you the choice to save money on gas and reduce carbon emissions. Eco-friendly routes have already rolled out in the U.S. and Canada — and people have used them to travel approximately 86 billion miles, helping save an estimated half million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road.

Still image of eco-friendly routing on Google Maps — a 53-minute driving route in Berlin is pictured, with text below the map showing it will add three minutes but save 18% more fuel.

Eco-friendly routes will expand to Europe later this year.

I’m happy to share that we’re expanding this feature to more places, including Europe later this year. In this Berlin example, you could reduce your fuel consumption by 18% taking a route that’s just three minutes slower. These small decisions have a big impact at scale. With the expansion into Europe and beyond, we estimate carbon emission savings will double by the end of the year.

And we’ve added a similar feature to Google Flights. When you search for flights between two cities, we also show you carbon emission estimates alongside other information like price and schedule, making it easy to choose a greener option. These eco-friendly features in Maps and Flights are part of our goal to empower 1 billion people to make more sustainable choices through our products, and we’re excited about the progress here.

New YouTube features to help people easily access video content

Beyond Maps, video is becoming an even more fundamental part of how we share information, communicate, and learn. Often when you come to YouTube, you are looking for a specific moment in a video and we want to help you get there faster.

Last year we launched auto-generated chapters to make it easier to jump to the part you’re most interested in.

This is also great for creators because it saves them time making chapters. We’re now applying multimodal technology from DeepMind. It simultaneously uses text, audio and video to auto-generate chapters with greater accuracy and speed. With this, we now have a goal to 10X the number of videos with auto-generated chapters, from eight million today, to 80 million over the next year.

Often the fastest way to get a sense of a video’s content is to read its transcript, so we’re also using speech recognition models to transcribe videos. Video transcripts are now available to all Android and iOS users.

Animation showing a video being automatically translated. Then text reads "Now available in sixteen languages."

Auto-translated captions on YouTube.

Next up, we’re bringing auto-translated captions on YouTube to mobile, which means viewers can now auto-translate video captions in 16 languages, and creators can grow their global audience. We’ll also be expanding auto-translated captions to Ukrainian YouTube content next month, part of our larger effort to increase access to accurate information about the war.

Helping people be more efficient with Google Workspace

Just as we’re using AI to improve features in YouTube, we’re building it into our Workspace products to help people be more efficient. Whether you work for a small business or a large institution, chances are you spend a lot of time reading documents. Maybe you’ve felt that wave of panic when you realize you have a 25-page document to read ahead of a meeting that starts in five minutes.

At Google, whenever I get a long document or email, I look for a TL;DR at the top — TL;DR is short for “Too Long, Didn’t Read.” And it got us thinking, wouldn’t life be better if more things had a TL;DR?

That’s why we’ve introduced automated summarization for Google Docs. Using one of our machine learning models for text summarization, Google Docs will automatically parse the words and pull out the main points.

This marks a big leap forward for natural language processing. Summarization requires understanding of long passages, information compression and language generation, which used to be outside of the capabilities of even the best machine learning models.
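Google Docs uses a large abstractive model for this, so as a loose illustration of the summarization task itself, here is a toy extractive sketch that scores sentences by word frequency. This is emphatically not the production approach, just a minimal way to see what "pulling out the main points" means mechanically:

```python
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 2) -> str:
    """Toy extractive summarizer: keep the highest-scoring sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        words = re.findall(r"[a-z']+", sentence.lower())
        # Average corpus frequency of the sentence's words.
        return sum(freq[w] for w in words) / max(len(words), 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Emit the chosen sentences in their original document order.
    return " ".join(s for s in sentences if s in top)
```

An abstractive model like the one in Docs goes much further: it generates new sentences rather than selecting existing ones, which is exactly the language-generation capability described above.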

And docs are only the beginning. We’re launching summarization for other products in Workspace. It will come to Google Chat in the next few months, providing a helpful digest of chat conversations, so you can jump right into a group chat or look back at the key highlights.

Animation showing summary in Google Chat

We’re bringing summarization to Google Chat in the coming months.

And we’re working to bring transcription and summarization to Google Meet as well so you can catch up on some important meetings you missed.

Visual improvements on Google Meet

Of course there are many moments where you really want to be in a virtual room with someone. And that’s why we continue to improve audio and video quality, inspired by Project Starline. We introduced Project Starline at I/O last year. And we’ve been testing it across Google offices to get feedback and improve the technology for the future. And in the process, we’ve learned some things that we can apply right now to Google Meet.

Starline inspired machine learning-powered image processing that automatically improves your image quality in Google Meet. And it works on all types of devices, so you look your best wherever you are.

An animation of a man looking directly at the camera then waving and smiling. A white line sweeps across the screen, adjusting the image quality to make it brighter and clearer.

Machine learning-powered image processing automatically improves image quality in Google Meet.

We’re also bringing studio quality virtual lighting to Meet. You can adjust the light position and brightness, so you’ll still be visible in a dark room or sitting in front of a window. We’re testing this feature to ensure everyone looks like their true selves, continuing the work we’ve done with Real Tone on Pixel phones and the Monk Scale.

These are just some of the ways AI is improving our products: making them more helpful, more accessible, and delivering innovative new features for everyone.

Gif shows a phone camera pointed towards a rack of shelves, generating helpful information about food items. Text on the screen shows the words ‘dark’, ‘nut-free’ and ‘highly-rated’.

Today at I/O Prabhakar Raghavan shared how we’re helping people find helpful information in more intuitive ways on Search.

Making knowledge accessible through computing

We’ve talked about how we’re advancing access to knowledge as part of our mission: from better language translation to improved Search experiences across images and video, to richer explorations of the world using Maps.

Now we’re going to focus on how we make that knowledge even more accessible through computing. The journey we’ve been on with computing is an exciting one. Every shift, from desktop to the web to mobile to wearables and ambient computing, has made knowledge more useful in our daily lives.

As helpful as our devices are, we’ve had to work pretty hard to adapt to them. I’ve always thought computers should be adapting to people, not the other way around. We continue to push ourselves to make progress here.

Here’s how we’re making computing more natural and intuitive with the Google Assistant.

Introducing LaMDA 2 and AI Test Kitchen

Animation shows demos of how LaMDA can converse on any topic and how AI Test Kitchen can help create lists.

A demo of LaMDA, our generative language model for dialogue applications, and the AI Test Kitchen.

We're continually working to advance our conversational capabilities. Conversation and natural language processing are powerful ways to make computers more accessible to everyone. And large language models are key to this.

Last year, we introduced LaMDA, our generative language model for dialogue applications that can converse on any topic. Today, we are excited to announce LaMDA 2, our most advanced conversational AI yet.

We are at the beginning of a journey to make models like these useful to people, and we feel a deep responsibility to get it right. To make progress, we need people to experience the technology and provide feedback. We opened LaMDA up to thousands of Googlers, who enjoyed testing it and seeing its capabilities. This yielded significant quality improvements, and led to a reduction in inaccurate or offensive responses.

That’s why we’ve made AI Test Kitchen. It’s a new way to explore AI features with a broader audience. Inside the AI Test Kitchen, there are a few different experiences. Each is meant to give you a sense of what it might be like to have LaMDA in your hands and use it for things you care about.

The first is called “Imagine it.” This demo tests if the model can take a creative idea you give it, and generate imaginative and relevant descriptions. These are not products, they are quick sketches that allow us to explore what LaMDA can do with you. The user interfaces are very simple.

Say you’re writing a story and need some inspirational ideas. Maybe one of your characters is exploring the deep ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You can ask LaMDA to imagine what kinds of creatures might live there. Remember, we didn’t hand-program the model for specific topics like submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost any topic: Saturn’s rings or even being on a planet made of ice cream.

Staying on topic is a challenge for language models. Say you’re building a learning experience — you want it to be open-ended enough to allow people to explore where curiosity takes them, but stay safely on topic. Our second demo tests how LaMDA does with that.

In this demo, we’ve primed the model to focus on the topic of dogs. It starts by generating a question to spark conversation, “Have you ever wondered why dogs love to play fetch so much?” And if you ask a follow-up question, you get an answer with some relevant details: it’s interesting, it thinks it might have something to do with the sense of smell and treasure hunting.

You can take the conversation anywhere you want. Maybe you’re curious about how smell works and you want to dive deeper. You’ll get a unique response for that too. No matter what you ask, it will try to keep the conversation on the topic of dogs. If I start asking about cricket, which I probably would, the model brings the topic back to dogs in a fun way.

This challenge of staying on-topic is a tricky one, and it’s an important area of research for building useful applications with language models.

These experiences show the potential of language models to one day help us with things like planning, learning about the world, and more.

Of course, there are significant challenges to solve before these models can truly be useful. While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses. That’s why we are inviting feedback in the app, so people can help report problems.

We will be doing all of this work in accordance with our AI Principles. Our process will be iterative, opening up access over the coming months, and carefully assessing feedback with a broad range of stakeholders — from AI researchers and social scientists to human rights experts. We’ll incorporate this feedback into future versions of LaMDA, and share our findings as we go.

Over time, we intend to continue adding other emerging areas of AI into AI Test Kitchen. You can learn more at: g.co/AITestKitchen.

Advancing AI language models

LaMDA 2 has incredible conversational capabilities. To explore other aspects of natural language processing and AI, we recently announced a new model. It’s called Pathways Language Model, or PaLM for short. It’s our largest model to date, with 540 billion parameters.

PaLM demonstrates breakthrough performance on many natural language processing tasks, such as generating code from text, answering a math word problem, or even explaining a joke.

It achieves this through greater scale. And when we combine that scale with a new technique called chain-of-thought prompting, the results are promising. Chain-of-thought prompting allows us to describe multi-step problems as a series of intermediate steps.

Let’s take an example of a math word problem that requires reasoning. Normally, you prompt the model with a sample question and answer, and then start asking it questions. In this case: How many hours are in the month of May? As you can see, the model didn’t quite get it right.

In chain-of-thought prompting, we give the model a question-answer pair, but this time, an explanation of how the answer was derived. Kind of like when your teacher gives you a step-by-step example to help you understand how to solve a problem. Now, if we ask the model again — how many hours are in the month of May — or other related questions, it actually answers correctly and even shows its work.

There are two boxes below a heading saying ‘chain-of-thought prompting’. A box headed ‘input’ guides the model through answering a question about how many tennis balls a person called Roger has. The output box shows the model correctly reasoning through and answering a separate question (‘how many hours are in the month of May?’)

Chain-of-thought prompting leads to better reasoning and more accurate answers.

Chain-of-thought prompting increases accuracy by a large margin. This leads to state-of-the-art performance across several reasoning benchmarks, including math word problems. And we can do it all without ever changing how the model is trained.
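The keynote doesn't show the literal prompts, but the mechanics are simple enough to sketch: a chain-of-thought prompt differs from a standard few-shot prompt only in that the exemplar's answer spells out its reasoning before the final result. The exemplar text below is illustrative, not taken from PaLM's actual prompts:

```python
def build_prompt(exemplars, question):
    """Assemble a few-shot prompt from (question, answer) exemplar pairs."""
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    # The new question is left open for the model to complete.
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Standard prompting: the exemplar answer is just the final result.
standard = [("How many days are in two weeks?", "The answer is 14.")]

# Chain-of-thought prompting: the same exemplar, but the answer
# walks through the intermediate steps before the result.
cot = [("How many days are in two weeks?",
        "A week has 7 days. Two weeks is 2 * 7 = 14 days. The answer is 14.")]

prompt = build_prompt(cot, "How many hours are in the month of May?")
```

Because the exemplar demonstrates step-by-step work, the model tends to imitate that structure in its completion, which is what "shows its work" refers to above. The training of the model never changes; only the prompt does.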

PaLM is highly capable and can do so much more. For example, you might be someone who speaks a language that’s not well represented on the web today, which makes it hard to find information. That’s even more frustrating because the answer you are looking for is probably out there.

Let me show you an example in which we can help answer questions in a language like Bengali — spoken by a quarter billion people. Just like before, we prompt the model with two examples of questions in Bengali with both Bengali and English answers.

That’s it. Now we can start asking questions in Bengali: “What is the national song of Bangladesh?” The answer, by the way, is “Amar Sonar Bangla” — and PaLM got it right, too. This is not that surprising because you would expect that content to exist in Bengali.

You can also try something that is less likely to have related information in Bengali such as: “What are popular pizza toppings in New York City?” The model again answers correctly in Bengali. Though it probably just stirred up a debate amongst New Yorkers about how “correct” that answer really is.

What’s so impressive is that PaLM has never seen parallel sentences between Bengali and English. Nor was it ever explicitly taught to answer questions or translate at all! The model brought all of its capabilities together to answer questions correctly in Bengali. And we can extend the techniques to more languages and other complex tasks.

We're so optimistic about the potential for language models. One day, we hope we can answer questions on more topics in any language you speak, making knowledge even more accessible, in Search and across all of Google.

Introducing the world’s largest, publicly available machine learning hub

The advances we’ve shared today are possible only because of our continued innovation in our infrastructure. Recently we announced plans to invest $9.5 billion in data centers and offices across the U.S.

One of our state-of-the-art data centers is in Mayes County, Oklahoma. I’m excited to announce that we are launching the world’s largest publicly available machine learning hub there for our Google Cloud customers.

Still image of a data center with Oklahoma map pin on bottom left corner.

One of our state-of-the-art data centers in Mayes County, Oklahoma.

This machine learning hub has eight Cloud TPU v4 pods, custom-built on the same networking infrastructure that powers Google’s largest neural models. They provide nearly nine exaflops of computing power in aggregate — bringing our customers an unprecedented ability to run complex models and workloads. We hope this will fuel innovation across many fields, from medicine to logistics, sustainability and more.

And speaking of sustainability, this machine learning hub is already operating at 90% carbon-free energy. This is helping us make progress on our goal to become the first major company to operate all of our data centers and campuses globally on 24/7 carbon-free energy by 2030.

Even as we invest in our data centers, we are working to innovate on our mobile platforms so more processing can happen locally on device. Google Tensor, our custom system on a chip, was an important step in this direction. It’s already running on Pixel 6 and Pixel 6 Pro, and it brings our AI capabilities — including the best speech recognition we’ve ever deployed — right to your phone. It’s also a big step forward in making those devices more secure. Combined with Android’s Private Compute Core, it can run data-powered features directly on device so that it’s private to you.

People turn to our products every day for help in moments big and small. Core to making this possible is protecting your private information each step of the way. Even as technology grows increasingly complex, we keep more people safe online than anyone else in the world, with products that are secure by default, private by design and that put you in control.

We also spent time today sharing updates to platforms like Android. They’re delivering access, connectivity, and information to billions of people through their smartphones and other connected devices like TVs, cars and watches.

And we shared our new Pixel Portfolio, including the Pixel 6a, Pixel Buds Pro, Google Pixel Watch, Pixel 7, and Pixel tablet all built with ambient computing in mind. We’re excited to share a family of devices that work better together — for you.

The next frontier of computing: augmented reality

Today we talked about all the technologies that are changing how we use computers and access knowledge. We see devices working seamlessly together, exactly when and where you need them and with conversational interfaces that make it easier to get things done.

Looking ahead, there's a new frontier of computing, which has the potential to extend all of this even further, and that is augmented reality. At Google, we have been heavily invested in this area. We’ve been building augmented reality into many Google products, from Google Lens to multisearch, scene exploration, and Live and immersive views in Maps.

These AR capabilities are already useful on phones and the magic will really come alive when you can use them in the real world without the technology getting in the way.

That potential is what gets us most excited about AR: the ability to spend time focusing on what matters in the real world, in our real lives. Because the real world is pretty amazing!

It’s important we design in a way that is built for the real world — and doesn’t take you away from it. And AR gives us new ways to accomplish this.

Let’s take language as an example. Language is just so fundamental to connecting with one another. And yet, understanding someone who speaks a different language, or trying to follow a conversation if you are deaf or hard of hearing can be a real challenge. Let's see what happens when we take our advancements in translation and transcription and deliver them in your line of sight in one of the early prototypes we’ve been testing.

You can see it in their faces: the joy that comes with speaking naturally to someone. That moment of connection. To understand and be understood. That’s what our focus on knowledge and computing is all about. And it’s what we strive for every day, with products that are built to help.

Each year we get a little closer to delivering on our timeless mission. And we still have so much further to go. At Google, we genuinely feel a sense of excitement about that. And we are optimistic that the breakthroughs you just saw will help us get there. Thank you to all of the developers, partners and customers who joined us today. We look forward to building the future with all of you.

Introducing the Google Meet Live Sharing SDK

Posted by Mai Lowe, Product Manager & Ken Cenerelli, Technical Writer

The Google Meet Live Sharing SDK is in preview. To use the SDK, developers can apply for access through our Early Access Program.

Today at Google I/O 2022, we announced new functionality for app developers to leverage the Google Meet video conferencing product through our new Meet Live Sharing SDK. Users can now come together and share experiences with each other inside an app, such as streaming a TV show, queuing up videos to watch on YouTube, collaborating on a music playlist, joining in a dance party, or working out together through Google Meet. This SDK joins the large set of offerings available to developers under the Google Workspace Platform.

Partners like YouTube, Heads Up!, UNO!™ Mobile, and Kahoot! are already integrating our SDK into their applications so that their users can participate in these new, shared interactive experiences later this year.

Supports multiple use cases

The Live Sharing SDK allows developers to sync content across devices in real time and incorporate Meet into their apps, enabling them to bring new, fun, and genuinely connecting experiences to their users. It’s also a great way to reach new audiences as current users can introduce your app to friends and family.

The SDK supports two key use cases:
  • Co-Watching—Syncs streaming app content across devices in real time, allowing users to take turns sharing videos and playing the latest hits from their favorite artists. Users can share controls such as starting and pausing a video, or selecting new content in the app.
  • Co-Doing—Syncs arbitrary app content, allowing users to get together to perform an activity like playing video games or following the same workout routine.

The co-watching and co-doing APIs are independent but can be used in parallel with each other.
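The SDK itself is Android-only and still in preview, so rather than guess at its exact surface, here is a purely illustrative Python sketch (none of these names come from the SDK) of the playback state a co-watching session has to keep in sync across participants:

```python
import time
from dataclasses import dataclass

@dataclass
class CoWatchState:
    """Minimal shared playback state for a co-watching session."""
    media_id: str      # what everyone is watching
    position_s: float  # playback position at the last update
    playing: bool      # whether playback was running at the last update
    updated_at: float  # wall-clock time of the last update

    def position_now(self, now=None):
        """Project the current position forward from the last synced update."""
        now = time.time() if now is None else now
        return self.position_s + ((now - self.updated_at) if self.playing else 0.0)

def apply_remote_update(local, remote):
    """Last-writer-wins merge for an update from another participant."""
    return remote if remote.updated_at > local.updated_at else local
```

Conceptually, each client broadcasts a state update whenever the user plays, pauses, seeks, or picks new content, and merges incoming updates from the other participants; the real SDK handles that transport over the Meet session so apps only have to apply the state.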

Example workflow illustration of a user starting live sharing within an app using the Live Sharing SDK.

Get started

To learn more, watch our I/O 2022 session on the Google Meet Live Sharing SDK and check out the documentation for the Android version.

If you want to try out the SDK, developers can apply for access through our Early Access Program.

What’s next?

We’re also continuing to improve the SDK and working to build the video-content experience you want to bring to your users. For more announcements like this and for info about the Google Workspace Platform and APIs, subscribe to our developer newsletter.

Now in Developer Preview: Create Spaces and Add Members with the Google Chat API

Posted by Mike Rhemtulla, Product Manager & Charles Maxson, Developer Advocate

The Google Chat API updates are in developer preview. To use the API, developers can apply for access through our Google Workspace Developer Preview Program.

In Google Chat, Spaces serve as a central place for team collaboration—instead of starting an email chain or scheduling a meeting, teams can move conversations and collaboration into a space, giving everybody the ability to stay connected, reference team or project info and revisit work asynchronously.

Programmatically create and populate Google Chat spaces

We are pleased to announce that you can programmatically create new Spaces and add members on behalf of users, through the Google Workspace Developer Preview Program via the Google Chat API.

These latest additions to the Chat API unlock some sought-after scenarios for developers looking to add new dimensions to how they leverage Chat. For example, organizations that need to create Spaces based on various business needs can now do so programmatically. This opens the door to Chat solutions that build out Spaces modeled on new teams, projects, working groups, or whatever the specific use case may be that benefits from automatically created Spaces.
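As a rough sketch of what the two new methods look like at the REST level, the helpers below build the request URL and JSON body for creating a space and adding a member. Endpoint paths and field names follow the public Chat API reference, but this is a sketch: preview behavior may differ, and actually sending these requests requires OAuth credentials, which are omitted here.

```python
CHAT_API = "https://chat.googleapis.com/v1"

def create_space_request(display_name: str) -> tuple[str, dict]:
    """Build the spaces.create request: POST /v1/spaces."""
    url = f"{CHAT_API}/spaces"
    body = {"displayName": display_name, "spaceType": "SPACE"}
    return url, body

def add_member_request(space_name: str, user_id: str) -> tuple[str, dict]:
    """Build the spaces.members.create request:
    POST /v1/{space_name}/members, where space_name is the
    resource name returned by spaces.create (e.g. 'spaces/AAAA1234')."""
    url = f"{CHAT_API}/{space_name}/members"
    body = {"member": {"name": f"users/{user_id}", "type": "HUMAN"}}
    return url, body
```

In practice you would call these endpoints through a Google API client library with user credentials, since the methods act on behalf of a user.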

Coming soon, example from an early developer preview partner

One of our developer preview partners, PagerDuty, is already leveraging the API in their upcoming release of PagerDuty for Google Chat. The app will let users of their incident management solution quickly assemble the right team members around an incident. PagerDuty for Chat will allow the incident team to isolate and focus on the problem at hand without the distraction of setting up a new space, and without distracting people in the current space who aren’t part of the resolution team for a specific incident. All of this happens seamlessly through PagerDuty for Chat as part of the natural flow of working in Google Chat.

Example of how a Chat app with the new APIs can enable users to easily create new Spaces and add members to an incident.

Learn more and get started

As you can imagine, there are many use cases that show off the potential of what you can build with the Chat API and the new Create methods. Whether it’s creating Spaces with specified members or extending Chat apps that spawn new collaboration Spaces for help desk, HR, sales, customer support, or countless other scenarios, we encourage you to explore what you can do today.

How to get started: apply for access through the Google Workspace Developer Preview Program.


Building better products for new internet users

Since the launch of Google’s Next Billion Users (NBU) initiative in 2015, nearly 3 billion people worldwide have come online for the very first time. In the next four years, we expect another 1.2 billion new internet users, and building for and with these users allows us to build better for the rest of the world.

For this year’s I/O, the NBU team has created sessions that will showcase how organizations can address representation bias in data, learn how new users experience the web, and understand Africa’s fast-growing developer ecosystem to drive digital inclusion and equity in the world around us.

We invite you to join these developers sessions and hear perspectives on how to build for the next billion users. Together, we can make technology helpful, relevant, and inclusive for people new to the internet.

Session: Building for everyone: the importance of representative data

Mike Knapp, Hannah Highfill and Emila Yang from Google’s Next Billion Users team, in partnership with Ben Hutchinson from Google’s Responsible AI team, will be leading a session on how to crowdsource data to build more inclusive products.

Data gathering is often the most overlooked aspect of AI, yet the data used for machine learning directly impacts a project’s success and lasting potential. Many organizations—Google included—struggle to gather the right datasets required to build inclusively and equitably for the next billion users. “We are going to talk about a very experimental product and solution to building more inclusive technology,” says Knapp of his session. “Google is testing a paid crowdsourcing app [Task Mate] to better serve underrepresented communities. This tool enables developers to reach ‘crowds’ in previously underrepresented regions. It is an incredible step forward in the mission to create more inclusive technology.”

Bookmark this session to your I/O developer profile.

Session: What we can learn from the internet’s newest users

“The first impression that your product makes matters,” says Nicole Naurath, Sr. UX Researcher - Next Billion Users at Google. “It can either spark curiosity and engagement, or confuse your audience.”

Every day, thousands of people come online for the first time. Their experience can be directly shaped by how familiar they are with technology. People with limited digital experience, or novice internet users, experience the web differently, and developers are often not used to building for them. Design elements such as images, icons, and colors play a key role in the digital experience. If images are not relatable, icons are irrelevant, or colors are not grounded in cultural context, the experience can confuse anyone, especially someone new to the internet.

Nicole Naurath and Neha Malhotra, from Google’s Next Billion Users team, will lead a session on what we can learn from the internet’s newest users, exploring how novice users experience the web and sharing a framework for evaluating products that work for them.

Bookmark this session to your I/O developer profile.

Session: Africa’s booming developer ecosystem

Software developers are the catalyst for digital transformation in Africa. They empower local communities, spark growth for businesses, and drive innovation in a continent which more than 1.3 billion people call home. Demand for African developers reached an all-time high last year, driven by both local and remote opportunities, and is growing even faster than the continent's developer population.

Andy Volk and John Kimani from the Developer and Startup Ecosystem team in Sub-Saharan Africa will share findings from the Africa Developer Ecosystem 2021 report.

In their words, “This session is for anyone who wants to find out more about how African developers are building for the world or who is curious to find out more about this fast-growing opportunity on the continent. We are presenting trends, case studies and new research from Google and its partners to illustrate how people and organizations are coming together to support the rapid growth of the developer ecosystem.”

Bookmark this session to your I/O developer profile.

To learn more about Google’s Next Billion Users initiative, visit nextbillionusers.google