
How this engineer got the career boost she needed

Saskia Bobinska was excited when she came across the application for the Women Developers Academy (WDA) program in Europe. After spending two years in isolation due to the pandemic and multiple back surgeries, she was looking for a way to advance her career in tech. She thought the WDA — a global program run by Women Techmakers to help technical women become better speakers and bring more diversity to tech stages — would be a great first step towards this goal.

One of the first assignments was to write her own speaker bio. As a self-taught frontend developer who uses JavaScript, NextJS and React, she felt a bit hesitant to share her story. “To be honest, I thought that my story was not important enough,” she tells us. But after a few WDA training sessions and encouragement from her mentors, business strategist Kamila Wosińska, Dart and Flutter Google Developer Expert (GDE) Majid Hajian and Web Technologies GDE Anuradha Kumarii, Saskia’s confidence was boosted. She excitedly set out to write a LinkedIn post about a mobile app on which she had been working.

Not long after the post went live, Saskia was approached by one of the companies she had mentioned. A few meetings led to interviews, and within a few months Saskia was offered a job on their team. “I never would’ve thought that this was possible when I started coding three years ago,” Saskia says.

Looking back on the experience, Saskia is pleasantly surprised by how quickly a social media post turned into a new engineering role. “I’d have given myself two years before applying to Sanity, but WDA accelerated that,” she says. “I found my voice within the tech industry because of the community and WDA, which gave me a push toward it.”

Going through the WDA also helped Saskia realize that her “soft” skills — communication, leadership and confidence — are just as important as her hard skills for excelling in tech. “Having the ability to go out and speak gave me an approach to finding a more intermediate-level engineering role,” she says. “I have hard skills, but my soft skills are what brought me to this company that shares my priorities, because they knew who I was.”

She also recognized the importance of having a supportive community. During the WDA, she was excited to see women supporting each other so enthusiastically within the male-dominated tech industry. “Emotional support and empathy, especially in a professional environment, help you stay in balance and enable you to do your best,” she says. “Always help and support others, because safe communities are not just found, they are made.”

Learn more about Women Techmakers and become a member to stay up to date on all our initiatives, including the Women Developers Academy.

A bigger piece of the pi: Finding the 100-trillionth digit

The 100-trillionth decimal place of π (pi) is 0. A few months ago, on an average Tuesday morning in March, I sat down with my coffee to check on the program that had been running a calculation from my home office for 157 days. It was finally time — I was going to be the first and only person to ever see the number. The results were in and it was a new record: We’d calculated the most digits of π ever — 100 trillion to be exact.

Calculating π — or finding as many digits of it as possible — is a project that mathematicians, scientists and engineers around the world have worked on for thousands of years, myself included. The well-known approximation 3.14 is believed to have been found by Archimedes around the year 250 BCE. Computer scientist Donald Knuth wrote "human progress in calculation has traditionally been measured by the number of decimal digits of π" in his book “The Art of Computer Programming” (Dr. Knuth even wrote about me in the book). In the past, people determined the digits of pi manually — that is, without calculators or computers. Today, we use computers to do this calculation, and each new record tells us how much faster they’ve become. It’s one of the few ways to measure progress in computing across centuries, including from before the invention of electronic computers.

An illustration of pie crust stretching from the Earth to the moon. Above it reads: "100 trillion inches of pie crust stretches from Earth to the moon and back ~3,304 times."

As a developer advocate at Google Cloud, part of my job is to create demos and run experiments that show the cool things developers can do with our platform; one of those things, you guessed it, is using a program to calculate digits of pi. Breaking the record of π was my childhood dream, so a few years ago I decided to try using Google Cloud to take on this project. I also wanted to see how much data processing these computers could handle. In 2019, I became the third woman to break this world record, with a π calculation of 31.4 trillion digits.

But I couldn’t stop there, and I decided to try again. And now we have a new record of 100 trillion decimal places. This shows us, again, just how far computers have come: in three years, the computers calculated roughly three times as many digits. What’s more, in 2019, it took the computers 121 days to get to 31.4 trillion digits. This time, it took them 157 days to get to 100 trillion, a rate of about 0.64 trillion digits per day versus 0.26 in 2019, making the new run more than twice as fast as the first project.

An illustrated chart showing how quickly we reached the new pi record compared to the last time in 2019.

But let’s look back farther than my 2019 record: the first world record for computing π with an electronic computer was set in 1949, at 2,037 decimal places. It took humanity thousands of years to reach the two-thousandth decimal place, and we’ve reached the 100-trillionth just 73 years later. Not only are we adding more digits than all previous records combined, we’re also spending less and less time hitting new milestones.

An illustration of a person holding a phone and tapping on the screen. Above it reads: "The 82,000 terabytes of data processed during calculations is the equivalent of 160,156 Pixel 6 Pros with max storage (512 GB)."

I used the same tools and techniques as I did in 2019 (for more details, we have a technical explanation in the Google Cloud blog), but I was able to hit the new number more quickly thanks to Google Cloud’s infrastructure improvements in compute, storage and networking. One of the most remarkable phenomena in computer science is that every year we have made incremental progress, and in return we have reaped exponentially faster compute speeds. This is what’s made a lot of the recent computer-assisted research possible in areas like climate science and astronomy.
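The Google Cloud blog post covers the technical details, but for a taste of the underlying math, here is a small, purely illustrative Python sketch of the Chudnovsky series, the kind of rapidly converging formula that modern record attempts build on; each term of the series contributes roughly 14 digits. To be clear, this toy is nothing like the record-setting program, which relied on heavily optimized software and months of large-scale cloud infrastructure.

```python
# Illustrative only: sum the Chudnovsky series using Python's decimal module.
# Each term adds about 14 correct digits of pi.
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    getcontext().prec = digits + 10        # extra guard digits for rounding
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3    # exact integer update of the term
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[:digits + 2]         # "3." plus the requested digits

print(chudnovsky_pi(50))                   # 3.14159265358979323846...
```

Even this naive version produces thousands of digits in seconds on a laptop; the distance between it and a 100-trillion-digit run is precisely the progress in algorithms, software and infrastructure that these records are meant to measure.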

An illustration of a person with a megaphone. Above it reads: "If you read all 100 trillion digits out loud, one second at a time, it would take you 3,170,929 years to read the whole thing."

Back when I hit that record in 2019 — and again now — many people asked "what's next?" And I’m happy to say that the scientific community just keeps counting. There’s no end to π: it’s an irrational, transcendental number, meaning its digits never end and it isn’t the root of any polynomial with integer coefficients. Plus, we don't see an end to the evolution of computing. Like the introduction of electronic computers in the 1940s and the discovery of faster algorithms in the 1960s-80s, we could still see another fundamental shift that keeps the momentum going.

So, like I said: I’ll just keep counting.

Why .app and .dev are perfect homes for developer tools

Back in the day, I remember when the only game in town was .com, and it was hard to find a short, memorable domain name that didn’t cost an arm and a leg. Fast forward to today, and we have a wealth of descriptive top-level domains (TLDs) to choose from. Not only do these TLDs offer better availability of high-quality names, they also do a great job of signaling purpose and content. So it’s no surprise the developer community has embraced them.

The Google Registry team showcases some of these developers in their ongoing #MyDomain video series, which highlights real-world examples of websites built on .app, .dev and .page. In these videos, developers share why they chose their domain and offer helpful tips for anyone who might be building their own website. Today, we’re sharing three new #MyDomain videos that feature teams using .dev and .app domain names to host their developer tools.

Netlify.app

Netlify offers web hosting and serverless backend services. Learn why they built their website on a .app domain.

Web.dev

Google launched web.dev to share best practices, case studies and how-tos for modern web development with the broader developer community.

Clerk.dev

Clerk handles user accounts and logins for websites so developers don’t have to. Learn why they built their website on a .dev domain.

As a developer, security is top of mind every time I create a web app. Fortunately, every .app, .dev and .page domain is automatically HTTPS-only from the moment of creation, which means one less security best practice to worry about when spinning up a new website.

If you’re feeling inspired or working on a new project, you can register your own domain name at get.app, get.dev or get.page.

Google Play helps indie games go further faster

Indie game developers are known for creating some of the most innovative titles to land on Google Play. It’s this creativity that captures the imagination of the more than 2.5 billion people using our platform each month.

At Google Play, it is our mission to help indie game developers reach their full potential, wherever they are in their journey.

This year, the Indie Games programs are back once again to help talented indie developers design, launch and grow high-quality games and reach new players. Find out more about how the 2022 Indie Games Accelerator & Festival helps developers go further faster.

Supercharge your growth with mentorship & live masterclasses

If you’re an indie developer early in your journey — whether you’re close to launching a new game or have recently launched a title — this high-impact program is designed for you.

With the help of our network of gaming experts, the Indie Game Accelerator provides education and mentorship for ambitious developers to help you build, launch and grow successfully.

Selected game studios will be invited to take part in the 10-week acceleration program starting in September 2022 as the Accelerator Class of 2022.

This highly tailored program for small game developers, spanning 78 eligible countries, includes a series of online masterclasses, talks and game development workshops. You’ll also get the chance to meet and connect with other developers from around the world who are looking to take their games to the next level.

Celebrating the top indie games in Europe, Japan & South Korea

If you're a passionate indie game developer and you have recently launched a high-quality game, enter your game to be showcased at the Indie Games Festival by Google Play.

Once again, we are hosting three international competitions in search of the most promising games from Japan, South Korea, and selected European countries, to celebrate the Top 20 indie games in each region.

The festival jury will consist of both gaming experts and Googlers, who are charged with finding creative indie games that are ready for the spotlight. As a finalist you will be able to join the Festival showcase and get your game discovered by top industry experts and players worldwide.

You can now enter your game in one of the Festival contests: Europe, Japan & South Korea.

For more updates and announcements about the Indie Games programs follow @GooglePlayBiz.

South African developers build web application to help local athletes

Posted by Aniedi Udo-Obong, Sub-Saharan Africa Regional Lead, Google Developer Groups

Lesego Ndlovu and Simon Mokgotlhoa have stayed friends since they were eight years old, trading Game Boy cartridges and playing soccer. They live three houses away from each other in Soweto, the biggest township in South Africa, with over one million residents. The two friends have always been fascinated by technology, and by the time the duo attended university, they wanted to start a business together that would also help their community.

Lesego Ndlovu and Simon Mokgotlhoa sitting at a desk on their computers

After teaching themselves to code and attending Google Developer Groups (GDG) events in Johannesburg, they built a prototype and launched a chapter of their own (GDG Soweto) to teach other new developers how to code and build technology careers.

Building an app to help their community

Lesego and Simon wanted to build an application that would help the talented soccer players in their community get discovered and recruited by professional soccer teams. To do that, they had to learn to code.

Lesego Ndlovu and Simon Mokgotlhoa holding their phones towards the screen showcasing the Ball Talent app

“We always played soccer, and we saw talented players not get discovered, so, given our interest in sports and passion for technology, we wanted to make something that could change that narrative,” Lesego says. “We watched videos on the Chrome Developers YouTube channel and learned HTML, CSS, and JavaScript, but we didn’t know how to make an app, deliver a product, or start a business. Our tech journey became a business journey. We learned about the code as the business grew. It’s been a great journey.”

After many all-nighters learning frontend development using HTML, CSS, and JavaScript, and working on their project, they built BallTalent, a Progressive Web App (PWA) that helps local soccer players in their neighborhood get discovered by professional soccer clubs. They record games in their neighborhood and upload them to the app, so clubs can identify new talent.

“We tested our prototype with people, and it seemed like they really loved it, which pushed us to keep coding and improving on the project,” says Simon. “The application is currently focused on soccer, but it’s built in a way that it can focus on other sports.”

In 2019, when BallTalent launched, the project placed in the top 5 of the Diageo Social Tech Startup Challenge, one of South Africa’s most prestigious competitions. BallTalent has helped local soccer players match with professional teams, benefiting the community. Simon and Lesego plan to release version two soon, with a goal of expanding to other sports.

Learning to code with web technologies and resources

Lesego and Simon chose to watch the Chrome Developers YouTube channel to learn to code, because it was free, accessible, and taught programming in ways that were easy to understand. Preferring to continue to use free Google tools because of their availability and ease of use, Lesego and Simon used Google developer tools on Chrome to build and test the BallTalent app, which is hosted on Google Cloud Platform.

BallTalent Shows Youth Talent to the World's Best Scouts and Clubs

They used NodeJS as their backend runtime environment to stay within the Google ecosystem — NodeJS is powered by the V8 JavaScript engine, which is developed by the Chromium project. They used a service worker codelab from Google to allow users to install the BallTalent PWA and see partial content, even without an internet connection.

“We are focused on HTML, CSS, JavaScript, frontend frameworks like Angular, and Cloud tools like Firebase, to be able to equip people with the knowledge of how to set up an application,” says Simon.

Moving gif of soccer players playing on a soccer field

BallTalent shares sample footage of a previous match: Mangaung United Vs Bizana Pondo Chiefs, during the ABC Motsepe Play Offs

“Google has been with us the whole way,” says Simon.

Contributing to the Google Developer community

Because of their enthusiasm for web technologies and positive experience learning to code using Google tools, Lesego and Simon were enthusiastic about joining a Google Developer Community. They became regular members at GDG Johannesburg and went to DevFest South Africa in 2018, where they got inspired to start their own GDG chapter in Soweto. The chapter focuses on frontend development to meet the needs of a largely beginner developer membership and has grown to 500+ members.

Looking forward to continued growth

The duo is now preparing to launch the second version of their BallTalent app, which gives back to their community by pairing local soccer talent with professional teams seeking players. In addition, they’re teaching new developers in their township how to build their own apps, building community and creating opportunities for new developers. Google Developer Groups are local community groups for developers interested in learning new skills, teaching others, and connecting with other developers. We encourage you to join us, and if you’re interested in becoming a GDG organizer like Simon and Lesego, we encourage you to apply.

Finding courage and inspiration in the developer community

Posted by Monika Janota

How do we empower women in tech and equip them with the skills to help them become true leaders? One way is learning from others' successes and failures. Web GDEs—Debbie O'Brien, Julia Miocene, and Glafira Zhur—discuss the value of one-to-one mentoring and the impact it has made on their own professional and personal development.

A 2019 study showed that only 25% of keynote speakers at tech events were women, while 70% of female speakers reported being the only woman on a conference panel. One way of changing that is by running programs and workshops that aim to empower women and provide them with relevant soft skills training, including public speaking, content creation, and leadership. Among such programs are the Women Developer Academy (WDA) and the Road to GDE, both run by Google's developer communities.

With more than 1,000 graduates around the world, WDA is a program run by Women Techmakers for professional IT practitioners. It offers training sessions, workshops, and mentoring meetings to equip women in tech with speaking and presentation skills, along with confidence and courage. Road to GDE, on the other hand, is a three-month mentoring program created to support people from historically underrepresented groups in tech on their path to becoming experts. What makes both programs special is the fact that they're based on a unique connection between mentor and mentee, direct knowledge sharing, and an individualized approach.

Photo of Julia Miocene speaking at a conference

Julia Miocene

Some Web GDE community members have had a chance to be part of the mentoring programs for women as both mentors and mentees. Frontend developers Julia Miocene and Glafira Zhur are relatively new to the GDE program. They became Google Developers Experts in October 2021 and January 2022 respectively, after graduating from the first edition of both the Women Developer Academy and the Road to GDE; whilst Debbie O'Brien has been a member of the community and an active mentor for both programs for several years. They have all shared their experiences with the programs in order to encourage other women in tech to believe in themselves, take a chance, and to become true leaders.

Different paths, one goal

Although all three share an interest in frontend development, each has followed a very different path. Glafira Zhur, now a team leader with 12 years of professional experience, originally planned to become a musician, but decided to follow her other passion instead. A technology fan thanks to her father, she was able to reinstall Windows at the age of 11. Julia Miocene, after more than ten years in product design, was really passionate about CSS. She became a GDE because she wanted to work with Chrome and DevTools. Debbie is a Developer Advocate working in the frontend area, with a strong passion for user experience and performance. For her, mentoring is a way of giving back to the community, helping other people achieve their dreams, and become the programmers they want to be. At one point while learning JavaScript, she was so discouraged she wanted to give it up, but her mentor convinced her she could be successful. Now she's returning the favor.

Photo of Debbie O'Brien and another woman in a room smiling at the camera

Debbie O'Brien

As GDEs, Debbie, Glafira, and Julia all mention that the most valuable part of becoming experts is the chance to meet people with similar interests in technology, to network, and to provide early feedback for the web team. Mentoring, on the other hand, enables them to create, it boosts their confidence and empowers them to share their skills and knowledge—regardless of whether they're a mentor or a mentee.

Sharing knowledge

A huge part of being a mentee in Google's programs is learning how to share knowledge with other developers and help them in the most effective way. Many WDA and Road to GDE participants become mentors themselves. According to Julia, it's important to remember that a mentor is not a teacher—they are much more. The aim of mentoring, she says, is to create something together, whether it's an idea, a lasting connection, a piece of knowledge, or a plan for the future.

Glafira mentioned that she learned to perceive social media in a new way—as a hub for sharing knowledge, no matter how small the piece of advice might seem. It's because, she says, even the shortest Tweet may help someone who's stuck on a technical issue that they might not be able to resolve without such content being available online. Every piece of knowledge is valuable. Glafira adds that, "Social media is now my tool, I can use it to inspire people, invite them to join the activities I organize. It's not only about sharing rough knowledge, but also my energy."

Working with mentors who have successfully built an audience for their own channels allows the participants to learn more about the technical aspects of content creation—how to choose topics that might be interesting for readers, set up the lighting in the studio, or prepare an engaging conference speech.

Learning while teaching

From the other side of the mentor—mentee relationship, Debbie O'Brien says the best thing about mentoring is seeing the mentees grow and succeed: "We see in them something they can't see in themselves, we believe in them, and help guide them to achieve their goals. The funny thing is that sometimes the advice we give them is also useful for ourselves, so as mentors we end up learning a lot from the experience too."

TV screen in a room showing an image of Glafira Zhur

Glafira Zhur

Both Glafira and Julia state that they're willing to mentor other women on their way to success. Asked what is the most important learning from a mentorship program, they mention confidence—believing in yourself is something they want for every female developer out there.

Growing as a part of the community

Both Glafira and Julia mentioned that during the programs they met many inspiring people from their local developer communities. Being able to ask others for help, share insights and doubts, and get feedback was a valuable lesson for both women.

Mentors may become role models for the programs' participants. Julia mentioned how important it was for her to see someone else succeed and follow in their footsteps, to map out exactly where you want to be professionally, and how you can get there. This means learning not just from someone else's failures, but also from their victories and achievements.

Networking within the developer community is also a great opportunity to grow your audience by visiting other contributors' podcasts and YouTube channels. Glafira recalls that during the Academy, she received multiple invites and had an opportunity to share her knowledge on different channels.

Overall, what's even more important than growing your audience is finding your own voice. As Debbie states: "We need more women speaking at conferences, sharing knowledge online, and being part of the community. So I encourage you all to be brave and follow your dreams. I believe in you, so now it's time to start believing in yourself."

How to use App Engine Memcache in Flask apps (Module 12)

Posted by Wesley Chun

Background

In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers learn how to move away from App Engine legacy APIs (now called "bundled services") to Cloud standalone equivalent services. Once this has been accomplished, apps are much more portable and flexible in where and how they can be deployed.

In today's Module 12 video, we start that journey by adding use of App Engine's Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps rely on some database, and in many situations they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We'll cover both the 3.x and Cloud Memorystore ports next in Module 13.

Got an older app needing an update? We can help with that.

Adding use of Memcache

The sample application registers individual web page "visits," storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user's visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the "before" version, you can see how each visit is registered immediately. After the update, the app first attempts to fetch these visits from the cache. If cached results are available and "fresh" (within the hour), they're used immediately; but if the cache is empty, or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The bolded lines represent the new code that manages the cached data.

Adding App Engine Memcache usage to sample app
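Since the pseudocode figure doesn't reproduce here, below is a minimal sketch of the cached handler. The helper names store_visit() and fetch_visits() stand in for the sample app's Datastore calls and are stubbed with an in-memory list so the snippet is self-contained; the memcache get/set calls are the bundled-service API. Note that the actual sample also refreshes the cache when a new visitor arrives, a detail this sketch glosses over.

```python
# A rough sketch of the Module 12 caching pattern, not the codelab's exact code.
from flask import Flask, render_template, request
from google.appengine.api import memcache  # App Engine bundled service

app = Flask(__name__)
HOUR = 3600    # cache lifetime in seconds
_visits = []   # in-memory stand-in for Datastore in this sketch

def store_visit(remote_addr, user_agent):
    # The real app writes a Visit entity to Datastore here.
    _visits.insert(0, {'visitor': '{}: {}'.format(remote_addr, user_agent)})

def fetch_visits(limit):
    # The real app queries Datastore for the most recent visits.
    return _visits[:limit]

@app.route('/')
def root():
    visits = memcache.get('visits')       # serve the cached list if still fresh
    if not visits:
        # Cache miss: register this visit as before, then cache the
        # latest collection of visits for an hour.
        store_visit(request.remote_addr, request.user_agent.string)
        visits = list(fetch_visits(10))
        memcache.set('visits', visits, HOUR)
    return render_template('index.html', visits=visits)
```

The design point to notice is that the cache fronts only the read path: writes still go straight to storage, and the one-hour expiration bounds how stale the displayed list can get.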

Wrap-up

Today's "migration" began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab, doing the update by hand as you watch the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Celebrating leaders in AAPI communities

Posted by Google Developer Studio

In recognition of Asian American and Pacific Islander Heritage Month, we are speaking with mentors and leaders in tech who identify as part of the AAPI community. Many of the influential figures we feature are involved with and help champion inclusivity programs like Google Developer Experts and Google Developer Student Clubs, while others lead in product areas like TensorFlow and drive impact through their work and communities.

On that note, we are honoring this year’s theme of “Advancing Leaders Through Collaboration” by learning more about the power of mentorship, advice they’ve received from other leaders, and their biggest accomplishments.

Read more about leaders in the AAPI community below.

Ben Hong

Senior Staff Developer Experience Engineer at Netlify

What’s the best piece of advice you can offer new/junior developers looking to grow into leadership roles?

There is a lot of advice out there on how to get the most out of your career by climbing the ladder and getting leadership roles. Before you embark on that journey, first ask yourself the question "Why do I want this?"

Becoming a leader comes with a lot of glitz and glamor, but the reality is that it carries a huge weight of responsibility because the decisions and actions you take as a leader will impact the lives of those around you in significant ways you can't foresee.

As a result, the key to becoming the best leader you can be is to:

  1. Establish what your values and principles are
  2. Align them to the actions you take each and every day

Because at the end of the day, leaders are often faced with difficult decisions that lead to an uncertain future. And without core values and principles to guide you as an individual, you run the risk of being easily swayed by short-term trade-offs that could result in a long-term loss.

This world needs leaders who can stand their ground against the temptations of short-term wins and make the best decisions they can while fighting for those that follow them. If you stand firm in your values and listen to those around you, you'll be able to create profound impact in your community.

Taha Bouhsine

Data Scientist and GDSC UIZ Lead

What’s the best piece of advice you can offer new/junior developers looking to grow into leadership roles?

Create a journey worth taking. You will face many challenges and a new set of problems. You will start asking a lot of questions as everything seems to be unfamiliar.

Things get much lighter if you are guided by a mentor, as you will get guidance on how to act in this new chapter of life. In your early days, invest as much as you can in building and nurturing a team, as it will save you a lot of time along the road. Surround yourself with real people who take the initiative, get to the action, and are willing to grow and learn, nurture their skills and guide them towards your common goal. Don't try to be a people pleaser as it's an impossible mission.

Your actions will offend some people one way or the other. That’s ok as you should believe in your mission, create a clear plan with well-defined tasks and milestones, and be firm with your decision. In the end, responsibility is yours to bear, so at least take it on something you decided, not something that was forced upon you by others.

Finally, when there is fire, look for ways to put it out. Take care of your soul, and enjoy the journey!

Huyen Tue Dao

Android Developer, Trello

What do you love most about being a part of the developer community?

Meeting other developers, learning and sharing knowledge, and getting to know them as human beings has been the most rewarding and critical part of my career.

Development is a job of constant learning, whether it is the latest technology, trends, issues, and challenges or the day-to-day intricacies and nuances of writing specialized code and solving problems in efficient and elegant ways. I don't think I'd have the tools to solve issues large and small without the sharing of knowledge and experience of the developer community. If you're having a problem of any kind, chances are that someone has had the same challenges. You can take comfort that you can probably find the answer or at least find people that can help you. You can also feel confident that if you discovered something new or learned important lessons, someone will want to hear what you have to say.

I love seeing and being part of this cycle and interchange; as we pool our experience, our knowledge, and insights, we become stronger and more skilled as a community. I would not be the engineer or person that I am without the opportunities of this exchange.

Just as important, though, is the camaraderie and support of those who do what I do and love it. I have been so fortunate to have been in communities that have been open and welcoming, ready to make connections and form networks, eager to celebrate victories and commiserate with challenges. Regardless of the technical and personal challenges of the everyday that may get to me, there are people that understand and can support me and provide brilliantly diverse perspectives of different industries, countries, cultures, and ages.

Malak Magdy Ali

Google Developer Student Club Lead at Canadian International College, Egypt

What’s the best piece of advice you can offer new/junior developers looking to grow into leadership roles?

The best piece of advice I can give to new leaders is to have empathy. Having empathy will make you understand people’s actions and respect their feelings. This will make for stronger teams.

Also, give others a space to lead. Involve your team in making decisions; they come up with great ideas that can help you and teammates learn from each other. In this process, trust is also built, resulting in a better quality product.

Finally, don't underestimate yourself. Do your best and involve your team to discuss the overall quality of your work and let them make recommendations.

100 things we announced at I/O

And that’s a wrap on I/O 2022! We returned to our live keynote event, packed in more than a few product surprises, showed off some experimental projects and… actually, let’s just dive right in. Here are 100 things we announced at I/O 2022.

Gear news galore

Pixel products grouped together on a white background. Products include Pixel Bud Pro, Google Pixel Watch and Pixel phones.
  1. Let’s start at the very beginning — with some previews. We showed off a first look at the upcoming Pixel 7 and Pixel 7 Pro, powered by the next version of Google Tensor.
  2. We showed off an early look at Google Pixel Watch! It’s our first-ever all-Google built watch: 80% recycled stainless steel, Wear OS, Fitbit integration, Assistant access…and it’s coming this fall.
  3. Fitbit is coming to Google Pixel Watch. More experiences built for your wrist are coming later this year from apps like Deezer and Soundcloud.
  4. Later this year, you’ll start to see more devices powered with Wear OS from Samsung, Fossil Group, Montblanc and others.
  5. Google Assistant is coming soon to the Samsung Galaxy Watch 4 series.
  6. The new Pixel Buds Pro use Active Noise Cancellation (ANC), a feature powered by a custom 6-core audio chip and Google algorithms to put the focus on your music — and nothing else.
  7. Silent Seal™ helps Pixel Buds Pro adapt to the shape of your ear, for better sound. Later this year, Pixel Buds Pro will also support spatial audio to put you in the middle of the action when watching a movie or TV show with a compatible device and supported content.
  8. They also come in new colors: Charcoal, Fog, Coral and Lemongrass. Ahem, multiple colors — the Pixel Buds Pro have a two-tone design.
  9. With Multipoint connectivity, Pixel Buds Pro can automatically switch between your previously paired Bluetooth devices — including compatible laptops, tablets, TVs, and Android and iOS phones.
  10. Plus, the earbuds and their case are water-resistant.
  11. …And you can preorder them on July 21.
  12. Then there’s the brand new Pixel 6a, which comes with the full Material You experience.
  13. The new Pixel 6a has the same Google Tensor processor and hardware security architecture with Titan M2 as the Pixel 6 and Pixel 6 Pro.
  14. It also has dual rear cameras — main and ultrawide lenses.
  15. You’ve got three Pixel 6a color options: Chalk, Charcoal and Sage. The options keep going if you pair it with one of the new translucent cases.
  16. It costs $449 and will be available for pre-order on July 21.
  17. We also showed off an early look at the upcoming Pixel tablet, which we’re aiming to make available next year.

Android updates

18. In the last year, over 1 billion new Android phones have been activated.

19. You’ll no longer need to grant location to apps to enable Wi-Fi scanning in Android 13.

20. Android 13 will automatically delete your clipboard history after a short time to preemptively block apps from seeing old copied information.

21. Android 13’s new photo picker lets you select the exact photos or videos you want to grant access to, without needing to share your entire media library with an app.

22. You’ll soon be able to copy a URL or picture from your phone, and paste it on your tablet in Android 13.

23. Android 13 allows you to select different language preferences for different apps.

24. The latest Android OS will also require apps to get your permission before sending you notifications.

25. And later this year, you’ll see a new Security & Privacy settings page with Android 13.

26. Google’s Messages app already has half a billion monthly active users with RCS, a new standard that enables you to share high-quality photos, see typing indicators, message over Wi-Fi and get a better group messaging experience.

27. Messages is getting a public beta of end-to-end encryption for group conversations.

28. Early earthquake warnings are coming to more high-risk regions around the world.

29. On select headphones, you’ll soon be able to automatically switch audio between the devices you’re listening on with Android.

30. With Chromebook’s Phone Hub, you can stream and use messaging apps from your Android phone on your laptop, without installing any extra apps.

31. Google Wallet is here! It’s a new home for things like your student ID, transit tickets, vaccine card, credit cards and debit cards.

32. You can even use Google Wallet to hold your Walt Disney World park pass.

33. Google Wallet is coming to Wear OS, too.

34. Improved app experiences are coming for Android tablets: YouTube Music, Google Maps and Messages will take advantage of the extra screen space, and more apps coming soon include TikTok, Zoom, Facebook, Canva and many others.

Developer deep dive

Illustration depicting a smart home, with lights, thermostat, television, screen and mobile device.

35. The Google Home and Google Home Mobile software development kits (SDKs) for Matter will be launching in June as developer previews.

36. The Google Home SDK introduces Intelligence Clusters, which make intelligence features, like Home and Away, available to developers.

37. Developers can even generate QR codes for Google Wallet to create their own passes for any use case they’d like.

38. Matter support is coming to the Nest Thermostat.

39. The Google Home Developer Center has lots of updates to check out.

40. There’s now built-in support for Matter on Android, so you can use Fast Pair to quickly connect Matter-enabled smart home devices to your network, Google Home and other accompanying apps in just a few taps.

41. The ARCore Geospatial API makes Google Maps’ Live View technology available to developers for free. Companies like Lime are using it to help people find parking spots for their scooters and save time.

42. DOCOMO and Curiosity are using the ARCore Geospatial API to build a new game that lets you fend off virtual dragons with robot companions in front of iconic Tokyo landmarks, like the Tokyo Tower.

43. AlloyDB is a new, fully-managed PostgreSQL-compatible database service designed to help developers manage enterprise database workloads — in our performance tests, it’s more than four times faster for transactional workloads and up to 100 times faster for analytical queries than standard PostgreSQL.

44. AlloyDB uses the same infrastructure building blocks that power large-scale products like YouTube, Search, Maps and Gmail.

45. Google Cloud’s machine learning cluster powered by Cloud TPU v4 Pods is super powerful — in fact, we believe it’s the world’s largest publicly available machine learning hub in terms of compute power…

46. …and it operates at 90% carbon-free energy.

47. We also announced a preview of Cloud Run jobs, which reduces the time developers spend running administrative tasks like database migration or batch data transformation.

48. We announced Flutter 3.0, which will enable developers to publish production-ready apps to six platforms at once, from one code base (Android, iOS, web, Linux, Windows and macOS).

49. To help developers build beautiful Wear apps, we announced the beta of Jetpack Compose for Wear OS.

50. We’re making it faster and easier for developers to build modern, high-quality apps with the new Live Edit feature in Android Studio.

Help for the home

GIF of a man baking cookies with a speech bubble saying “Set a timer for 10 minutes.” His Google Nest Hub Max responds with a speech bubble saying “OK, 10 min. And that’s starting…now.”

51. Many Nest Devices will become Matter controllers, which means they can serve as central hubs to control Matter-enabled devices both locally and remotely from the Google Home app.

52. Works with Hey Google is now Works with Google Home.

53. The new home.google is your hub for finding out everything you can do with your Google Home system.

54. Nest Hub Max is getting Look and Talk, where you can simply look at your device to ask a question without saying “Hey Google.”

55. Look and Talk works when Voice Match and Face Match recognize that it’s you.

56. And video from Look and Talk interactions is processed entirely on-device, so it isn’t shared with Google or anyone else.

57. Look and Talk is opt-in. Oh, and FYI, you can still say “Hey Google” whenever you want!

58. Want to learn more about it? Just say “Hey Google, what is Look and Talk?” or “Hey Google, how do you enable Look and Talk?”

59. We’re also expanding quick phrases to Nest Hub Max, so you can skip saying “Hey Google” for some of your most common daily tasks – things like “set a timer for 10 minutes” or “turn off the living room lights.”

60. You can choose the quick phrases you want to turn on.

61. Your quick phrases will work when Voice Match recognizes it’s you.

62. And looking ahead, Assistant will be able to better understand the imperfections of human speech without getting tripped up — including the pauses, “umms” and interruptions — making your interactions feel much closer to a natural conversation.

Taking care of business

Animated GIF demonstrating portrait light, bringing studio-quality lighting effects to Google Meet.

63. Google Meet video calls will now look better thanks to portrait restore and portrait light, which use AI and machine learning to improve quality and lighting on video calls.

64. Later this year we’re scaling the phishing and malware protections that guard Gmail to Google Docs, Sheets and Slides.

65. Live sharing is coming to Google Meet, meaning users will be able to share controls and interact directly within the meeting, whether it’s watching an icebreaker video from YouTube or sharing a playlist.

66. Automated built-in summaries are coming to Spaces so you can get a helpful digest of conversations to catch up quickly.

67. De-reverberation for Google Meet will filter out echoes in spaces with hard surfaces, giving you conference-room audio quality whether you’re in a basement, a kitchen, or a big empty room.

68. Later this year, we're bringing automated transcriptions of Google Meet meetings to Google Workspace, so people can catch up quickly on meetings they couldn't attend.

Apps for on-the-go

A picture of London in immersive view.

69. Google Wallet users will be able to check the balance of transit passes and top up within Google Maps.

70. Google Translate added 24 new languages.

71. As part of this update, Indigenous languages of the Americas (Quechua, Guarani and Aymara) and an English dialect (Sierra Leonean Krio) have also been added to Translate for the first time.

72. Google Translate now supports a total of 133 languages used around the globe.

73. These are the first languages we’ve added using Zero-resource Machine Translation, where a machine learning model only sees monolingual text — meaning, it learns to translate into another language without ever seeing an example.

74. Google Maps’ new immersive view is a whole new way to explore so you can see what an area truly looks and feels like.

75. Immersive view will work on nearly any phone or tablet; you don’t need the fanciest or newest device.

76. Immersive view will first be available in L.A., London, New York, San Francisco and Tokyo — with more places coming soon.

77. Last year we launched eco-friendly routing in the U.S. and Canada. Since then, people have used it to travel 86 billion miles, which saved more than half a million metric tons of carbon emissions — that’s like taking 100,000 cars off the road.

78. And we’re expanding eco-friendly routing to more places, like Europe.

All in on AI

Ten circles in a row, ranging from dark to light.

The 10 shades of the Monk Skin Tone Scale.

79. A team at Google Research partnered with Harvard’s Dr. Ellis Monk to openly release the Monk Skin Tone Scale, a new tool for measuring skin tone that can help build more inclusive products.

80. Google Search will use the Monk Skin Tone Scale to make it easier to find more relevant results — for instance, if you search for “bridal makeup,” you’ll see an option to filter by skin tone so you can refine to results that meet your needs.

81. Oh, and the Monk Skin Tone Scale was used to evaluate a new set of Real Tone filters for Photos that are designed to work well across skin tones. These filters were created and tested in partnership with artists like Kennedi Carter and Joshua Kissi.

82. We’re releasing LaMDA 2 as part of the AI Test Kitchen, a new space to learn, improve, and innovate responsibly on this technology together.

83. PaLM is a new language model that can solve complex math word problems, and even explain its thought process, step-by-step.

84. Nest Hub Max’s new Look and Talk feature uses six machine learning models to process more than 100 signals in real time to detect whether you’re intending to make eye contact with your device, so you can talk to Google Assistant rather than just giving it a passing glance.

85. We recently launched multisearch in the Google app, which lets you search by taking a photo and asking a question at the same time. At I/O, we announced that later this year, you'll be able to take a picture or screenshot and add "near me" to get local results from restaurants, retailers and more.

86. We introduced you to an advancement called “scene exploration,” where in the future, you’ll be able to use multisearch to pan your camera and instantly glean insights about multiple objects in a wider scene.

Privacy, security and information

A GIF that shows someone’s Google account with a yellow alert icon, flagging recommended actions they should take to secure their account.

87. We’ve expanded our support for Project Shield to protect the websites of 200+ Ukrainian government agencies, news outlets and more.

88. Account Safety Status will add a simple yellow alert icon to flag actions you should take to secure your Google Account.

89. Phishing protections in Google Workspace are expanding to Docs, Slides and Sheets.

90. My Ad Center is now giving you even more control over the ads you see on YouTube, Search, and your Discover feed.

91. Virtual cards are coming to Chrome and Android this summer, adding an additional layer of security and eliminating the need to enter certain card details at checkout.

92. In the coming months, you’ll be able to request removal of Google Search results that have your contact info with an easy-to-use tool.

93. We announced Protected Computing, a toolkit that helps minimize your data footprint, de-identify your data and restrict access to your sensitive data.

94. On-device encryption is now available for Google Password Manager.

95. We’re continuing to auto-enroll people in 2-Step Verification to reduce phishing risks.

What else?!

Illustration of a black one-story building with large windows. Inside are people walking around wooden tables and white walls containing Google hardware products. There is a Google Store logo on top of the building.

96. A new Google Store is opening in Williamsburg.

97. This is our first “neighborhood store” — it’s in a more intimate setting that highlights the community. You can find it at 134 N 6th St., opening on June 16.

98. The store will feature an installation by Brooklyn-based artist Olalekan Jeyifous.

99. Visitors there can picture everyday life with Google products through interactive displays that show how our hardware and services work together, and even get hands-on help with devices from Google experts.

100. We showed a prototype of what happens when we bring technologies like transcription and translation to your line of sight.

Google I/O 2022: Advancing knowledge and computing

TL;DR

Nearly 24 years ago, Google started with two graduate students, one product, and a big mission: to organize the world’s information and make it universally accessible and useful. In the decades since, we’ve been developing our technology to deliver on that mission.

The progress we've made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all. And once a year — on my favorite day of the year :) — we share an update on how it’s going at Google I/O.

Today, I talked about how we’re advancing two fundamental aspects of our mission — knowledge and computing — to create products that are built to help. It’s exciting to build these products; it’s even more exciting to see what people do with them.

Thank you to everyone who helps us do this work, and most especially our Googlers. We are grateful for the opportunity.

- Sundar


Editor’s note: Below is an edited transcript of Sundar Pichai's keynote address during the opening of today's Google I/O Developers Conference.

Hi, everyone, and welcome. Actually, let’s make that welcome back! It’s great to return to Shoreline Amphitheatre after three years away. To the thousands of developers, partners and Googlers here with us, it’s great to see all of you. And to the millions more joining us around the world — we’re so happy you’re here, too.

Last year, we shared how new breakthroughs in some of the most technically challenging areas of computer science are making Google products more helpful in the moments that matter. All this work is in service of our timeless mission: to organize the world's information and make it universally accessible and useful.

I'm excited to show you how we’re driving that mission forward in two key ways: by deepening our understanding of information so that we can turn it into knowledge; and advancing the state of computing, so that knowledge is easier to access, no matter who or where you are.

Today, you'll see how progress on these two parts of our mission ensures Google products are built to help. I’ll start with a few quick examples. Throughout the pandemic, Google has focused on delivering accurate information to help people stay healthy. Over the last year, people used Google Search and Maps to find where they could get a COVID vaccine nearly two billion times.

A visualization of Google’s flood forecasting system, with three 3D maps stacked on top of one another, showing landscapes and weather patterns in green and brown colors. The maps are floating against a gray background.

Google’s flood forecasting technology sent flood alerts to 23 million people in India and Bangladesh last year.

We’ve also expanded our flood forecasting technology to help people stay safe in the face of natural disasters. During last year’s monsoon season, our flood alerts notified more than 23 million people in India and Bangladesh. And we estimate this supported the timely evacuation of hundreds of thousands of people.

In Ukraine, we worked with the government to rapidly deploy air raid alerts. To date, we’ve delivered hundreds of millions of alerts to help people get to safety. In March I was in Poland, where millions of Ukrainians have sought refuge. Warsaw’s population has increased by nearly 20% as families host refugees in their homes, and schools welcome thousands of new students. Nearly every Google employee I spoke with there was hosting someone.

Adding 24 more languages to Google Translate

In countries around the world, Google Translate has been a crucial tool for newcomers and residents trying to communicate with one another. We’re proud of how it’s helping Ukrainians find a bit of hope and connection until they are able to return home again.

Two boxes, one showing a question in English — “What’s the weather like today?” — the other showing its translation in Quechua. There is a microphone symbol below the English question and a loudspeaker symbol below the Quechua answer.

With machine learning advances, we're able to add languages like Quechua to Google Translate.

Real-time translation is a testament to how knowledge and computing come together to make people's lives better. More people are using Google Translate than ever before, but we still have work to do to make it universally accessible. There’s a long tail of languages that are underrepresented on the web today, and translating them is a hard technical problem. That’s because translation models are usually trained with bilingual text — for example, the same phrase in both English and Spanish. However, there's not enough publicly available bilingual text for every language.

So with advances in machine learning, we’ve developed a monolingual approach where the model learns to translate a new language without ever seeing a direct translation of it. By collaborating with native speakers and institutions, we found these translations were of sufficient quality to be useful, and we'll continue to improve them.

A list of the 24 new languages Google Translate now has available.

We’re adding 24 new languages to Google Translate.

Today, I’m excited to announce that we’re adding 24 new languages to Google Translate, including the first indigenous languages of the Americas. Together, these languages are spoken by more than 300 million people. Breakthroughs like this are powering a radical shift in how we access knowledge and use computers.

Taking Google Maps to the next level

So much of what’s knowable about our world goes beyond language — it’s in the physical and geospatial information all around us. For more than 15 years, Google Maps has worked to create rich and useful representations of this information to help us navigate. Advances in AI are taking this work to the next level, whether it’s expanding our coverage to remote areas, or reimagining how to explore the world in more intuitive ways.

An overhead image of a map of a dense urban area, showing gray roads cutting through clusters of buildings outlined in blue.

Advances in AI are helping to map remote and rural areas.

Around the world, we’ve mapped around 1.6 billion buildings and over 60 million kilometers of roads to date. Some remote and rural areas have previously been difficult to map, due to scarcity of high-quality imagery and distinct building types and terrain. To address this, we’re using computer vision and neural networks to detect buildings at scale from satellite images. As a result, we have increased the number of buildings on Google Maps in Africa by 5X since July 2020, from 60 million to nearly 300 million.

We’ve also doubled the number of buildings mapped in India and Indonesia this year. Globally, over 20% of the buildings on Google Maps have been detected using these new techniques. We’ve gone a step further, and made the dataset of buildings in Africa publicly available. International organizations like the United Nations and the World Bank are already using it to better understand population density, and to provide support and emergency assistance.

Immersive view in Google Maps fuses together aerial and street level images.

We’re also bringing new capabilities into Maps. Using advances in 3D mapping and machine learning, we’re fusing billions of aerial and street level images to create a new, high-fidelity representation of a place. These breakthrough technologies are coming together to power a new experience in Maps called immersive view: it allows you to explore a place like never before.

Let’s go to London and take a look. Say you’re planning to visit Westminster with your family. You can get into this immersive view straight from Maps on your phone, and you can pan around the sights… here’s Westminster Abbey. If you’re thinking of heading to Big Ben, you can check if there's traffic, how busy it is, and even see the weather forecast. And if you’re looking to grab a bite during your visit, you can check out restaurants nearby and get a glimpse inside.

What's amazing is that this isn't a drone flying in the restaurant — we use neural rendering to create the experience from images alone. And Google Cloud Immersive Stream allows this experience to run on just about any smartphone. This feature will start rolling out in Google Maps for select cities globally later this year.

Another big improvement to Maps is eco-friendly routing. Launched last year, it shows you the most fuel-efficient route, giving you the choice to save money on gas and reduce carbon emissions. Eco-friendly routes have already rolled out in the U.S. and Canada — and people have used them to travel approximately 86 billion miles, helping save an estimated half million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road.

Still image of eco-friendly routing on Google Maps — a 53-minute driving route in Berlin is pictured, with text below the map showing it will add three minutes but use 18% less fuel.

Eco-friendly routes will expand to Europe later this year.

I’m happy to share that we’re expanding this feature to more places, including Europe later this year. In this Berlin example, you could reduce your fuel consumption by 18% taking a route that’s just three minutes slower. These small decisions have a big impact at scale. With the expansion into Europe and beyond, we estimate carbon emission savings will double by the end of the year.
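As a quick sanity check on these figures, here’s a back-of-the-envelope sketch. The per-car number below is simply what the stated equivalence implies, not an official emissions factor.

```python
# Back-of-the-envelope arithmetic for the eco-friendly routing figures.
tons_saved = 500_000        # estimated metric tons of CO2 saved so far
cars_equivalent = 100_000   # "taking 100,000 cars off the road"
print(tons_saved / cars_equivalent)  # 5.0 metric tons per car per year

# The Berlin example: 3 extra minutes on a 53-minute drive is roughly
# a 6% longer trip, in exchange for 18% less fuel.
print(f"{3 / 53:.0%} longer trip for 18% less fuel")
```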

And we’ve added a similar feature to Google Flights. When you search for flights between two cities, we also show you carbon emission estimates alongside other information like price and schedule, making it easy to choose a greener option. These eco-friendly features in Maps and Flights are part of our goal to empower 1 billion people to make more sustainable choices through our products, and we’re excited about the progress here.

New YouTube features to help people easily access video content

Beyond Maps, video is becoming an even more fundamental part of how we share information, communicate, and learn. Often when you come to YouTube, you are looking for a specific moment in a video, and we want to help you get there faster.

Last year we launched auto-generated chapters to make it easier to jump to the part you’re most interested in.

This is also great for creators because it saves them time making chapters. We’re now applying multimodal technology from DeepMind. It simultaneously uses text, audio and video to auto-generate chapters with greater accuracy and speed. With this, we now have a goal to 10X the number of videos with auto-generated chapters, from eight million today to 80 million over the next year.

Often the fastest way to get a sense of a video’s content is to read its transcript, so we’re also using speech recognition models to transcribe videos. Video transcripts are now available to all Android and iOS users.

Animation showing a video being automatically translated. Then text reads "Now available in sixteen languages."

Auto-translated captions on YouTube.

Next up, we’re bringing auto-translated captions on YouTube to mobile, which means viewers can now auto-translate video captions in 16 languages, and creators can grow their global audience. We’ll also be expanding auto-translated captions to Ukrainian YouTube content next month, part of our larger effort to increase access to accurate information about the war.

Helping people be more efficient with Google Workspace

Just as we’re using AI to improve features in YouTube, we’re building it into our Workspace products to help people be more efficient. Whether you work for a small business or a large institution, chances are you spend a lot of time reading documents. Maybe you’ve felt that wave of panic when you realize you have a 25-page document to read ahead of a meeting that starts in five minutes.

At Google, whenever I get a long document or email, I look for a TL;DR at the top — TL;DR is short for “Too Long, Didn’t Read.” And it got us thinking, wouldn’t life be better if more things had a TL;DR?

That’s why we’ve introduced automated summarization for Google Docs. Using one of our machine learning models for text summarization, Google Docs will automatically parse the words and pull out the main points.

This marks a big leap forward for natural language processing. Summarization requires understanding long passages, compressing information and generating language, which used to be beyond the capabilities of even the best machine learning models.
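To give a feel for what abstractive summarization looks like in practice, here’s a minimal sketch using a publicly available model from the Hugging Face transformers library. This is not the model behind Google Docs, just an illustration of the same class of technique.

```python
# Minimal abstractive summarization sketch using a public model
# (facebook/bart-large-cnn), purely for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Google Docs will automatically parse the words in a long document "
    "and pull out the main points, producing a short summary at the top. "
    "Doing this well requires understanding long passages, compressing "
    "information and generating fluent language."
)

# max_length and min_length bound the summary size in tokens.
result = summarizer(document, max_length=40, min_length=10)
print(result[0]["summary_text"])
```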

And Docs is only the beginning. We’re launching summarization for other products in Workspace. It will come to Google Chat in the next few months, providing a helpful digest of chat conversations, so you can jump right into a group chat or look back at the key highlights.

Animation showing a summary in Google Chat.

We’re bringing summarization to Google Chat in the coming months.

And we’re working to bring transcription and summarization to Google Meet as well, so you can catch up on important meetings you missed.

Visual improvements on Google Meet

Of course there are many moments where you really want to be in a virtual room with someone. And that’s why we continue to improve audio and video quality, inspired by Project Starline. We introduced Project Starline at I/O last year. And we’ve been testing it across Google offices to get feedback and improve the technology for the future. And in the process, we’ve learned some things that we can apply right now to Google Meet.

Starline inspired us to develop machine learning-powered image processing that automatically improves your image quality in Google Meet. And it works on all types of devices, so you look your best wherever you are.

An animation of a man looking directly at the camera then waving and smiling. A white line sweeps across the screen, adjusting the image quality to make it brighter and clearer.

Machine learning-powered image processing automatically improves image quality in Google Meet.

We’re also bringing studio-quality virtual lighting to Meet. You can adjust the light position and brightness, so you’ll still be visible in a dark room or sitting in front of a window. We’re testing this feature to ensure everyone looks like their true selves, continuing the work we’ve done with Real Tone on Pixel phones and the Monk Skin Tone Scale.

These are just some of the ways AI is improving our products: making them more helpful, more accessible, and delivering innovative new features for everyone.

Gif shows a phone camera pointed towards a rack of shelves, generating helpful information about food items. Text on the screen shows the words ‘dark’, ‘nut-free’ and ‘highly-rated’.

Today at I/O, Prabhakar Raghavan shared how we’re helping people find helpful information in more intuitive ways on Search.

Making knowledge accessible through computing

We’ve talked about how we’re advancing access to knowledge as part of our mission: from better language translation to improved Search experiences across images and video, to richer explorations of the world using Maps.

Now we’re going to focus on how we make that knowledge even more accessible through computing. The journey we’ve been on with computing is an exciting one. Every shift, from desktop to the web to mobile to wearables and ambient computing, has made knowledge more useful in our daily lives.

As helpful as our devices are, we’ve had to work pretty hard to adapt to them. I’ve always thought computers should be adapting to people, not the other way around. We continue to push ourselves to make progress here.

Here’s how we’re making computing more natural and intuitive with the Google Assistant.

Introducing LaMDA 2 and AI Test Kitchen

Animation shows demos of how LaMDA can converse on any topic and how AI Test Kitchen can help create lists.

A demo of LaMDA, our generative language model for dialogue applications, and the AI Test Kitchen.

We're continually working to advance our conversational capabilities. Conversation and natural language processing are powerful ways to make computers more accessible to everyone. And large language models are key to this.

Last year, we introduced LaMDA, our generative language model for dialogue applications that can converse on any topic. Today, we are excited to announce LaMDA 2, our most advanced conversational AI yet.

We are at the beginning of a journey to make models like these useful to people, and we feel a deep responsibility to get it right. To make progress, we need people to experience the technology and provide feedback. We opened LaMDA up to thousands of Googlers, who enjoyed testing it and seeing its capabilities. This yielded significant quality improvements, and led to a reduction in inaccurate or offensive responses.

That’s why we’ve made AI Test Kitchen. It’s a new way to explore AI features with a broader audience. Inside the AI Test Kitchen, there are a few different experiences. Each is meant to give you a sense of what it might be like to have LaMDA in your hands and use it for things you care about.

The first is called “Imagine it.” This demo tests whether the model can take a creative idea you give it and generate imaginative, relevant descriptions. These are not products; they are quick sketches that allow us to explore what LaMDA can do with you. The user interfaces are very simple.

Say you’re writing a story and need some inspirational ideas. Maybe one of your characters is exploring the deep ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You can ask LaMDA to imagine what kinds of creatures might live there. Remember, we didn’t hand-program the model for specific topics like submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost any topic: Saturn’s rings or even being on a planet made of ice cream.

Staying on topic is a challenge for language models. Say you’re building a learning experience — you want it to be open-ended enough to allow people to explore where curiosity takes them, but stay safely on topic. Our second demo tests how LaMDA does with that.

In this demo, we’ve primed the model to focus on the topic of dogs. It starts by generating a question to spark conversation: “Have you ever wondered why dogs love to play fetch so much?” And if you ask a follow-up question, you get an answer with some relevant details: interestingly, the model thinks the behavior might have something to do with the sense of smell and treasure hunting.

You can take the conversation anywhere you want. Maybe you’re curious about how smell works and you want to dive deeper. You’ll get a unique response for that too. No matter what you ask, it will try to keep the conversation on the topic of dogs. If I start asking about cricket, which I probably would, the model brings the topic back to dogs in a fun way.

This challenge of staying on topic is a tricky one, and it’s an important area of research for building useful applications with language models.

These experiences show the potential of language models to one day help us with things like planning, learning about the world, and more.

Of course, there are significant challenges to solve before these models can truly be useful. While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses. That’s why we are inviting feedback in the app, so people can help report problems.

We will be doing all of this work in accordance with our AI Principles. Our process will be iterative, opening up access over the coming months, and carefully assessing feedback with a broad range of stakeholders — from AI researchers and social scientists to human rights experts. We’ll incorporate this feedback into future versions of LaMDA, and share our findings as we go.

Over time, we intend to continue adding other emerging areas of AI into AI Test Kitchen. You can learn more at: g.co/AITestKitchen.

Advancing AI language models

LaMDA 2 has incredible conversational capabilities. To explore other aspects of natural language processing and AI, we recently announced a new model. It’s called Pathways Language Model, or PaLM for short. It’s our largest model to date, with 540 billion parameters.

PaLM demonstrates breakthrough performance on many natural language processing tasks, such as generating code from text, answering a math word problem, or even explaining a joke.

It achieves this through greater scale. And when we combine that scale with a new technique called chain-of-thought prompting, the results are promising. Chain-of-thought prompting allows us to describe multi-step problems as a series of intermediate steps.

Let’s take an example of a math word problem that requires reasoning. Normally, you use a model by prompting it with an example question and answer, and then you start asking it questions. In this case: How many hours are in the month of May? As you can see, the model didn’t quite get it right.

In chain-of-thought prompting, we give the model a question-answer pair, but this time with an explanation of how the answer was derived. It’s kind of like when your teacher gives you a step-by-step example to help you understand how to solve a problem. Now, if we ask the model again how many hours are in the month of May, or other related questions, it actually answers correctly and even shows its work.

There are two boxes below a heading saying ‘chain-of-thought prompting’. A box headed ‘input’ guides the model through answering a question about how many tennis balls a person called Roger has. The output box shows the model correctly reasoning through and answering a separate question (‘how many hours are in the month of May?’)

Chain-of-thought prompting leads to better reasoning and more accurate answers.

Chain-of-thought prompting increases accuracy by a large margin. This leads to state-of-the-art performance across several reasoning benchmarks, including math word problems. And we can do it all without ever changing how the model is trained.
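To make the difference concrete, here’s a sketch of the two prompt styles, using the tennis-ball worked example from the figure above. The prompts are plain strings; sending them to a large language model’s text-generation API is not shown.

```python
# Standard few-shot prompt: the worked example gives only the final
# answer, so the model tends to jump straight to an (often wrong) guess.
standard_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.

Q: How many hours are in the month of May?
A:"""

# Chain-of-thought prompt: the worked example spells out the
# intermediate reasoning steps, prompting the model to produce its own
# reasoning before the final answer.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: How many hours are in the month of May?
A:"""

# For reference, the reasoning the model should reproduce:
# May has 31 days, and each day has 24 hours.
print(31 * 24)  # 744
```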

PaLM is highly capable and can do so much more. For example, you might be someone who speaks a language that’s not well represented on the web today, which makes it hard to find information. It’s even more frustrating because the answer you’re looking for is probably out there. PaLM offers a new approach that holds enormous promise for making knowledge more accessible for everyone.

Let me show you an example in which we can help answer questions in a language like Bengali — spoken by a quarter billion people. Just like before, we prompt the model with two examples of questions in Bengali with both Bengali and English answers.

That’s it. Now we can start asking questions in Bengali: “What is the national song of Bangladesh?” The answer, by the way, is “Amar Sonar Bangla” — and PaLM got it right, too. This isn’t that surprising, because you would expect that content to exist in Bengali.

You can also try something that is less likely to have related information in Bengali such as: “What are popular pizza toppings in New York City?” The model again answers correctly in Bengali. Though it probably just stirred up a debate amongst New Yorkers about how “correct” that answer really is.

What’s so impressive is that PaLM has never seen parallel sentences between Bengali and English. Nor was it ever explicitly taught to answer questions or translate at all! The model brought all of its capabilities together to answer questions correctly in Bengali. And we can extend the techniques to more languages and other complex tasks.
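Here’s a rough sketch of the prompt layout this demo implies: two bilingual worked examples followed by a new question. The placeholder strings stand in for the actual Bengali text, which isn’t reproduced here.

```python
# Few-shot prompt layout for bilingual question answering (sketch).
# Placeholders stand in for real Bengali text.
examples = [
    {"question": "<Bengali question 1>",
     "answer_bn": "<Bengali answer 1>",
     "answer_en": "<English answer 1>"},
    {"question": "<Bengali question 2>",
     "answer_bn": "<Bengali answer 2>",
     "answer_en": "<English answer 2>"},
]

# Asked in Bengali in the actual demo.
new_question = "What is the national song of Bangladesh?"

prompt = ""
for ex in examples:
    prompt += (f"Q: {ex['question']}\n"
               f"A (Bengali): {ex['answer_bn']}\n"
               f"A (English): {ex['answer_en']}\n\n")
prompt += f"Q: {new_question}\nA (Bengali):"

print(prompt)  # expected model answer: "Amar Sonar Bangla"
```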

We're so optimistic about the potential for language models. One day, we hope we can answer questions on more topics in any language you speak, making knowledge even more accessible, in Search and across all of Google.

Introducing the world’s largest, publicly available machine learning hub

The advances we’ve shared today are possible only because of our continued innovation in our infrastructure. Recently we announced plans to invest $9.5 billion in data centers and offices across the U.S.

One of our state-of-the-art data centers is in Mayes County, Oklahoma. I’m excited to announce that we’re launching the world’s largest publicly available machine learning hub there for our Google Cloud customers.

Still image of a data center with Oklahoma map pin on bottom left corner.

One of our state-of-the-art data centers in Mayes County, Oklahoma.

This machine learning hub has eight Cloud TPU v4 pods, custom-built on the same networking infrastructure that powers Google’s largest neural models. They provide nearly nine exaflops of computing power in aggregate — bringing our customers an unprecedented ability to run complex models and workloads. We hope this will fuel innovation across many fields, from medicine to logistics, sustainability and more.

And speaking of sustainability, this machine learning hub is already operating at 90% carbon-free energy. This is helping us make progress on our goal to become the first major company to operate all of our data centers and campuses globally on 24/7 carbon-free energy by 2030.

Even as we invest in our data centers, we are working to innovate on our mobile platforms so more processing can happen locally on device. Google Tensor, our custom system on a chip, was an important step in this direction. It’s already running on Pixel 6 and Pixel 6 Pro, and it brings our AI capabilities — including the best speech recognition we’ve ever deployed — right to your phone. It’s also a big step forward in making those devices more secure. Combined with Android’s Private Compute Core, it can run data-powered features directly on device so that it’s private to you.

People turn to our products every day for help in moments big and small. Core to making this possible is protecting your private information each step of the way. Even as technology grows increasingly complex, we keep more people safe online than anyone else in the world, with products that are secure by default, private by design and that put you in control.

We also spent time today sharing updates to platforms like Android. They’re delivering access, connectivity, and information to billions of people through their smartphones and other connected devices like TVs, cars and watches.

And we shared our new Pixel portfolio, including the Pixel 6a, Pixel Buds Pro, Google Pixel Watch, Pixel 7, and Pixel tablet, all built with ambient computing in mind. We’re excited to share a family of devices that work better together — for you.

The next frontier of computing: augmented reality

Today we talked about all the technologies that are changing how we use computers and access knowledge. We see devices working seamlessly together, exactly when and where you need them and with conversational interfaces that make it easier to get things done.

Looking ahead, there's a new frontier of computing, which has the potential to extend all of this even further, and that is augmented reality. At Google, we have been heavily invested in this area. We’ve been building augmented reality into many Google products, from Google Lens to multisearch, scene exploration, and Live and immersive views in Maps.

These AR capabilities are already useful on phones, and the magic will really come alive when you can use them in the real world without the technology getting in the way.

That potential is what gets us most excited about AR: the ability to spend time focusing on what matters in the real world, in our real lives. Because the real world is pretty amazing!

It’s important we design in a way that is built for the real world — and doesn’t take you away from it. And AR gives us new ways to accomplish this.

Let’s take language as an example. Language is just so fundamental to connecting with one another. And yet, understanding someone who speaks a different language, or trying to follow a conversation if you are deaf or hard of hearing, can be a real challenge. Let’s see what happens when we take our advancements in translation and transcription and deliver them in your line of sight in one of the early prototypes we’ve been testing.

You can see it in their faces: the joy that comes with speaking naturally to someone. That moment of connection. To understand and be understood. That’s what our focus on knowledge and computing is all about. And it’s what we strive for every day, with products that are built to help.

Each year we get a little closer to delivering on our timeless mission. And we still have so much further to go. At Google, we genuinely feel a sense of excitement about that. And we are optimistic that the breakthroughs you just saw will help us get there. Thank you to all of the developers, partners and customers who joined us today. We look forward to building the future with all of you.