Tag Archives: Google I/O

Get ready for Google I/O: Program lineup revealed

The Google I/O agenda is live. We're excited to share Google’s biggest announcements across AI, Android, Web, and Cloud on May 20-21. Tune in to learn how we’re making development easier so you can build faster.

We'll kick things off with the Google Keynote at 10:00 AM PT on May 20th, followed by the Developer Keynote at 1:30 PM PT. This year, we're livestreaming two days of sessions directly from Mountain View, bringing more of the I/O experience to you, wherever you are.

Here’s a sneak peek of what we’ll cover:

    • AI advancements: Learn how Gemini models enable you to build new applications and unlock new levels of productivity. Explore the flexibility offered by options like our Gemma open models and on-device capabilities.
    • Build excellent apps, across devices with Android: Crafting exceptional app experiences across devices is now even easier with Android. Dive into sessions focused on building intelligent apps with Google AI and boosting your productivity, alongside creating adaptive user experiences and leveraging the power of Google Play.
    • Powerful web, made easier: Exciting new features continue to accelerate web development, helping you to build richer, more reliable web experiences. We’ll share the latest innovations in web UI, Baseline progress, new multimodal built-in AI APIs using Gemini Nano, and how AI in DevTools streamlines building innovative web experiences.

Plan your I/O

Join us online for livestreams May 20-21, followed by on-demand sessions and codelabs on May 22. Register today and explore the full program.

We're excited to share what's next and see what you build!

By the Google I/O team


Meet the Android Studio Team: A Conversation with Director of Product Management, Jamal Eason

Posted by Ashley Tschudin – Social Media Specialist, MTP at Google

Dive into the world of Android Studio and meet the masterminds behind your favorite development tools! In our recurring blog series, "Meet the Android Studio Team," we'll introduce you to the brilliant engineers, designers, product managers, and more who are shaping the future of Android development.

Join us each week to uncover the unique perspectives and stories of the people who make Android Studio the best it can be.


Jamal Eason: Building better Android apps - insights on Gemini, Crashlytics, and App Quality

Meet Jamal Eason, a Director of Product Management at Google, whose passion for empowering developers shines through in his work on Android Studio.

His journey, from studying computer science at West Point to developing Android hardware at Intel (including contributions to the Motorola Razr i), showcases a deep understanding of the developer experience. From attending the very first Android Studio unveiling at Google I/O to now shaping its future, Jamal brings a unique perspective to the team.

Jamal shares his insights on the evolution of Android Studio, the importance of a strong developer community, and the features he's most proud of.


Can you tell us about your journey to becoming a part of the Android Studio team? What sparked your interest in Android development?

I have had an interest in programming since an early age, especially since studying computer science in undergrad at the United States Military Academy (West Point), and in that time I have been interested not just in the creation of software but also in the tools developers use to make it.

My interest in Android development came as I was preparing for my first job after my military career in telecommunications & computer networks, when I was joining a team at the Intel Corporation that worked with Google to build Android hardware products. I thought the best way to understand Google and mobile was to download the Android SDK and create my own app end to end. My first taste of Android was Froyo 2.2 using the Eclipse-based Android Developer Tools IDE.

At Intel, I worked on creating the x86 based version of the Android Emulator and Emulator system image, and also a new Hypervisor that would accelerate the performance of the Android Emulator on x86 based laptops. After helping ship the Motorola Razr i (xt890) Android phone with Intel technology inside and x86 optimized apps on the device, I made the move to the Android team at Google. With my experience in developing Android apps, and shipping Android developer tools, the Android developer tools team was a natural fit.

Interestingly, I attended Google I/O as an attendee the year Android Studio was first revealed, and the following year I was working on the team to bring Android Studio to its Beta release at the next year’s Google I/O.

What unique perspective or experience do you bring to the Android Studio team, and how does it influence your work?

Unique experiences I bring include:

  • Technical Translation - In my prior roles, I worked with highly technical teams and learned how to take abstract technical concepts and present them to audiences of different technical skill levels. And in the reverse, I worked with many non-technical customers and colleagues and learned how to translate their pain points into product opportunities solved with technical solutions and innovation.
  • User Empathy - I was previously a software developer, I still regularly code on small side projects, and I really enjoy spending time with developers who use Android Studio. From first-hand experience and user engagement, I regularly bring the voice of the user into the discussion, from the inception of a product idea to the final stages of the release process.
  • UX Design Sense - In a previous career, I designed and created websites and user interfaces for software. I developed an eye for good UX design and flows, particularly in technical software products. These skills complement the dedicated UX design team in Android Studio and help us avoid the productivity pitfalls of poor product and UX flows.

In your opinion, what is the most impactful feature or improvement the Android team has introduced in recent years, and why?

It’s hard to nail down just one, but the top three are:

  1. Product quality
  2. Integration of Gemini
  3. Integrations with Crashlytics and Play with App Quality Insights

The most impactful feature we worked on is product quality. We treat quality, especially the core code editing experience, as a feature. If a developer can’t write a line of code and deploy it to a device, then everything else is secondary. Since Android is always evolving, this is an ongoing effort, but it is critical for the team to stay focused on.

On top of quality, thoughtful integration of Gemini into Android Studio is a real accelerant for app development. Our focus with AI is to make Android developers more productive, and to make the harder tasks and toil easier. From AI-powered code completion, to built-in Gemini chat for Android app development, to enhancing existing tools with AI, such as using Gemini to generate Jetpack Compose UI Previews, we are just at the beginning of leveraging AI to make Android app developers more productive.

Lastly, with App Quality Insights, it is now much easier for app developers to address the performance and quality issues found with Firebase Crashlytics and Android Vitals from Google Play. Surfacing these issues right next to source code and source control makes resolving them much faster and more intuitive.

How does the Android Studio team ensure that products or features meet the ever-changing needs of developers?

First, the Android Studio team works hand-in-hand with the Android OS team, striving to deliver developer tools in concert with new Android OS and API changes so developers are ready to adopt new Android platform capabilities into their apps. Then, we constantly review and prioritize developer feedback received via our issue tracker or via the bi-annual developer survey we post on the Android Developers site. We also engage with developers via various social media channels. And lastly, we regularly interview developers of various experience levels, from regions around the world, in targeted user research studies.

What advice would you give to aspiring Android developers who are just starting their journey?

  1. Start with a robust set of code labs and tutorials.
  2. Get inspired on the possibilities of Android and what you can build.
  3. Join the Android developer community.

Deploy with Confidence

Inspired by Jamal's journey and dedication to empowering developers? Explore the latest Android Studio features, including App Quality Insights, to improve your app's performance and address issues quickly.

Stay tuned

Don't miss the next installment of our "Meet the Android Studio Team" series, where we'll introduce you to another amazing member of our team and share their unique journey. Stay tuned for more!

Find Jamal Eason on LinkedIn and X.

Google Research at I/O 2023

Wednesday, May 10th was an exciting day for the Google Research community as we watched the results of months and years of our foundational and applied work get announced on the Google I/O stage. With the quick pace of announcements on stage, it can be difficult to convey the substantial effort and unique innovations that underlie the technologies we presented. So today, we’re excited to reveal more about the research efforts behind some of the many exciting announcements at this year's I/O.


PaLM 2

Our next-generation large language model (LLM), PaLM 2, is built on advances in compute-optimal scaling, scaled instruction fine-tuning and an improved dataset mixture. By fine-tuning and instruction-tuning the model for different purposes, we have been able to integrate state-of-the-art capabilities into over 25 Google products and features, where it is already helping to inform, assist and delight users. For example:

  • Bard is an early experiment that lets you collaborate with generative AI and helps to boost productivity, accelerate ideas and fuel curiosity. It builds on advances in deep learning efficiency and leverages reinforcement learning from human feedback to provide more relevant responses and increase the model’s ability to follow instructions. Bard is now available in 180 countries, where users can interact with it in English, Japanese and Korean, and thanks to the multilingual capabilities afforded by PaLM 2, support for 40 languages is coming soon.
  • With Search Generative Experience we’re taking more of the work out of searching, so you’ll be able to understand a topic faster, uncover new viewpoints and insights, and get things done more easily. As part of this experiment, you’ll see an AI-powered snapshot of key information to consider, with links to dig deeper.
  • MakerSuite is an easy-to-use prototyping environment for the PaLM API, powered by PaLM 2. In fact, internal user engagement with early prototypes of MakerSuite accelerated the development of our PaLM 2 model itself. MakerSuite grew out of research focused on prompting tools, or tools explicitly designed for customizing and controlling LLMs. This line of research includes PromptMaker (precursor to MakerSuite), and AI Chains and PromptChainer (one of the first research efforts demonstrating the utility of LLM chaining).
  • Project Tailwind also made use of early research prototypes of MakerSuite to develop features to help writers and researchers explore ideas and improve their prose; its AI-first notebook prototype used PaLM 2 to allow users to ask questions of the model grounded in documents they define.
  • Med-PaLM 2 is our state-of-the-art medical LLM, built on PaLM 2. Med-PaLM 2 achieved 86.5% performance on U.S. Medical Licensing Exam–style questions, illustrating its exciting potential for health. We’re now exploring multimodal capabilities to synthesize inputs like X-rays.
  • Codey is a version of PaLM 2 fine-tuned on source code to function as a developer assistant. It supports a broad range of Code AI features, including code completions, code explanation, bug fixing, source code migration, error explanations, and more. Codey is available through our trusted tester program via IDEs (Colab, Android Studio, Duet AI for Cloud, Firebase) and via a 3P-facing API.

Perhaps even more exciting for developers, we have opened up the PaLM APIs & MakerSuite to provide the community opportunities to innovate using this groundbreaking technology.

PaLM 2 has advanced coding capabilities that enable it to find code errors and make suggestions in a number of different languages.
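Because the PaLM API is exposed over REST, a first experiment doesn't need a client library. Below is a minimal, illustrative Kotlin (JVM) sketch against the public v1beta2 text endpoint; the endpoint, model name, and response shape reflect the API as documented at launch and may change, and the PALM_API_KEY environment variable is a hypothetical placeholder for a key issued through MakerSuite.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Hypothetical placeholder: a key created in MakerSuite.
    val apiKey = System.getenv("PALM_API_KEY")

    // Ask the model to spot a bug, exercising PaLM 2's coding capabilities.
    val body = """{"prompt": {"text": "Find the bug: fun add(a: Int, b: Int) = a - b"}}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create(
            "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText?key=$apiKey"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    // The JSON response carries a "candidates" array of generated completions.
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```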

Imagen

Our Imagen family of image generation and editing models builds on advances in large Transformer-based language models and diffusion models. This family of models is being incorporated into multiple Google products, including:

  • Image generation in Google Slides and Android’s Generative AI wallpaper are powered by our text-to-image generation features.
  • Google Cloud’s Vertex AI enables image generation, image editing, image upscaling and fine-tuning to help enterprise customers meet their business needs.
  • I/O Flip, a digital take on a classic card game, features Google developer mascots on cards that were entirely AI generated. This game showcased a fine-tuning technique called DreamBooth for adapting pre-trained image generation models. Using just a handful of images as inputs for fine-tuning, it allows users to generate personalized images in minutes. With DreamBooth, users can synthesize a subject in diverse scenes, poses, views, and lighting conditions that don’t appear in the reference images.
    I/O Flip presents custom card decks designed using DreamBooth.

Phenaki

Phenaki, Google’s Transformer-based text-to-video generation model, was featured in the I/O pre-show. Phenaki can synthesize realistic videos from textual prompt sequences by leveraging two main components: an encoder-decoder model that compresses videos to discrete embeddings and a transformer model that translates text embeddings to video tokens.


ARCore and the Scene Semantic API

Among the new features of ARCore announced by the AR team at I/O, the Scene Semantic API can recognize pixel-wise semantics in an outdoor scene. This helps users create custom AR experiences based on the features in the surrounding area. The API is powered by an outdoor semantic segmentation model, leveraging our recent work around the DeepLab architecture and an egocentric outdoor scene understanding dataset. The latest ARCore release also includes an improved monocular depth model that provides higher accuracy in outdoor scenes.

The Scene Semantic API uses a DeepLab-based semantic segmentation model to provide accurate pixel-wise labels in an outdoor scene.
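For developers who want to try this, here's a hedged Kotlin sketch of enabling Scene Semantics in an ARCore session and querying, per frame, how much of the camera image is sky; the class and method names follow the ARCore Android SDK at the time of writing and may differ between releases.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.SemanticLabel
import com.google.ar.core.Session

// Enable the Scene Semantics mode if the device supports it.
fun enableSceneSemantics(session: Session) {
    val config = session.config
    if (session.isSemanticModeSupported(Config.SemanticMode.ENABLED)) {
        config.semanticMode = Config.SemanticMode.ENABLED
        session.configure(config)
    }
}

// Fraction of camera pixels the segmentation model labels as sky,
// queried each frame inside the app's update loop.
fun skyFraction(frame: Frame): Float =
    frame.getSemanticLabelFraction(SemanticLabel.SKY)
```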

Chirp

Chirp is Google's family of state-of-the-art Universal Speech Models trained on 12 million hours of speech to enable automatic speech recognition (ASR) for 100+ languages. The models can perform ASR on under-resourced languages, such as Amharic, Cebuano, and Assamese, in addition to widely spoken languages like English and Mandarin. Chirp is able to cover such a wide variety of languages by leveraging self-supervised learning on an unlabeled multilingual dataset with fine-tuning on a smaller set of labeled data. Chirp is now available in the Google Cloud Speech-to-Text API, allowing users to perform inference on the model through a simple interface. You can get started with Chirp here.


MusicLM

At I/O, we launched MusicLM, a text-to-music model that generates 20 seconds of music from a text prompt. You can try it yourself on AI Test Kitchen, or see it featured during the I/O preshow, where electronic musician and composer Dan Deacon used MusicLM in his performance.

MusicLM, which consists of models powered by AudioLM and MuLAN, can make music (from text, humming, images or video) and musical accompaniments to singing. AudioLM generates high-quality audio with long-term consistency. It maps audio to a sequence of discrete tokens and casts audio generation as a language modeling task. To synthesize longer outputs efficiently, it uses a novel approach we’ve developed called SoundStorm.


Universal Translator dubbing

Our dubbing efforts leverage dozens of ML technologies to translate the full expressive range of video content, making videos accessible to audiences across the world. These technologies have been used to dub videos across a variety of products and content types, including educational content, advertising campaigns, and creator content, with more to come. We use deep learning technology to achieve voice preservation and lip matching and to enable high-quality video translation. We’ve built this product to include human review for quality and safety checks to help prevent misuse, and we make it accessible only to authorized partners.


AI for global societal good

We are applying our AI technologies to solve some of the biggest global challenges, like mitigating climate change, adapting to a warming planet and improving human health and wellbeing. For example:

  • Traffic engineers use our Green Light recommendations to reduce stop-and-go traffic at intersections and improve the flow of traffic in cities from Bangalore to Rio de Janeiro and Hamburg. Green Light models each intersection, analyzing traffic patterns to develop recommendations that make traffic lights more efficient — for example, by better synchronizing timing between adjacent lights, or adjusting the “green time” for a given street and direction.
  • We’ve also expanded global coverage on the Flood Hub to 80 countries, as part of our efforts to predict riverine floods and alert people who are about to be impacted before disaster strikes. Our flood forecasting efforts rely on hydrological models informed by satellite observations, weather forecasts and in-situ measurements.

Technologies for inclusive and fair ML applications

With our continued investment in AI technologies, we are emphasizing responsible AI development with the goal of making our models and tools useful and impactful while also ensuring fairness, safety and alignment with our AI Principles. Some of these efforts were highlighted at I/O, including:

  • The release of the Monk Skin Tone Examples (MST-E) Dataset to help practitioners gain a deeper understanding of the MST scale and train human annotators for more consistent, inclusive, and meaningful skin tone annotations. You can read more about this and other developments on our website. This is an advancement on the open source release of the Monk Skin Tone (MST) Scale we launched last year to enable developers to build products that are more inclusive and that better represent their diverse users.
  • A new Kaggle competition (open until August 10th) in which the ML community is tasked with creating a model that can quickly and accurately identify American Sign Language (ASL) fingerspelling — where each letter of a word is spelled out in ASL rapidly using a single hand, rather than using the specific signs for entire words — and translate it into written text. Learn more about the fingerspelling Kaggle competition, which features a song from Sean Forbes, a deaf musician and rapper. We also showcased at I/O how the winning algorithm from the prior year’s competition powers PopSign, an ASL learning app for parents of deaf or hard of hearing children created by Georgia Tech and Rochester Institute of Technology (RIT).

Building the future of AI together

It’s inspiring to be part of a community of so many talented individuals who are leading the way in developing state-of-the-art technologies, responsible AI approaches and exciting user experiences. We are in the midst of a period of incredible and transformative change for AI. Stay tuned for more updates about the ways in which the Google Research community is boldly exploring the frontiers of these technologies and using them responsibly to benefit people’s lives around the world. We hope you're as excited as we are about the future of AI technologies and we invite you to engage with our teams through the references, sites and tools that we’ve highlighted here.


Source: Google AI Blog


Let’s go. It’s Google I/O 2023

Posted by Jeanine Banks, VP & General Manager, Developer X, and Head of Developer Relations

Google I/O is back and you’re invited to join us online May 10! Learn about Google’s latest solutions, products, and technologies for developers that help unlock your creativity and simplify your development workflow. You’ll also get to hear about ways to use the latest in technology, from AI and cloud to mobile and web. Tune in to watch the livestreamed keynotes from Shoreline Amphitheater in Mountain View, CA, then dive into 100+ on-demand technical sessions and engage with helpful learning material. Visit the Google I/O site and register to stay informed about I/O and other related events coming soon.

Want to get a head start?

Stay tuned for more updates. We look forward to seeing you in May!


Modern Android Development at Google I/O ‘22

Posted by Nick Butcher, Developer Relations Engineer

Our goal is to make developing beautiful and engaging Android apps as fast and easy as possible. We want to take on the complex parts of building apps so that you can focus on your app’s features and deliver high quality experiences to your users.

We call this approach Modern Android Development (or MAD for short!) and deliver it through a suite of tools, libraries and guidance. At Google I/O we announced a number of updates and additions to our MAD offerings; here’s a recap of the three largest announcements.


#1 Compose 1.2 Beta

Jetpack Compose 1.2 reaches its first Beta, which means the API is stable. We continue to build out our roadmap, bringing the APIs you need to support more advanced use cases like downloadable fonts, LazyGrids, window insets, nested scrolling interop, and more tooling support with features like Live Edit, recomposition counts in the Layout Inspector and Animation Preview. Learn more about how developers like Airbnb are improving their productivity with Jetpack Compose, and check out what else is new in Compose.
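As a concrete taste of one of those APIs, here's a minimal sketch of a two-column grid built with LazyVerticalGrid; package locations and parameter names are as of Compose 1.2, and the labels parameter is a hypothetical stand-in for real content.

```kotlin
import androidx.compose.foundation.lazy.grid.GridCells
import androidx.compose.foundation.lazy.grid.LazyVerticalGrid
import androidx.compose.foundation.lazy.grid.items
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// A lazily composed two-column grid: only visible cells are composed,
// mirroring the behavior of LazyColumn/LazyRow.
@Composable
fun LabelGrid(labels: List<String>) {
    LazyVerticalGrid(columns = GridCells.Fixed(2)) {
        items(labels) { label ->
            Text(text = label)
        }
    }
}
```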


#2 Baseline Profiles

Baseline profiles allow you to embed a profile to guide the Android Runtime about which code paths should be pre-compiled rather than interpreted, which could dramatically impact critical user journeys like app startup. This is especially significant when using unbundled libraries like Jetpack Compose which don’t benefit from optimizations in platform code.

Many Jetpack libraries (including Jetpack Compose) already ship baseline profiles, but you can learn how to add them to your own apps and libraries to boost their performance. We've seen up to 40% faster app startup times thanks to adding baseline profiles alone, no other code changes required!
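As a sketch of how a profile can be generated with the AndroidX Macrobenchmark library: the instrumented test below records the classes and methods exercised during startup and emits a baseline-prof.txt to package with your app. The collection method has been renamed across library versions, and com.example.app is a hypothetical application ID.

```kotlin
import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Runs as an instrumented test in a separate macrobenchmark module.
@RunWith(AndroidJUnit4::class)
class BaselineProfileGenerator {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun startup() = baselineProfileRule.collect(
        packageName = "com.example.app"  // hypothetical application ID
    ) {
        // Record the code paths on the critical startup journey.
        pressHome()
        startActivityAndWait()
    }
}
```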


#3 Live Edit

With Live Edit you can edit composables and view those changes in real time, on the Compose Preview or on physical devices or emulators, enabling rapid iteration. Live Edit is an opt-in experimental feature in Android Studio Electric Eel, with a number of limitations. Please try it out and provide your feedback.

Those were the top three announcements about Modern Android Development at Google I/O. To learn more, check out the full playlist of talks and workshops.

Building better products for new internet users

Since the launch of Google’s Next Billion Users (NBU) initiative in 2015, nearly 3 billion people worldwide came online for the very first time. In the next four years, we expect another 1.2 billion new internet users, and building for and with these users allows us to build better for the rest of the world.

For this year’s I/O, the NBU team has created sessions that will showcase how organizations can address representation bias in data, learn how new users experience the web, and understand Africa’s fast-growing developer ecosystem to drive digital inclusion and equity in the world around us.

We invite you to join these developer sessions and hear perspectives on how to build for the next billion users. Together, we can make technology helpful, relevant, and inclusive for people new to the internet.

Session: Building for everyone: the importance of representative data

Mike Knapp, Hannah Highfill and Emila Yang from Google’s Next Billion Users team, in partnership with Ben Hutchinson from Google’s Responsible AI team, will be leading a session on how to crowdsource data to build more inclusive products.

Data gathering is often the most overlooked aspect of AI, yet the data used for machine learning directly impacts a project’s success and lasting potential. Many organizations—Google included—struggle to gather the right datasets required to build inclusively and equitably for the next billion users. “We are going to talk about a very experimental product and solution to building more inclusive technology,” says Knapp of his session. “Google is testing a paid crowdsourcing app [Task Mate] to better serve underrepresented communities. This tool enables developers to reach ‘crowds’ in previously underrepresented regions. It is an incredible step forward in the mission to create more inclusive technology.”

Bookmark this session to your I/O developer profile.

Session: What we can learn from the internet’s newest users

“The first impression that your product makes matters,” says Nicole Naurath, Sr. UX Researcher - Next Billion Users at Google. “It can either spark curiosity and engagement, or confuse your audience.”

Every day, thousands of people are coming online for the first time. Their experience can be directly impacted by how familiar they are with technology. People with limited digital experience, or novice internet users, experience the web differently, and developers are sometimes not used to building for them. Design elements such as images, icons, and colors play a key role in digital experience. If images are not relatable, icons are irrelevant, and colors are not grounded in cultural context, the experience can confuse anyone, especially someone new to the internet.

Nicole Naurath and Neha Malhotra, from Google’s Next Billion Users team, will be leading the session on what we can learn from the internet’s newest users: how those users experience the web, and a framework for evaluating products that work for novice internet users.

Bookmark this session to your I/O developer profile.

Session: Africa’s booming developer ecosystem

Software developers are the catalyst for digital transformation in Africa. They empower local communities, spark growth for businesses, and drive innovation in a continent which more than 1.3 billion people call home. Demand for African developers reached an all-time high last year, driven by both local and remote opportunities, and is growing even faster than the continent's developer population.

Andy Volk and John Kimani from the Developer and Startup Ecosystem team in Sub-Saharan Africa will share findings from the Africa Developer Ecosystem 2021 report.

In their words, “This session is for anyone who wants to find out more about how African developers are building for the world or who is curious to find out more about this fast-growing opportunity on the continent. We are presenting trends, case studies and new research from Google and its partners to illustrate how people and organizations are coming together to support the rapid growth of the developer ecosystem.”

Bookmark this session to your I/O developer profile.

To learn more about Google’s Next Billion Users initiative, visit nextbillionusers.google