Tag Archives: Google I/O

Google Research at I/O 2023

Wednesday, May 10th was an exciting day for the Google Research community as we watched the results of months and years of our foundational and applied work get announced on the Google I/O stage. With the quick pace of announcements on stage, it can be difficult to convey the substantial effort and unique innovations that underlie the technologies we presented. So today, we’re excited to reveal more about the research efforts behind some of the many exciting announcements at this year's I/O.


PaLM 2

Our next-generation large language model (LLM), PaLM 2, is built on advances in compute-optimal scaling, scaled instruction fine-tuning, and an improved dataset mixture. By fine-tuning and instruction-tuning the model for different purposes, we have been able to integrate state-of-the-art capabilities into over 25 Google products and features, where it is already helping to inform, assist and delight users. For example:

  • Bard is an early experiment that lets you collaborate with generative AI and helps to boost productivity, accelerate ideas and fuel curiosity. It builds on advances in deep learning efficiency and leverages reinforcement learning from human feedback to provide more relevant responses and increase the model’s ability to follow instructions. Bard is now available in 180 countries, where users can interact with it in English, Japanese and Korean, and thanks to the multilingual capabilities afforded by PaLM 2, support for 40 languages is coming soon.
  • With Search Generative Experience we’re taking more of the work out of searching, so you’ll be able to understand a topic faster, uncover new viewpoints and insights, and get things done more easily. As part of this experiment, you’ll see an AI-powered snapshot of key information to consider, with links to dig deeper.
  • MakerSuite is an easy-to-use prototyping environment for the PaLM API, powered by PaLM 2. In fact, internal user engagement with early prototypes of MakerSuite accelerated the development of our PaLM 2 model itself. MakerSuite grew out of research focused on prompting tools, or tools explicitly designed for customizing and controlling LLMs. This line of research includes PromptMaker (precursor to MakerSuite), and AI Chains and PromptChainer (one of the first research efforts demonstrating the utility of LLM chaining).
  • Project Tailwind also made use of early research prototypes of MakerSuite to develop features to help writers and researchers explore ideas and improve their prose; its AI-first notebook prototype used PaLM 2 to allow users to ask questions of the model grounded in documents they define.
  • Med-PaLM 2 is our state-of-the-art medical LLM, built on PaLM 2. Med-PaLM 2 achieved 86.5% performance on U.S. Medical Licensing Exam–style questions, illustrating its exciting potential for health. We’re now exploring multimodal capabilities to synthesize inputs like X-rays.
  • Codey is a version of PaLM 2 fine-tuned on source code to function as a developer assistant. It supports a broad range of Code AI features, including code completions, code explanation, bug fixing, source code migration, error explanations, and more. Codey is available through our trusted tester program via IDEs (Colab, Android Studio, Duet AI for Cloud, Firebase) and via a 3P-facing API.

Perhaps even more exciting for developers, we have opened up the PaLM API and MakerSuite to give the community opportunities to innovate using this groundbreaking technology.
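
As an illustration, here is a minimal Kotlin sketch of calling the PaLM API over REST with an API key obtained through MakerSuite. The endpoint, model name and JSON fields are assumptions based on the public v1beta2 documentation and may change as the API evolves; PALM_API_KEY is a hypothetical environment variable.

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse

    // Hedged sketch: endpoint, model name, and JSON shape follow the public v1beta2
    // PaLM API surface and should be checked against the current reference.
    fun main() {
        val apiKey = System.getenv("PALM_API_KEY")  // hypothetical environment variable
        val body = """{"prompt": {"text": "Explain compute-optimal scaling in one sentence."}}"""
        val request = HttpRequest.newBuilder()
            .uri(URI.create(
                "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText?key=$apiKey"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build()
        val response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())
        println(response.body())  // JSON containing generated candidates
    }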

PaLM 2 has advanced coding capabilities that enable it to find code errors and make suggestions in a number of different languages.

Imagen

Our Imagen family of image generation and editing models builds on advances in large Transformer-based language models and diffusion models. This family of models is being incorporated into multiple Google products, including:

  • Image generation in Google Slides and Android’s Generative AI wallpaper are powered by our text-to-image generation features.
  • Google Cloud’s Vertex AI enables image generation, image editing, image upscaling and fine-tuning to help enterprise customers meet their business needs.
  • I/O Flip, a digital take on a classic card game, features Google developer mascots on cards that were entirely AI generated. This game showcased a fine-tuning technique called DreamBooth for adapting pre-trained image generation models. Using just a handful of images as inputs for fine-tuning, it allows users to generate personalized images in minutes. With DreamBooth, users can synthesize a subject in diverse scenes, poses, views, and lighting conditions that don’t appear in the reference images.
    I/O Flip presents custom card decks designed using DreamBooth.

Phenaki

Phenaki, Google’s Transformer-based text-to-video generation model, was featured in the I/O pre-show. Phenaki can synthesize realistic videos from textual prompt sequences by leveraging two main components: an encoder-decoder model that compresses videos to discrete embeddings and a transformer model that translates text embeddings to video tokens.


ARCore and the Scene Semantic API

Among the new features of ARCore announced by the AR team at I/O, the Scene Semantic API can recognize pixel-wise semantics in an outdoor scene. This helps users create custom AR experiences based on the features in the surrounding area. The API is powered by an outdoor semantic segmentation model that builds on our recent work on the DeepLab architecture and an egocentric outdoor scene understanding dataset. The latest ARCore release also includes an improved monocular depth model that provides higher accuracy in outdoor scenes.

The Scene Semantics API uses a DeepLab-based semantic segmentation model to provide accurate pixel-wise labels in outdoor scenes.

Chirp

Chirp is Google's family of state-of-the-art Universal Speech Models trained on 12 million hours of speech to enable automatic speech recognition (ASR) for 100+ languages. The models can perform ASR on under-resourced languages, such as Amharic, Cebuano, and Assamese, in addition to widely spoken languages like English and Mandarin. Chirp covers such a wide variety of languages by leveraging self-supervised learning on an unlabeled multilingual dataset, followed by fine-tuning on a smaller set of labeled data. Chirp is now available in the Google Cloud Speech-to-Text API, allowing users to perform inference on the model through a simple interface. You can get started with Chirp here.
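
For a feel of what inference looks like, here is a rough Kotlin sketch against the Speech-to-Text v2 REST surface. The project, location, recognizer name, and access token are placeholders, the recognizer is assumed to have been created with the Chirp model beforehand, and the field names should be checked against the current API reference.

    import java.net.URI
    import java.net.http.HttpClient
    import java.net.http.HttpRequest
    import java.net.http.HttpResponse
    import java.nio.file.Files
    import java.nio.file.Path
    import java.util.Base64

    // Hedged sketch: transcribe a short local clip through a Chirp-backed recognizer.
    // Resource names and request fields are assumptions based on the public v2 API.
    fun main() {
        val accessToken = System.getenv("GCLOUD_ACCESS_TOKEN")  // e.g. from `gcloud auth print-access-token`
        val recognizer = "projects/my-project/locations/us-central1/recognizers/my-chirp-recognizer"
        val audio = Base64.getEncoder().encodeToString(Files.readAllBytes(Path.of("clip.wav")))
        val body = """{"config": {"autoDecodingConfig": {}}, "content": "$audio"}"""
        val request = HttpRequest.newBuilder()
            .uri(URI.create("https://speech.googleapis.com/v2/$recognizer:recognize"))
            .header("Authorization", "Bearer $accessToken")
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build()
        val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
        println(response.body())  // JSON with transcription results
    }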


MusicLM

At I/O, we launched MusicLM, a text-to-music model that generates 20 seconds of music from a text prompt. You can try it yourself on AI Test Kitchen, or see it featured during the I/O preshow, where electronic musician and composer Dan Deacon used MusicLM in his performance.

MusicLM, which consists of models powered by AudioLM and MuLAN, can make music (from text, humming, images or video) and musical accompaniments to singing. AudioLM generates high-quality audio with long-term consistency. It maps audio to a sequence of discrete tokens and casts audio generation as a language modeling task. To synthesize longer outputs efficiently, it uses a novel approach we’ve developed called SoundStorm.


Universal Translator dubbing

Our dubbing efforts leverage dozens of ML technologies to translate the full expressive range of video content, making videos accessible to audiences across the world. These technologies have been used to dub videos across a variety of products and content types, including educational content, advertising campaigns, and creator content, with more to come. We use deep learning technology to achieve voice preservation and lip matching and enable high-quality video translation. We’ve built this product to include human review for quality, safety checks to help prevent misuse, and we make it accessible only to authorized partners.


AI for global societal good

We are applying our AI technologies to solve some of the biggest global challenges, like mitigating climate change, adapting to a warming planet and improving human health and wellbeing. For example:

  • Traffic engineers use our Green Light recommendations to reduce stop-and-go traffic at intersections and improve the flow of traffic in cities from Bangalore to Rio de Janeiro and Hamburg. Green Light models each intersection, analyzing traffic patterns to develop recommendations that make traffic lights more efficient — for example, by better synchronizing timing between adjacent lights, or adjusting the “green time” for a given street and direction.
  • We’ve also expanded global coverage on the Flood Hub to 80 countries, as part of our efforts to predict riverine floods and alert people who are about to be impacted before disaster strikes. Our flood forecasting efforts rely on hydrological models informed by satellite observations, weather forecasts and in-situ measurements.

Technologies for inclusive and fair ML applications

With our continued investment in AI technologies, we are emphasizing responsible AI development with the goal of making our models and tools useful and impactful while also ensuring fairness, safety and alignment with our AI Principles. Some of these efforts were highlighted at I/O, including:

  • The release of the Monk Skin Tone Examples (MST-E) Dataset to help practitioners gain a deeper understanding of the MST scale and train human annotators for more consistent, inclusive, and meaningful skin tone annotations. You can read more about this and other developments on our website. This is an advancement on the open source release of the Monk Skin Tone (MST) Scale we launched last year to enable developers to build products that are more inclusive and that better represent their diverse users.
  • A new Kaggle competition (open until August 10th) in which the ML community is tasked with creating a model that can quickly and accurately identify American Sign Language (ASL) fingerspelling — where each letter of a word is spelled out in ASL rapidly using a single hand, rather than using the specific signs for entire words — and translate it into written text. Learn more about the fingerspelling Kaggle competition, which features a song from Sean Forbes, a deaf musician and rapper. We also showcased at I/O how the winning algorithm from the prior year’s competition powers PopSign, an ASL learning app for parents of deaf or hard of hearing children created by Georgia Tech and Rochester Institute of Technology (RIT).

Building the future of AI together

It’s inspiring to be part of a community of so many talented individuals who are leading the way in developing state-of-the-art technologies, responsible AI approaches and exciting user experiences. We are in the midst of a period of incredible and transformative change for AI. Stay tuned for more updates about the ways in which the Google Research community is boldly exploring the frontiers of these technologies and using them responsibly to benefit people’s lives around the world. We hope you're as excited as we are about the future of AI technologies and we invite you to engage with our teams through the references, sites and tools that we’ve highlighted here.


Source: Google AI Blog


Let’s go. It’s Google I/O 2023

Posted by Jeanine Banks, VP & General Manager, Developer X, and Head of Developer Relations

Google I/O is back and you’re invited to join us online May 10! Learn about Google’s latest solutions, products, and technologies for developers that help unlock your creativity and simplify your development workflow. You’ll also get to hear about ways to use the latest in technology, from AI and cloud, to mobile and web. Tune in to watch the live streamed keynotes from Shoreline Amphitheater in Mountain View, CA, then dive into 100+ on-demand technical sessions and engage with helpful learning material. Visit the Google I/O site and register to stay informed about I/O and other related events coming soon.

Want to get a head start?

    Stay tuned for more updates. We look forward to seeing you in May!

    Modern Android Development at Google I/O ‘22

    Posted by Nick Butcher, Developer Relations Engineer


    Our goal is to make developing beautiful and engaging Android apps as fast and easy as possible. We want to take on the complex parts of building apps so that you can focus on your app’s features and deliver high quality experiences to your users.

    We call this approach Modern Android Development (or MAD for short!) and deliver it through a suite of tools, libraries and guidance. At Google I/O we announced a number of updates and additions to our MAD offerings; here’s a recap of the three largest announcements.


    #1 Compose 1.2 Beta

    Jetpack Compose 1.2 reaches its first Beta, which means the API is stable. We continue to build out our roadmap, bringing the APIs you need to support more advanced use cases like downloadable fonts, LazyGrids, window insets and nested scrolling interop, along with more tooling support through features like Live Edit, recomposition counts in the Layout Inspector, and Animation Preview. Learn more about how developers like Airbnb are improving their productivity with Jetpack Compose, and check out what else is new in Compose.
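
    As a small taste of one of those APIs, here is a minimal lazy grid composable roughly as it looks in Compose 1.2; the list of strings is a hypothetical stand-in for real app data and item composables.

        import androidx.compose.foundation.lazy.grid.GridCells
        import androidx.compose.foundation.lazy.grid.LazyVerticalGrid
        import androidx.compose.foundation.lazy.grid.items
        import androidx.compose.material.Text
        import androidx.compose.runtime.Composable

        // Minimal sketch of the LazyVerticalGrid API that reached Beta in Compose 1.2.
        // The strings here stand in for your own data and item composables.
        @Composable
        fun SessionGrid(sessions: List<String>) {
            LazyVerticalGrid(columns = GridCells.Fixed(2)) {
                items(sessions) { session ->
                    Text(text = session)
                }
            }
        }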


    #2 Baseline Profiles

    Baseline profiles allow you to embed a profile that guides the Android Runtime about which code paths should be pre-compiled rather than interpreted, which can dramatically improve critical user journeys like app startup. This is especially significant when using unbundled libraries like Jetpack Compose, which don’t benefit from optimizations in platform code.

    Many Jetpack libraries (including Jetpack Compose) already ship baseline profiles, but you can learn how to add them to your own apps and libraries to boost their performance. We've seen up to 40% faster app startup times thanks to adding baseline profiles alone, no other code changes required!
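
    A baseline profile is typically generated from a macrobenchmark test. The sketch below assumes a Macrobenchmark test module is already set up; the package name is a placeholder, and the exact collection method name may differ between library versions.

        import androidx.benchmark.macro.junit4.BaselineProfileRule
        import org.junit.Rule
        import org.junit.Test

        // Sketch: records the code paths exercised by a critical user journey
        // (here, cold startup) so they can be pre-compiled via a baseline profile.
        class StartupBaselineProfile {
            @get:Rule
            val baselineProfileRule = BaselineProfileRule()

            @Test
            fun generate() = baselineProfileRule.collectBaselineProfile(
                packageName = "com.example.app"  // placeholder package name
            ) {
                pressHome()
                startActivityAndWait()
            }
        }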


    #3 Live Edit

    With Live Edit you can edit composables and view those changes in real time, on the Compose Preview or on physical devices or emulators, enabling rapid iteration. Live Edit is an opt-in experimental feature in Android Studio Electric Eel, with a number of limitations. Please try it out and provide your feedback.

    Those were the top three announcements about Modern Android Development at Google I/O. To learn more, check out the full playlist of talks and workshops.

    Building better products for new internet users

    Since the launch of Google’s Next Billion Users (NBU) initiative in 2015, nearly 3 billion people worldwide have come online for the very first time. In the next four years, we expect another 1.2 billion new internet users, and building for and with these users allows us to build better for the rest of the world.

    For this year’s I/O, the NBU team has created sessions that will showcase how organizations can address representation bias in data, learn how new users experience the web, and understand Africa’s fast-growing developer ecosystem to drive digital inclusion and equity in the world around us.

    We invite you to join these developers sessions and hear perspectives on how to build for the next billion users. Together, we can make technology helpful, relevant, and inclusive for people new to the internet.

    Session: Building for everyone: the importance of representative data

    Mike Knapp, Hannah Highfill and Emila Yang from Google’s Next Billion Users team, in partnership with Ben Hutchinson from Google’s Responsible AI team, will be leading a session on how to crowdsource data to build more inclusive products.

    Data gathering is often the most overlooked aspect of AI, yet the data used for machine learning directly impacts a project’s success and lasting potential. Many organizations—Google included—struggle to gather the right datasets required to build inclusively and equitably for the next billion users. “We are going to talk about a very experimental product and solution to building more inclusive technology,” says Knapp of his session. “Google is testing a paid crowdsourcing app [Task Mate] to better serve underrepresented communities. This tool enables developers to reach ‘crowds’ in previously underrepresented regions. It is an incredible step forward in the mission to create more inclusive technology.”

    Bookmark this session to your I/O developer profile.

    Session: What we can learn from the internet’s newest users

    “The first impression that your product makes matters,” says Nicole Naurath, Sr. UX Researcher - Next Billion Users at Google. “It can either spark curiosity and engagement, or confuse your audience.”

    Every day, thousands of people are coming online for the first time. Their experience can be directly impacted by how familiar they are with technology. People with limited digital experience, or novice internet users, experience the web differently, and developers are not always used to building for them. Design elements such as images, icons, and colors play a key role in the digital experience. If images are not relatable, icons are irrelevant, and colors are not grounded in cultural context, the experience can confuse anyone, especially someone new to the internet.

    Nicole Naurath and Neha Malhotra, from Google’s Next Billion Users team, will lead a session on what we can learn from the internet’s newest users and how they experience the web, and will share a framework for evaluating products that work for novice internet users.

    Bookmark this session to your I/O developer profile.

    Session: Africa’s booming developer ecosystem

    Software developers are the catalyst for digital transformation in Africa. They empower local communities, spark growth for businesses, and drive innovation in a continent which more than 1.3 billion people call home. Demand for African developers reached an all-time high last year, driven by both local and remote opportunities, and is growing even faster than the continent's developer population.

    Andy Volk and John Kimani from the Developer and Startup Ecosystem team in Sub-Saharan Africa will share findings from the Africa Developer Ecosystem 2021 report.

    In their words, “This session is for anyone who wants to find out more about how African developers are building for the world or who is curious to find out more about this fast-growing opportunity on the continent. We are presenting trends, case studies and new research from Google and its partners to illustrate how people and organizations are coming together to support the rapid growth of the developer ecosystem.”

    Bookmark this session to your I/O developer profile.

    To learn more about Google’s Next Billion Users initiative, visit nextbillionusers.google

    Assistant Recap Google I/O 2021

    Written by: Jessica Dene Earley-Cha, Mike Bifulco and Toni Klopfenstein, Developer Relations Engineers for Google Assistant

    Now that we’ve packed up all of the virtual stages from Google I/O 2021, let's take a look at some of the highlights and new product announcements for App Actions, Conversational Actions, and Smart Home Actions. We also held a number of amazing live events and meetups that happened during I/O - which we’ll summarize as well.

    App Actions

    App Actions allows developers to extend their Android app to Google Assistant. For our Android developers, we are happy to announce that App Actions is now part of the Android framework. With the introduction of the beta shortcuts.xml configuration resource and our latest Google Assistant plugin for Android Studio, App Actions is moving closer to the Android platform.

    Capabilities

    Capabilities is a new Android framework API that allows you to declare the types of actions users can take to launch your app and jump directly to performing a specific task. Assistant provides the first available concrete implementation of the capabilities API. You use capabilities by creating shortcuts.xml resources and defining your capabilities. Each capability specifies two things: how it’s triggered and what to do when it’s triggered. To add a capability, use Built-In Intents (BIIs), which are pre-built intents that provide the natural language understanding needed to map the user’s input to individual fields. When a BII is matched by the user’s speech, your capability triggers an Android Intent that delivers the understood BII fields to your app, so you can determine what to show in response.
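
    To make that flow concrete, here is a hedged Kotlin sketch of an Activity receiving a capability-triggered intent; the "menuItemName" query parameter and OrderActivity are hypothetical and depend on how the intent is declared in your shortcuts.xml.

        import android.content.Intent
        import android.os.Bundle
        import androidx.appcompat.app.AppCompatActivity

        // Sketch: a capability in shortcuts.xml maps a BII field onto a deep-link
        // parameter, which the app reads when the Assistant-triggered intent arrives.
        class OrderActivity : AppCompatActivity() {
            override fun onCreate(savedInstanceState: Bundle?) {
                super.onCreate(savedInstanceState)
                if (intent.action == Intent.ACTION_VIEW) {
                    val item = intent.data?.getQueryParameter("menuItemName")  // hypothetical parameter
                    showOrderScreen(item)
                }
            }

            private fun showOrderScreen(item: String?) { /* render the requested item */ }
        }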

    This framework integration is in the Beta release stage, and will eventually replace the original implementation of App Actions that uses actions.xml. If your app provides both the new shortcuts.xml and old actions.xml, the latter will be disregarded.

    Voice shortcuts for Discovery

    Google Assistant suggests relevant shortcuts to users and has made it easier for users to discover and add shortcuts by saying “Hey Google, shortcuts.”


    You can use the Google Shortcuts Integration library, currently in beta, to push an unlimited number of dynamic shortcuts to Google and make them visible to users as voice shortcuts. Assistant can then suggest relevant shortcuts to make it more convenient for users to interact with your Android app.
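
    A rough sketch of pushing a dynamic shortcut with a capability binding is shown below; the shortcut ID, BII name, parameter, and target Activity are hypothetical examples, so check them against the Google Shortcuts Integration library documentation.

        import android.content.Context
        import android.content.Intent
        import androidx.core.content.pm.ShortcutInfoCompat
        import androidx.core.content.pm.ShortcutManagerCompat

        // Sketch: binds a dynamic shortcut to a Built-In Intent so Assistant can surface
        // it as a voice shortcut. Names and values here are illustrative placeholders.
        fun pushOrderShortcut(context: Context) {
            val shortcut = ShortcutInfoCompat.Builder(context, "order-coffee")
                .setShortLabel("Order coffee")
                .setIntent(
                    Intent(Intent.ACTION_VIEW)
                        .setClassName(context, "com.example.app.OrderActivity")
                )
                .addCapabilityBinding(
                    "actions.intent.ORDER_MENU_ITEM", "menuItem.name", listOf("coffee")
                )
                .build()
            ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)
        }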


    In-App Promo SDK

    Not only can Assistant suggest shortcuts; with the In-App Promo SDK, currently in beta, you can proactively suggest shortcuts in your app for actions the user can repeat with a voice command to Assistant. The SDK allows you to check whether the shortcut you want to suggest already exists for that user and prompt the user to create the suggested shortcut.

    Google Assistant plugin for Android Studio

    To support testing capabilities, we launched the Google Assistant plugin for Android Studio. It contains an updated App Action Test Tool that creates a preview of your App Action, so you can test an integration before publishing it to the Play Store.

    New App Actions resources

    Learn more with new or updated content:

    Conversational Actions

    During the What's New in Google Assistant keynote, Director of Product for the Google Assistant Developer Platform Rebecca Nathenson mentioned several coming updates and changes for Conversational Actions.

    Updates to Interactive Canvas

    Over the coming weeks, we’ll introduce new functionality to Interactive Canvas. Canvas developers will be able to manage intent fulfillment client-side, removing the need for intermediary webhooks in some cases. For use cases which require server-side fulfillment, like transactions and account linking, developers will be able to opt-in to server-side fulfillment as needed.

    We’re also introducing a new function, outputTts(), which allows you to trigger Text to Speech client-side. This should help reduce latency for end users.

    Additionally, there will be updates to the APIs available to get and set storage for both the home and individual users, allowing for client-side storage of user information. You’ll be able to persist user information within your web app, which was previously only available for access by webhook.


    These new features for Interactive Canvas will be made available soon as part of a developer preview for Conversational Actions Developers. For more details on these new features, check out the preview page.

    Updates to Transaction UX for Smart Displays

    Also coming soon to Conversational Actions - we’re updating the workflow for completing transactions, allowing users to complete transactions from their smart screens, by confirming the CVC code from their chosen payment method. Watch our demo video showing new transaction features on smart devices to get a feel for these changes.

    Tips on Launching your Conversational Action

    Make sure to catch our technical session Driving a successful launch for Conversational Actions to learn about some strategies for putting together a marketing team and go-to-market plan for releasing your Conversational Action.

    AMA: Games on Google Assistant

    If you’re interested in building Games for Google Assistant with Conversational Actions, you should check out the recording of our AMA, where Googlers answered questions from I/O attendees about designing, building, and launching games.


    Smart Home Actions

    The What's new in Smart Home keynote covered several updates for Smart Home Actions. Following our continued emphasis on quality smart home integrations with the updated policy launch, we added new features to help you build engaging, reliable Actions for your users.

    Test Suite and Analytics

    The updated Test Suite for Smart Home now supports automatic testing, without the use of TTS. Additionally, the Analytics dashboards have been expanded with more detailed logs and in-depth error reporting to help you more quickly identify any potential issues with your Action. For a deeper dive into these enhancements, try out the Debugging the Smart Home workshop. There are also two new debugging codelabs to help you get more familiar with using these tools to improve the quality of your Action.

    Notifications

    We expanded support for proactive notifications to include the device traits RunCycle and SensorState, so users can now be proactively notified about multiple different device events. We also announced the release of follow-up responses, which enable your smart devices to notify users asynchronously when device changes succeed or fail.

    WebRTC

    We added support for WebRTC to the CameraStream trait. Smart camera users can now benefit from lower latency and half-duplex talk between devices. As mentioned in the keynote, we will also be making updates to the other currently supported protocols for smart cameras.

    Bluetooth Seamless Setup

    To improve the onboarding experience, developers can now enable BLE (Bluetooth Low Energy) for device onboarding with Bluetooth Seamless Setup. Google Home and Nest devices can act as local hubs to provision and register nearby devices for any Action configured with local fulfillment.

    Matter

    Project CHIP has officially rebranded as Matter. Once the IP-based connectivity protocol officially launches, we will be supporting devices running the protocol. Watch the Getting started with Project CHIP tech session to learn more.

    Ecosystem and Community

    The women building voice AI and their role in the voice revolution

    Voice AI is fundamentally changing how we interact with technology and its future will be a product of the people that build it. Watch this session to hear about the talented women shaping the Voice AI field, including an interview with Lilian Rincon, Sr. Director of Product Management at Google. Leslie also discusses strategies for achieving equal gender representation in Voice AI, an ambitious but essential goal.

    AMA: How the Assistant Investment Program can help fund your startup

    This "Ask Me Anything" session was hosted by the all-star team who runs the Google for Startups Accelerator: Voice AI. The team fielded questions from startups and investors around the world who are interested in building businesses based on voice technology. Check out the recording of this event here. The day after the AMA session, the 2021 cohort for the Voice AI accelerator had their demo day - you can catch the recording of their presentations here.


    Women in Voice Meetup

    We connected with amazing women in Voice AI and discussed ways allies can help women in Voice to be more successful while building a more inclusive ecosystem. The meetup was hosted by Leslie Garcia-Amaya, Jessica Dene Earley-Cha, Karina Alarcon, Mike Bifulco, Cathy Pearl, Toni Klopfenstein, Shikha Kapoor, and Walquiria Saad.

    Smart home developer Meetups

    One of the perks of I/O being virtual this year was the ability to connect with students, hobbyists, and developers around the globe to discuss the current state of Smart Home, as well as some of the upcoming features. We hosted 3 meetups for the APAC, Americas, and EMEA regions and gathered some great feedback from the community.

    Assistant Google Developers Experts Meetup

    Every year we host an Assistant Google Developer Expert meetup to connect and share knowledge. This year we were able to invite everyone who is interested in building for Google Assistant to network and connect with one another. At the end several attendees came together at the Assistant Sandbox for a virtual photo!


    Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

    Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

    Android 12 Beta 2 Update

    Posted by Dave Burke, VP of Engineering


    Just a few weeks ago at Google I/O we unwrapped the first beta of Android 12, focusing on a new UI that adapts to you, improved performance, and privacy and security at the core. For developers, Android 12 gives you better tools to build delightful experiences for people on phones, laptops, tablets, wearables, TVs, and cars.

    Today we’re releasing the second Beta of Android 12 for you to try. Beta 2 adds new privacy features like the Privacy Dashboard and continues our work of refining the release.

    End-to-end there’s a lot for developers in Android 12 - from the redesigned UI and app widgets, to rich haptics, improved video and image quality, privacy features like approximate location, and much more. For a quick look at related Google I/O sessions, see Android 12 at Google I/O later in the post.

    You can get Beta 2 today on your Pixel device by enrolling here for over-the-air updates, and if you previously enrolled for Beta 1, you’ll automatically get today’s update. Android 12 Beta is also available on select devices from several of our partners - learn more at android.com/beta.

    Visit the Android 12 developer site for details on how to get started.

    What’s new in Beta 2?

    Beta 2 includes several of the new privacy features we talked about at Google I/O, as well as various feature updates to improve functionality, stability, and performance. Here are a few highlights.

    Privacy Dashboard - We’ve added a Privacy Dashboard to give users better visibility over the data that apps are accessing. The dashboard offers a simple and clear timeline view of all recent app accesses to microphone, camera, and location. Users can also request details from an app on why it has accessed sensitive data, and developers can provide this information in an activity by handling a new system intent, ACTION_VIEW_PERMISSION_USAGE_FOR_PERIOD. We recommend that apps take advantage of this intent to proactively help users understand accesses in the given time period. To help you track these accesses in your code and any third-party libraries, we recommend using the Data Auditing APIs. More here.
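
    As a sketch of that handshake, the Activity below reads the permission group and time window the system supplies with the intent; it assumes the Activity is declared in the manifest with an intent filter for ACTION_VIEW_PERMISSION_USAGE_FOR_PERIOD, and the rationale UI itself is left out.

        import android.content.Intent
        import android.os.Bundle
        import androidx.appcompat.app.AppCompatActivity

        // Sketch: explains data accesses for the permission group and period the
        // Privacy Dashboard asks about. Manifest registration is assumed.
        class DataAccessRationaleActivity : AppCompatActivity() {
            override fun onCreate(savedInstanceState: Bundle?) {
                super.onCreate(savedInstanceState)
                val permissionGroup = intent.getStringExtra(Intent.EXTRA_PERMISSION_GROUP_NAME)
                val startMillis = intent.getLongExtra(Intent.EXTRA_START_TIME, 0L)
                val endMillis = intent.getLongExtra(Intent.EXTRA_END_TIME, 0L)
                // Render your own explanation of why data in permissionGroup was
                // accessed between startMillis and endMillis.
            }
        }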


    Privacy dashboard and location access timeline.

    Mic and camera indicators - We’ve added indicators to the status bar to let users know when apps are using the device camera or microphone. Users can go to Quick Settings to see which apps are accessing their camera or microphone data and manage permissions if needed. For developers, we recommend reviewing your app’s uses of the microphone and camera and removing any that users would not expect. More here.

    Microphone & camera toggles - We’ve added Quick Settings toggles on supported devices that make it easy for users to instantly disable app access to the microphone and camera. When the toggles are turned off, an app accessing these sensors will receive blank camera and audio feeds, and the system handles notifying the user to enable access to use the app’s features. Developers can use a new API, SensorPrivacyManager, to check whether toggles are supported on the device. The microphone and camera controls apply to all apps regardless of their platform targeting. More here.
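
    A minimal sketch of that check might look like the following; it simply asks whether either toggle exists so the app can explain blank camera or audio feeds to the user.

        import android.content.Context
        import android.hardware.SensorPrivacyManager

        // Sketch: detect whether the device exposes the Android 12 camera/microphone toggles.
        fun hasPrivacyToggles(context: Context): Boolean {
            val spm = context.getSystemService(SensorPrivacyManager::class.java) ?: return false
            return spm.supportsSensorToggle(SensorPrivacyManager.Sensors.CAMERA) ||
                spm.supportsSensorToggle(SensorPrivacyManager.Sensors.MICROPHONE)
        }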

    Clipboard read notification - To give users more transparency on when apps are reading from the clipboard, Android 12 now displays a toast at the bottom of the screen each time an app calls getPrimaryClip(). Android won’t show the toast if the clipboard was copied from the same app. We recommend minimizing your app’s reads from the clipboard, and making sure that you only access the clipboard when it will be expected by users. More here.
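
    One way to keep clipboard reads expected is to inspect the clip description before fetching the clip itself, as in the sketch below; getPrimaryClip() is the call the toast is tied to.

        import android.content.ClipDescription
        import android.content.ClipboardManager
        import android.content.Context

        // Sketch: check the clip description first and only fetch the clip (the call
        // the toast is tied to) when it looks like plain text the app can actually use.
        fun readClipboardTextIfPresent(context: Context): CharSequence? {
            val clipboard = context.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
            val looksLikeText = clipboard.primaryClipDescription
                ?.hasMimeType(ClipDescription.MIMETYPE_TEXT_PLAIN) == true
            if (!looksLikeText) return null
            return clipboard.primaryClip?.getItemAt(0)?.text
        }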

    More intuitive connectivity experience - To help users understand and manage their network connections better, we’re introducing a simpler and more intuitive connectivity experience across the Status Bar, Quick Settings, and Settings. The new Internet Panel helps users switch between their Internet providers and troubleshoot network connectivity issues more easily. Let us know what you think!


    New Internet controls through Quick Settings.

    Visit the Android 12 developer site to learn more about all of the new features in Android 12.

    Android 12 at Google I/O

    At Google I/O we talked about everything that’s new in Android for developers - from Android 12 to Modern Android Development tools, new form factors like Wear and foldables, and Google Play. Here are the top 3 things to know about Android 12 at Google I/O.

    #1 A new UI for Android - Android 12 brings the biggest design change in Android's history. We rethought the entire experience, from the colors to the shapes, light and motion, making Android 12 more expressive, dynamic, and personal, under a single design language called Material You.

    #2 Performance - With Android 12, we made significant and deep investments in performance, from foundational system performance and battery life to foreground service changes, media quality and performance, and new tools to optimize apps.

    #3 Privacy and security - In Android 12 we’re continuing to give users more transparency and control while keeping their devices and data secure.

    For an overview of Android 12 for developers, watch this year’s What's new in Android talk, and check out Top 12 tips to get ready for Android 12 for an overview of where to test your app for compatibility. The full list of Android content at Google I/O is here.

    App compatibility

    With more early-adopter users and developers getting Android 12 beta on Pixel and other devices, now is the time to make sure your apps are ready!

    To test your app for compatibility, install the published version from Google Play or other source onto a device or emulator running Android 12 Beta. Work through all of the app’s flows and watch for functional or UI issues. Review the behavior changes to focus your testing. There’s no need to change your app’s targetSdkVersion at this time, so when you’ve resolved any issues, publish an update as soon as possible for your Android 12 Beta users.


    With Beta 2, Android 12 is closing in on Platform Stability in August 2021. Starting then, app-facing system behaviors, SDK/NDK APIs, and non-SDK lists will be finalized. At that time, you should finish up your final compatibility testing and release a fully compatible version of your app, SDK, or library. More on the timeline for developers is here.

    Get started with Android 12!

    Today’s Beta release has everything you need to try the latest Android 12 features, test your apps, and give us feedback. Just enroll any supported Pixel device to get the update over-the-air. To get started developing, set up the Android 12 SDK.

    You can also get Android 12 Beta 2 on devices from some of our top device-maker partners like Sharp. Visit android.com/beta to see the full list of partners participating in Android 12 Beta. For even broader testing, you can try Android 12 Beta on Android GSI images, and if you don’t have a device you can test on the Android Emulator.

    Beta 2 is also available for Android TV, so you can check out the latest TV features and test your apps on the all-new Google TV experience. Try it out with the ADT-3 developer kit. More here.

    For complete details on Android 12 Beta, visit the Android 12 developer site.

    What’s new in Android TV (and Google TV!)

    Posted by Ben Serridge, Director of Product Management - TV Platforms and Dan Aharon, Product Manager


    Today at Google I/O 2021, we announced a significant milestone for our team: we have over 80 million monthly active devices on Android TV OS, with more than 80% growth in the US alone. We would not be here without the hard work of the developer community, so a huge and heartfelt thank you to you all.

    Android TV OS is the operating system that powers a number of devices around the world including the new Google TV experience launched last fall. Google TV has generated a lot of excitement from consumers, developers, and industry partners alike, offering a content forward TV experience that helps the user discover more of the movies and shows they love. Google TV is available on streaming devices like the Chromecast with Google TV, smart TVs from Sony (and soon TCL!), and as an app on Android devices. Check out this presentation on how to get your app ready for Google TV.

    Our goal is to always enable you to build better and more engaging experiences on Android TV OS. One example of this is the widely used Watch Next API, which increases app re-engagement by ~30% in certain cases.¹ Well over 100 major media partners are already using the Watch Next API, and you can learn more about how to add it to your app here.

    We are also announcing several new tools and helpful features to make developing for Android TV OS easier and enable you to create engaging experiences for your users. Some are already available and some will be available soon:

    • Cast Connect with Stream Transfer and Stream Expansion: Cast Connect allows users to cast from their phone, tablet, or Chrome browser onto your app on Android TV. Stream Transfer and Stream Expansion allow users to transfer media to other devices and/or play audio on multiple devices.
    • Emulator updates: To help you make your app work better on Google TV without requiring new hardware, we are now making our first Google TV Emulator available, running on Android 11. There will also be an Android 11 image with the traditional Android TV experience. You can now also use a remote that more closely mimics TV remotes directly within the Emulator.
    • Firebase Test Lab: Firebase Test Lab runs millions of tests every week on behalf of developers. Following requests from developers, we are excited to share that Firebase Test Lab is adding Android TV support. Firebase Test Lab Virtual Devices run your app in the cloud on Android TV emulators and allow you to scale your test across hundreds or thousands of virtual devices. Physical Devices will be coming soon.
    • Android 12 Beta 1: We are making the Android 12 Beta 1 available for TV on ADT-3 today. With this release the developer community will be able to take advantage of many of the changes and improvements coming with Android 12. We encourage you to try it and provide us with feedback.

    Thank you for your continued support of the Android TV OS platform. The future of TV is bright and we can’t wait to see what you build next!

    ¹ Average gain in number of days active in the app in a 28-day period amongst app 28DAUs, based on 3 apps analyzed during the 11/2020 - 2/2021 period.

    Google releases source code for Google I/O 2019 for Android

    Posted by Takeshi Hagikura, Developer Programs Engineer

    Today we're releasing the source code for the official Google I/O 2019 Android app.

    This year's app substantially modified existing functionality and added several new features. In this post, we’ll highlight several notable changes.

    Android Q out of the box

    • Gesture navigation

    Android Q introduced an option for fully gestural navigation, allowing the user to navigate back and to the home screen using only gestures. To support gesture navigation, app developers need to do two things:

    1. Extend app content to draw edge-to-edge
    2. Handle any conflicting app gestures

    The Google I/O 2019 app was one of the first apps to fully support gesture navigation. For more details, check out this series of blog posts about gesture navigation and the commit in the Google I/O app repository that extended the content to draw edge-to-edge.

    Gesture navigation navigating back and to the home screen

    • Dark theme

    Another new feature introduced with Android Q was the system Dark theme, which applies to both the Android system UI and apps running on the device. Dark theme brings many benefits to developers, including reduced power usage and improved visibility for users with low vision and those who are sensitive to bright light.

    To support the dark theme, you must set your app’s theme to inherit from a DayNight theme:

    <style name="AppTheme" parent="Theme.AppCompat.DayNight">
    OR
    <style name="AppTheme" parent="Theme.MaterialComponents.DayNight">


    You also need to avoid hard-coded colors or icons; use theme attributes (such as ?android:attr/textColorPrimary) or night-qualified resources (such as colors defined in both res/values/colors.xml and res/values-night/colors.xml) instead. Check out the Google I/O talk about Dark Theme & Gesture Navigation for more details, or the series of commits (1, 2, 3) in the Google I/O 2019 app repository for how we implemented the dark theme in a real app.
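
    Beyond inheriting from a DayNight theme, apps commonly let users override the system setting; a minimal sketch (with hypothetical preference values) is shown below.

        import androidx.appcompat.app.AppCompatDelegate

        // Sketch: apply an in-app theme preference; "light" / "dark" are hypothetical
        // values from your own settings screen, anything else follows the system setting.
        fun applyUserThemeChoice(choice: String) {
            val mode = when (choice) {
                "light" -> AppCompatDelegate.MODE_NIGHT_NO
                "dark" -> AppCompatDelegate.MODE_NIGHT_YES
                else -> AppCompatDelegate.MODE_NIGHT_FOLLOW_SYSTEM
            }
            AppCompatDelegate.setDefaultNightMode(mode)
        }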

    Schedule UI in dark theme

    Improved schedule screen

    In 2018, we adopted a tabbed interface for the schedule UI with horizontal swiping, where each tab represented a conference day. In 2019, we changed the UI to address some usability and performance problems. For example, the views in all tabs were rendered at the same time when the schedule UI became visible, which caused a noticeable UI slowdown, especially on low-end devices.

    The new schedule UI is a single stream, allowing the app to render only visible content and users to easily jump to another conference day by choosing a day at the top of the UI. Check out the series of commits (1, 2) for how we revamped the schedule UI.

    This year’s schedule UI jumping to another conference day

    Navigation component

    We introduced the Navigation component to simplify this year’s app into a single-activity app and observed the following benefits:

    • All transitions are visible at a glance in the navigation editor, which simplified launching Session Details and the Map from launch actions
    • Boilerplate code for handling Up and Back navigation was removed
    • Arguments passed between fragments are statically typed thanks to the Safe Args Gradle plugin

    Check out the getting started guide for how you can start introducing the Navigation component in your app and the series of commits (1, 2, 3, 4) in the Google I/O 2019 app repository for the usage in a real app.
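
    For a flavor of what a statically typed navigation call looks like, here is a sketch; ScheduleFragmentDirections is the class the Safe Args Gradle plugin would generate from a hypothetical action named toSessionDetail that takes a sessionId argument.

        import androidx.fragment.app.Fragment
        import androidx.navigation.fragment.findNavController

        // Sketch: navigate with a Safe Args-generated directions class instead of raw
        // IDs and Bundles. The directions class and action name are hypothetical.
        class ScheduleFragment : Fragment() {
            fun openSessionDetail(sessionId: String) {
                val directions = ScheduleFragmentDirections.toSessionDetail(sessionId)
                findNavController().navigate(directions)
            }
        }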

    All transitions in the navigation editor

    Full Text Search with Room

    For this year’s app we added a search feature for users to quickly find sessions, speakers, and codelabs. To accomplish this, we used the Full Text Search feature of the Room Jetpack component. Whenever the conference data is fetched from the server, we update the session, speaker, and codelab data in the Room tables, which have corresponding FTS mapping tables. When a user starts typing in the search box, the search term is used to query the session title and description, speaker names, and codelab title. The results appear almost instantly and update with each character typed in the search field. The user can then tap on a search result to see the details of the session, speaker, or codelab. Check out the series of commits (1, 2, 3, 4) for how we achieved the Full Text Search feature.
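
    A simplified sketch of the Room setup is shown below: a content entity, an FTS4 mapping table, and a DAO query using MATCH; the entity and column names are hypothetical simplifications of the app’s real schema.

        import androidx.room.Dao
        import androidx.room.Entity
        import androidx.room.Fts4
        import androidx.room.PrimaryKey
        import androidx.room.Query

        // Sketch: an external-content FTS4 table backed by the sessions table, plus a
        // DAO query that matches the user's search term against the indexed columns.
        @Entity(tableName = "sessions")
        data class Session(
            @PrimaryKey(autoGenerate = true) val id: Long = 0,
            val title: String,
            val description: String
        )

        @Fts4(contentEntity = Session::class)
        @Entity(tableName = "sessionsFts")
        data class SessionFts(
            val title: String,
            val description: String
        )

        @Dao
        interface SessionDao {
            @Query(
                "SELECT sessions.* FROM sessions JOIN sessionsFts ON sessions.id = sessionsFts.rowid " +
                    "WHERE sessionsFts MATCH :query"
            )
            fun searchSessions(query: String): List<Session>
        }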

    Searching for a session and a speaker

    Lots of improvements

    These were the biggest changes we made to the app, but we improved a lot of little things as well. We added the new Home UI, which surfaces time-relevant information to the user during the conference, and the Codelabs UI, which gives users more information about codelabs at I/O and how to participate in them.

    Home UI and Codelabs UI

    We also introduced Firebase Remote Config to toggle the visibility of each feature by updating boolean values in Remote Config without shipping an app update, and removed the hard-coded values that represented the start and end times of each event in the Agenda UI.
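
    A minimal sketch of such a toggle is below; "feature_home_enabled" is a hypothetical parameter name, and in practice you would also register in-app defaults.

        import com.google.firebase.ktx.Firebase
        import com.google.firebase.remoteconfig.ktx.remoteConfig

        // Sketch: fetch and activate Remote Config values, then gate a feature on a flag.
        fun fetchFeatureFlags(onReady: (homeEnabled: Boolean) -> Unit) {
            val remoteConfig = Firebase.remoteConfig
            remoteConfig.fetchAndActivate().addOnCompleteListener {
                onReady(remoteConfig.getBoolean("feature_home_enabled"))
            }
        }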

    Go explore the code

    If you’re interested, go check out the code and let us know what you think. If you have any questions or issues, please let us know via the issue tracker on GitHub.