
Finding Stability in Open Source Work

At Google, open source is at the core of our infrastructure, processes, and culture. For the last 19 years, Google’s Open Source Programs Office (OSPO) has enabled our organization to support open source ecosystems through funding, training, mentorship and direct contribution. Every year for the last 5 years, roughly 10% of our workforce has contributed to open source projects as part of their work as well as in their personal time. We’re focused on investing in and protecting open source communities and infrastructure, as well as expanding access to open source opportunities around the world. Every day we seek to promote open and connected ecosystems as the foundation of technological advancement.

For the last four years, researchers in Google's Open Source Programs Office (OSPO) have analyzed our open source contribution activity annually to identify trends and changes in behavior. The goal of this effort has been to increase transparency and accountability across all of the communities we engage with, as well as provide feedback indicators for Alphabet’s internal tools, processes, and policies. In this iteration, our 2022 open source contribution metrics were remarkably consistent with what we found in 2021, which gives us confidence that what we're measuring is a good representation of open source behavior, especially after the extreme outlier year of 2020.


Security remains a priority

At Alphabet, open source software remains a critical component of our infrastructure, products, and services and we continue to rely on the health and availability of open source projects. Through internal efforts and collaboration with industry-led efforts such as OpenSSF, Alphabet is committed to bolstering the security posture of projects, users, and developers of open source software.

In 2021, Google began funding two Linux Foundation contractors to focus exclusively on security, and in 2022 we've continued to sponsor their work to eliminate fragile C language features and APIs in the kernel. We also continue to support the Rust-in-Linux project, with the goal of improving memory safety, strengthening APIs, and reducing the number of bugs overall in the project. In late 2022, Rust infrastructure support landed in the upstream kernel.

The deps.dev project released a public BigQuery dataset, allowing anyone to explore and analyze the dependencies, advisories, ownership, license, and other metadata of open source packages across supported ecosystems, and explore how this metadata has changed over time.
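If you want to poke at this dataset yourself, a minimal sketch with the BigQuery Python client is shown below; the dataset and table names are best-guess assumptions for illustration, so confirm them against the deps.dev documentation before relying on the results.

from google.cloud import bigquery

# Sketch only: query the deps.dev public dataset for recent versions of a package.
# The dataset path `bigquery-public-data.deps_dev_v1` and the `PackageVersions`
# table/column names are assumptions to verify against the deps.dev docs.
client = bigquery.Client()  # uses your default Google Cloud project and credentials

query = """
    SELECT Name, Version, System
    FROM `bigquery-public-data.deps_dev_v1.PackageVersions`
    WHERE System = 'NPM' AND Name = 'lodash'
    ORDER BY Version DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.System, row.Name, row.Version)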

In 2022 we announced:

  • The OSV-Scanner, a free tool enabling open source developers and users to identify and remediate known vulnerabilities in their project's OSS dependencies. The OSV-Scanner provides a supported frontend to the OSV database, which connects a project's list of dependencies with the vulnerabilities that affect them (see the example query after this list).
  • The GOSST Upstream Team, a dedicated staff of Google open source security engineers who spend 100% of their time working closely with upstream maintainers to improve the security of critical open source projects.
  • Graph for Understanding Artifact Composition (GUAC), which aggregates software security metadata into a high-fidelity graph database, normalizing entity identities and mapping standard relationships between them.
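As promised above, here is a minimal sketch of what an OSV database lookup for a single dependency looks like; OSV-Scanner automates this across your full dependency list, and the package and version here are arbitrary examples chosen for illustration.

import json
import urllib.request

# Sketch only: ask the public OSV.dev API which known vulnerabilities affect
# one specific package version. The package/version below are arbitrary examples.
payload = {
    "package": {"name": "jinja2", "ecosystem": "PyPI"},
    "version": "2.4.1",
}
request = urllib.request.Request(
    "https://api.osv.dev/v1/query",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    vulns = json.load(response).get("vulns", [])

for vuln in vulns:
    print(vuln["id"], "-", vuln.get("summary", "(no summary)"))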

Our contributions continue to scale with our growing workforce

In 2022, roughly 10% of Alphabet's full-time workforce contributed to open source projects hosted on GitHub or Git-on-Borg, our internal production Git service (more details below). This percentage has remained roughly consistent over the last five years, indicating that our open source contribution has continued to scale with the growth of Alphabet. Similar to last year, FTEs represented over 95% of our open source workers, while the remainder includes vendors, independent contractors, temporary staff, and interns who contributed to open source projects during their tenure at Alphabet.

As open source work is core to our ongoing operations, we continue to track engagement over time, helping to compare continuous and sporadic participation. On average, over 45% of our active* contributing population for the year logged an activity on GitHub or Git-on-Borg in an average month. (see Figure 1)
Figure 1: Alphabet's monthly active users on GitHub and Git-on-Borg. Over the last five years, the number of monthly active users has continued to increase by more than 15% year over year on both GitHub and Git-on-Borg.

Our portfolio of projects remains active

We estimate that more than 2000 projects that originated from Alphabet teams and employees were still active* (not archived). To make this estimate, we chose a broad and variable definition of an open source project, including developer tools, utilities, languages, frameworks, libraries, demos, sample code, models, raw data, designs, and more.

Project counts should not be confused with repositories, as projects can include many repositories. Within Alphabet, we maintain over 7500 public repositories on GitHub and 1600 public repositories on Git-on-Borg. Our total number of repositories under management has decreased over time with the enforcement of a new archiving policy that flags repositories for archiving based on activity levels and owner feedback. Most of these repositories are open to outside contribution: more than 500,000 unique GitHub accounts not affiliated with Alphabet workers contributed to Alphabet projects in 2022.

The majority of our open source work happens outside of Alphabet organizations

The majority of repositories we work on are outside of Alphabet organizations: over the last five years, more than 70% of non-personal GitHub repositories Alphabet contributors interacted with were outside of Google-managed organizations. We updated the methodology behind this metric since our last edition to filter out forks created in the pull request workflow. The top projects (by unique contributors at Alphabet) include Google-initiated projects such as Kubernetes, Apache Beam, and gRPC, as well as community-led projects such as LLVM, Envoy, and Rust.


We continue to invest in the sustainability of open source ecosystems

The mission of the Google Open Source Programs Office remains the same: we sponsor, create, and invest in projects and programs that enable everyone to join and contribute to the global open source ecosystem. In 2022, OSPO provided $5.7M in membership fees and sponsorship funding to 60 key open source projects and organizations. This funding was in addition to our established annual programs:

  • In its 18th year, Google Summer of Code enabled more than 1000 individuals to contribute to more than 150 organizations. Over the lifetime of this program, more than 19,000 individuals from 112 countries have contributed to more than 800 open source organizations across the globe.
  • In its fourth year, Google Season of Docs provided direct grants to 30 open source projects to hire more than 50 technical writers to improve open source project documentation, and published its second case study report highlighting useful open source documentation metrics. More than half of the documentation created in the 2022 program consisted of how-tos, tutorials, and reference documentation; projects primarily wanted to add documentation for missing use cases and fix disorganized documentation.
  • Since 2011, the Google Open Source Peer Bonus Program has awarded bonuses for open source contributions to members of our extended community. In 2022 more than 300 contributors received awards, working in over 40 countries on more than 200 open source projects.

Our open source work will continue to grow and evolve to support the changing needs of our communities. Thank you to our colleagues and community members who continue to dedicate their personal and professional time supporting the open source ecosystem. Follow our work at opensource.google.

By Sophia Vargas – Researcher, Google Open Source Programs Office


About this data:

This report features metrics provided by many teams and programs across Alphabet. Regarding the code and code-adjacent activity data, we wanted to share more details about how those metrics are derived.

2022 updates: This year, we decided to remove event counts as it is increasingly difficult to differentiate automated activities from human-centered work. Even after filtering out non-human accounts, we couldn’t correlate these events to employee time spent on open source projects, and so we reduced our reporting to focus on our population and scope of effort.

  • Data sources: These data represent activities on repositories hosted on GitHub and our internal production Git service Git-on-Borg. These sources represent a subset of open source activity currently tracked by Google OSPO.
    • GitHub: We continue to use GitHub Archive as the primary source for GitHub data, which is available as a public dataset on BigQuery. Alphabet activity within GitHub is identified by self-registered accounts, which we estimate underreports actual activity.
    • Git-on-Borg: This is our primary platform for internal projects and some of our larger, long running public projects such as Android and Chromium. While we continue to develop on this platform, most of our open source activity has moved to GitHub to increase exposure and encourage community growth.
    • Distinct event types: Note that Git-on-Borg and GitHub APIs produce distinct sets of events—so we report activity metrics per platform. Where GitHub Event logs capture a wide range of activity from code creation and review to issue creation and comments, the Gerrit Event stream (used by Git-on-Borg) only captures code changes and reviews.
  • Driven by humans: We have created many automated bots and systems that can propose changes on various hosting platforms. We have intentionally filtered these data to focus on human-initiated activities.
  • Business and personal: Activity on GitHub reflects a mixture of Alphabet projects, third party projects, experimental efforts, and personal projects. Our metrics report on all of the above unless otherwise specified.
  • Alphabet contributors: Please note that unless additional detail is specified, activity counts attributed to Alphabet open source contributors will include our full-time employees as well as our extended Alphabet community (temps, vendors, contractors, and interns).
  • GitHub Accounts: For counts of GitHub accounts not affiliated with Alphabet, we cannot assume that one account is equivalent to one person, as multiple accounts could be tied to one individual or bot account.
  • *Active counts: Where possible, we will show ‘active users’ defined by logged activity (excluding ‘WatchEvent’) within a specified timeframe (a month, year, etc.) and ‘active repositories’ and ‘active projects’ as those that have enough activity to meet our internal criteria and have not been archived.
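For readers who want to approximate the GitHub side of an "active user" count from public data, the rough sketch below uses the GitHub Archive dataset on BigQuery and excludes WatchEvent as described above; it is an illustration of the definition only, not our exact internal methodology (which also maps accounts to Alphabet workers, filters out bots, and includes Git-on-Borg activity).

from google.cloud import bigquery

# Rough sketch: count distinct GitHub accounts with at least one logged,
# non-WatchEvent activity in December 2022, using the public GitHub Archive
# dataset. This illustrates the "active user" definition above; it is not
# Alphabet's internal methodology.
client = bigquery.Client()

query = """
    SELECT COUNT(DISTINCT actor.login) AS monthly_active_users
    FROM `githubarchive.month.202212`
    WHERE type != 'WatchEvent'
"""

row = list(client.query(query).result())[0]
print(row.monthly_active_users)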

Passkeys week is here

Posted by Milica Mihajlija, Technical Writer

Passkeys are an easier and more secure alternative to passwords. They let users sign in simply with a fingerprint, face scan, PIN, or pattern. This week we are sharing resources to help you understand passkeys and upgrade authentication on your sites and apps.

Every day from 23 to 27 October, we'll share new materials on @ChromiumDev and @AndroidDev, including blog posts, case studies, and a Q&A session. Use #PasskeysWeek to participate in the conversation and spread the word about your sites and apps that support passkeys.


Join our live Q&A

On 25 October at 10 AM PDT, we'll host a live Q&A session on the Google for Developers YouTube channel, where you'll be able to ask questions in the live chat and get answers from passkeys engineers at Google. To send us your questions ahead of time, tag @ChromiumDev and @AndroidDev on social media and use #PasskeysWeek.

Bookmark this link or click "Notify me" to get alerted when the livestream is about to start.

The recording will also be available on the channel after the event. Save the date and learn more about passkeys.


Where are passkeys today

Google Accounts have supported passkeys since May of this year, and on 10 October 2023 passkeys became the default sign-in method for all devices that support them. If you haven’t created a passkey for your Google account yet, head over to g.co/passkeys.

Google is also partnering with brands to enable passkeys across Chrome and Android platforms. Partners across the ecommerce, financial tech, and travel industries, along with other software providers, already support passkeys, creating easier, more secure sign-ins for their users.

eBay, Uber, and WhatsApp have recently joined that list: you can now sign in to your accounts on these services with passkeys on Chrome and Android.

Passkeys Authenticator partner logos - 1Password, Adobe, Dashlane, Docusign, ebay, KAYAK, Mercari, PayPal, Uber, WhatsApp, YahooJapan

Success stories

When the travel company KAYAK integrated passkeys into its Android and web apps, it reduced the time it takes users to sign up and sign in by 50%.

Password manager Dashlane can also manage passkeys across its Android, iOS, macOS, and Windows apps, as well as on the web with an extension for Chrome, Firefox, Edge, and Safari. Since introducing passkeys, Dashlane has seen a 70% increase in conversion rate for signing in with passkeys compared to passwords.

To learn more about these success stories keep an eye on #PasskeysWeek on @ChromiumDev and @AndroidDev, where we'll share full case studies in the next couple of days.


Learn how to implement passkeys and earn a badge

Are you a web developer? Are you ready to learn how to implement passkeys in a web app?

We have compiled everything you need to know in a short course: Passwordless login on the web with passkeys.

Are you an Android developer? Head over to Passkeys on Android.

Read the docs, complete the codelab, pass the quiz, and you’ll earn a passkeys badge on your Google Developer profile.

Passkeys Week badges for mobile and web

More resources

Stay tuned for more.

Save the date for Firebase’s first Demo Day!

Posted by Annum Munir, Product Marketing Manager

This article was originally posted on the Firebase blog.

For the past six years, we have shared the latest and greatest updates to Firebase, Google’s app development platform, at our annual Firebase Summit – this year, we wanted to do something a little different for our community of developers. So, in addition to the Flutter Firebase festival that just wrapped up, and meeting you all over the world at DevFests, we’re thrilled to announce our very first Firebase Demo Day, happening on November 8, 2023!

What is Demo Day?

Demo Day will be a virtual experience where we'll unveil short demos (i.e. pre-recorded videos) that showcase what's new, what's possible, and how you can solve your biggest app development challenges with Firebase. You’ll hear directly from our team about what they’ve been working on in a format that will feel both refreshing and familiar.

What will you learn?

You’ll learn how Firebase can help you build and run fullstack apps faster, harness the power of AI to build smart experiences, and use Google technology and tools together to be more productive. We’ve been working closely with our friends from Flutter, Google Cloud, and Project IDX to ensure the demos cover a variety of topics and feature integrated solutions from your favorite Google products.

How can you participate?

Since Demo Day is not your typical physical or virtual event, you don’t need to worry about registering, securing a ticket, or even traveling. This is one of the easiest ways to peek at the exciting future of Firebase! Simply bookmark the website (and add the event to your calendar), then check back on Wednesday, November 8, 2023 at 1:00 pm EST to watch the videos at your own pace and be inspired to make your app the best it can be for users and your business.

In the meantime, we encourage you to follow us on X (formerly Twitter) and LinkedIn and join the conversation using #FirebaseDemoDay. We’ll be sharing teasers and behind-the-scenes footage throughout October as we count down to Demo Day, so stay tuned!

Make with MakerSuite Part 2: Tuning LLMs

Posted by Pranay Bhatia – Product Manager, Google Labs

AI is changing how developers work, and it’s also making it possible for more people to build. In Part 1, we learned how MakerSuite can be used to easily prompt LLMs through plain language. Today, in Part 2, we’re introducing Tuning in MakerSuite, which will let you customize a model for your specific needs in minutes.

What is tuning?

In Part 1, we introduced a technique called few-shot prompting to improve a model’s performance by giving it a handful of examples. Tuning improves on this technique by training the model on many more examples—so many that they can’t all fit in the prompt.


Fine-tuning vs. Parameter Efficient Tuning

You may have heard about classic “fine-tuning” of models. This is where a pre-trained model is adapted to a particular task by training it on a smaller set of task-specific labeled data. But with today’s LLMs and their huge number of parameters, re-training is complex: it requires machine learning expertise, lots of data, and lots of compute.

Tuning in MakerSuite uses a technique called Parameter Efficient Tuning (PET) to produce customized, high-quality models without the additional costs and complexity of traditional fine-tuning. In addition, PET produces high-quality models with as few as a few hundred data points, reducing the burden of data collection for the developer.


Tune models in MakerSuite in minutes


1. Create a tuned model

It’s easy to tune models in MakerSuite. Simply select “Create new” and choose “Tuned model.”

Moving image of how to access 'Tuned Model' option from Create New menu in MakerSuite

2. Select data for tuning

You can tune your model from a saved data prompt or import data from Google Sheets or a CSV file. We recommend using at least 100 examples to get the best performance before you hit the Tune button (one way to prepare such a CSV is sketched after this step).

Moving image of importing data for tuning into MakerSuite
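If your examples live in code rather than a spreadsheet, a small sketch like the following can write them out as a CSV for the import step; the two-column layout and the column names are assumptions for illustration, so match whatever headers the MakerSuite import dialog asks for.

import csv

# Sketch only: save input/output example pairs as a CSV for import into the
# tuning flow. Column names here are assumptions; align them with the headers
# MakerSuite expects when you import.
examples = [
    ("Translate to French: Hello", "Bonjour"),
    ("Translate to French: Thank you", "Merci"),
    # ...aim for at least 100 pairs, per the guidance above.
]

with open("tuning_examples.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text_input", "output"])
    writer.writerows(examples)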

3. View your tuned model

View your tuning progress in your library. Once the model has finished tuning, you can view the details by clicking on your model.

Moving image of viewing details of a model once it has finished tuning

4. Run your tuned model

To start using your newly tuned model, create a new text or data prompt and select your newly tuned model from the list of available models.

Image showing location of model in list of available models in MakerSuite


MakerSuite: a powerful, easy tool for tuning

Tuning in MakerSuite empowers developers to harness the full potential of models like PaLM 2 with delightful ease. Whether you've already tuned a model with the API or just started experimenting with generative AI, you’ll find that MakerSuite opens up exciting possibilities to make the model more relevant and effective for your own application in just minutes.

Build with Google AI: new video series for developers

Posted by Joe Fernandez, AI Developer Relations, and Jaimie Hwang, AI Developer Marketing

Artificial intelligence (AI) represents a new frontier for technology we are just beginning to explore. While many of you are interested in working with AI, we realize that most developers aren't ready to dive into building their own artificial intelligence models (yet). With this in mind, we've created resources to get you started building applications with this technology.

Today, we are launching a new video series called Build with Google AI. This series features practical, useful AI-powered projects that don't require deep knowledge of artificial intelligence, or huge development resources. In fact, you can get these projects working in less than a day.

From self-driving cars to medical diagnosis, AI is automating tasks, improving efficiency, and helping us make better decisions. At the center of this wave of innovation are artificial intelligence models, including large language models like Google PaLM 2 and more focused AI models for translation, object detection, and other tasks. The frontier of AI, however, is not simply building new and better AI models, but also creating high-quality experiences and helpful applications with those models.

Practical AI code projects

This series is by developers, for developers. We want to help you build with AI, and not just any code project will do. They need to be practical and extensible. We are big believers in starting small and tackling concrete problems. The open source projects featured in the series are selected so that you can get them working quickly, and then build beyond them. We want you to take these projects and make them your own. Build solutions that matter to you.

Finally, and most importantly, we want to promote the use of AI that's beneficial to users, developers, creators, and organizations. So, we are focused on solutions that follow our principles for responsible use of artificial intelligence.

For the first arc of this series, we focus on how you can leverage Google's AI language model capabilities for applications, particularly the Google PaLM API. Here's what's coming up:

  • AI Content Search with Doc Agent (10/3) We'll show you how a technical writing team at Google built an AI-powered conversation search interface for their content, and how you can take their open source project and build the same functionality for your content. 
  • AI Writing Assistant with Wordcraft (10/10) Learn how the People and AI Research team at Google built a story writing application with AI technology, and how you can extend their code to build your own custom writing app. 
  • AI Coding Assistant with Pipet Code Agent (10/17) We'll show you how the AI Developer Relations team at Google built a coding assistance agent as an extension for Visual Studio Code, and how you can take their open source project and make it work for your development workflow.

For the second arc of the series, we'll bring you a new set of projects that run artificial intelligence applications locally on devices for lower latency, higher reliability, and improved data privacy.

Insights from the development teams

As developers, we love code, and we know that understanding someone else's code project can be a daunting task. The series includes demos and tutorials on how to customize the code, and we'll talk with the people behind the code. Why did they build it? What did they learn along the way? You’ll hear insights directly from the project team, so you can take it further.

Discover AI technologies from across Google

Google provides a host of resources for developers to build solutions with artificial intelligence. Whether you are looking to develop with Google's AI language models, build new models with TensorFlow, or deploy full-stack solutions with Google Cloud Vertex AI, it's our goal to help you find the AI technology solution that works best for your development projects. To start your journey, visit Build with Google AI.

We hope you are as excited about the Build with Google AI video series as we are to share it with you. Check out Episode #1 now! Use those video comments to let us know what you think and tell us what you'd like to see in future episodes.

Keep learning! Keep building!

Kakao Games increased FPS stability to 96% through Android Adaptability

Posted by Dohyun Kim, Developer Relations Engineer, Android Games

Finding the balance between graphics quality and performance

Ares: Rise of Guardians is a mobile-to-PC sci-fi MMORPG published by Kakao Games and developed by Second Dive, a game studio based in Korea known for its expertise in action RPG series. Set in a vast universe with a detailed, futuristic background, Ares is full of exciting gameplay and beautifully rendered characters, including combatants wearing battle suits. However, because of these richly detailed graphics, some users’ devices struggled to handle the gameplay without a drop in performance.

For some users, their device would overheat after just a few minutes of gameplay and enter a thermally throttled state. In this state, the CPU and GPU frequency are reduced, affecting the game’s performance and causing the FPS to drop. However, as soon as the decreased FPS improved the thermal situation, the FPS would increase again and the cycle would repeat. This FPS fluctuation would cause the game to feel janky.

Adjust the performance in real time with Android Adaptability

To solve this problem, Kakao Games used Android Adaptability and Unity Adaptive Performance to improve the performance and thermal management of their game.

Android Adaptability is a set of tools and libraries to understand and respond to changing performance, thermal, and user situations in real time. These include the Android Dynamic Performance Framework’s thermal APIs, which provide information about the thermal state of a device, and the PerformanceHint API, which helps Android choose the optimal CPU operating point and core placement. Both APIs work with the Unity Adaptive Performance package to help developers optimize their games.

Android Adaptability and Unity Adaptive Performance work together to adjust the graphics settings of your app or game to match the capabilities of the user’s device. As a result, it can improve performance, reduce thermal throttling and power consumption, and preserve battery life.

Moving image of gameplay from Ares: Rise of Guardians

Results

After integrating adaptive performance, Ares was better able to manage its thermal situation, which resulted in less throttling. As a result, users were able to enjoy a higher frame rate, and FPS stability increased from 75% to 96%.

In the charts below, the blue line indicates the thermal warning level. The bottom line (0.7) indicates no warning, the midline (0.8) is throttling imminent, and the upper line (0.9) is throttling. As you can see in the first chart, before implementing Android Adaptability, throttling happened after about 16 minutes of gameplay. In the second chart, you can see that after integration, throttling didn’t occur until around 22 minutes.

Graph showing high graphic quality setting measuring thermal headroom against thermal warning level in frames-per-second

Graph showing enabled android adaptability measuring thermal headroom against thermal warning level in frames-per-second

Kakao Games also wanted to reduce device heating, which they knew wasn’t possible with a continuously high graphic quality setting. The best practice is to gradually lower the graphical fidelity as device temperature increases to maintain a constant framerate and thermal equilibrium. So Kakao Games created a six-step change sequence with Android Adaptability, offering stable FPS and lower device temperatures. Automatic changes in fidelity are reflected in the in-game graphic quality settings (resolution, texture, shadow, effect, etc.) in the settings menu. Because some users want the highest graphic quality even if their device can’t sustain performance at that level, Kakao Games gave them the option to manually disable Unity Adaptive Performance.

Get started with Android Adaptability

Android Adaptability and Unity Adaptive Performance are now available to all Android game developers using the Android provider, on most Android devices running API level 30 and higher (thermal) and API level 31 and higher (PerformanceHint API). Developers can use the Android provider starting with Adaptive Performance version 5.0.0. The thermal APIs are integrated with Adaptive Performance to help developers easily retrieve device thermal information, and the PerformanceHint API is called automatically on every Update() without any additional work.

Learn how Android Adaptability and Unity Adaptive Performance can help you stabilize your game’s FPS and reduce thermal throttling.

Make with MakerSuite – Part 1: An Introduction

Posted by Ray Thai – Product Manager, Labs

We’re always on the lookout for tools and technologies that bring innovative solutions to our developer community. Generative AI refers to the ability of machine learning models, such as Large Language Models (LLMs) trained on massive amounts of data, to learn patterns and create new content such as text, images, videos, or audio. These are still under development, but we’re already seeing how models like PaLM 2 can enhance the quality of our code to make us more productive with tools like Project IDX and Android’s Studio Bot, or help us build new innovative user experiences like Bard. It’s exciting how simple it is to interact with these powerful LLMs so we’re kicking off a 5-part series called “Make with MakerSuite” to show you how easy it is to get started.


What is MakerSuite?

MakerSuite is a fast, easy way to start building generative AI apps. It provides an efficient UI for prompting some of Google’s latest models and easily translates prompts into production-ready code you can integrate into your applications. Today, we’ve removed the waitlist so anyone in 179 countries and territories can use MakerSuite.

The art of prompting LLMs

Interacting with LLMs is as straightforward as crafting a plain language prompt, making it accessible to everyone. Prompts can be as simple as a single input, but you have the flexibility to provide additional context or examples, effectively guiding the model toward the optimal response. You'll observe that you can achieve different outcomes simply by tweaking the way you phrase your prompts. To harness the power of these models safely and effectively, careful crafting and iterative refining become essential.

Choosing the Right Prompt Type: Text, Data, or Chat?

When it comes to using MakerSuite, there are three prompt types to help you achieve your goals.

1. Text Prompts: Unleash Your Creativity

Text prompts in MakerSuite provide a flexible and freeform experience that allows you to express yourself creatively through your prompts. Whether you're a beginner or an experienced user, text prompts offer a simple way to interact with the model.

image showing user generating ideas in MakerSuite
Generating ideas for a dinner party using a text prompt in MakerSuite

2. Data Prompts: Structured Few-Shot Prompts

Data prompts are the go-to choice when you have examples to help you specify precisely what you want from the model. They are perfect for applications that require a consistent input and output format such as data generation, translation, and more.

image showing user creating a reverse dictionary in MakerSuite
A reverse dictionary using a data prompt in MakerSuite


3. Chat Prompts: Building Conversational Experiences

If your goal is to create interactive chatbots or to simulate conversations, chat prompts are the solution! These prompts enable you to build engaging and interactive conversational experiences.

Image showing user chatting with a snowman in MakerSuite
Chatting with a snowman using a chat prompt in MakerSuite

No matter which prompt type you choose, you’ll find how easy it is to use MakerSuite to prompt some of the latest models from Google to build exciting, new user experiences.


We can’t wait to see what you build

AI is fundamentally reshaping the landscape of developer work and creativity, and we’re committed to empowering our developer community with access to cutting-edge models. We believe an open and collaborative developer community fuels progress and we're thrilled to see companies like LlamaIndex and Chroma harnessing MakerSuite as building blocks for their own innovations.

You can sign up to get started with MakerSuite in 179 countries and territories. You’ll find sample prompts for inspiration, or just start prompting to see what the model generates. Once you’re happy with your configuration, you can easily export it to code from MakerSuite and start integrating it into your applications, products, and services. If you prefer to prompt our models directly with the API, sign up and grab your API key from MakerSuite to start!
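For the API route, a minimal sketch with the PaLM API Python SDK looks roughly like the following; the model name and parameters are assumptions to verify against the current model list, and MakerSuite's export-to-code feature generates the exact equivalent for your own prompt.

import google.generativeai as palm

# Sketch only: prompt the PaLM API directly from Python. The model name and
# parameters are illustrative assumptions; check palm.list_models() for what
# your API key can access.
palm.configure(api_key="YOUR_API_KEY")  # key generated in MakerSuite

completion = palm.generate_text(
    model="models/text-bison-001",
    prompt="Suggest three themes for a dinner party.",
    temperature=0.7,
    max_output_tokens=256,
)
print(completion.result)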

Studio Bot expands to 170+ international markets!

Posted by Isabella Fiterman – Product Marketing Manager, and Sandhya Mohan – Product Manager

At this year’s Google I/O, one of the most exciting announcements for Android developers was the introduction of Studio Bot, an AI-powered coding assistant which can be accessed directly in Android Studio. Studio Bot can accelerate your ability to write high-quality Android apps faster by helping generate code for your app, answering your questions, and finding relevant resources, all without ever having to leave Android Studio. After our announcement, you told us how excited you were about this AI-powered coding companion, and those of you outside of the U.S. were eager to get your hands on it. We heard your feedback, and expanded Studio Bot to over 170 countries and territories in the canary release channel of Android Studio.

Ask Studio Bot your Android development questions

Studio Bot is powered by artificial intelligence and can understand natural language, so you can ask development questions in your own words. While it’s now available in most countries, it is designed to be used in English. You can enter your questions in Studio Bot’s chat window ranging from very simple and open-ended ones to specific problems that you need help with. Here are some examples of the types of queries it can answer:

How do I add camera support to my app?


I want to create a Room database.

Can you remind me of the format for javadocs?

What's the best way to get location on Android?

Studio Bot remembers the context of the conversation, so you can also ask follow-up questions, such as “Can you give me the code for this in Kotlin?” or “Can you show me how to do it in Compose?”


Moving image showing a user having a conversation with Studio Bot

Designed with privacy in mind

Studio Bot was designed with privacy in mind. You don’t need to send your source code to take advantage of Studio Bot’s features. By default, Studio Bot’s responses are purely based on conversation history, and you control whether you want to share additional context or code for customized responses. Much like our work on other AI projects, we stick to a set of AI Principles that hold us accountable.

Focus on quality

Studio Bot is still in its early days, and we suggest validating its responses before using them in a production app. We’re continuing to improve its Android development knowledge base and quality of responses so that it can better support your development needs in the future. You can help us improve Studio Bot by trying it out and sharing your feedback on its responses using the thumbs up and down buttons.

Try it out!

Download the latest canary release of Android Studio and read more about how you can get started with Studio Bot. You can also sign up to receive updates on Studio Bot as the experience evolves.

Latest ARTwork on hundreds of millions of devices

Posted by Serban Constantinescu, Product Manager

Wouldn’t it be great if each update improved start-up times, execution speed, and memory usage of your apps? Google Play system updates for the Android Runtime (ART) do just that. These updates deliver performance improvements and the latest security fixes, and unify the core OpenJDK APIs across hundreds of millions of devices, including all Android 12+ devices and soon Android Go.

ART is the engine behind the Android operating system (OS). It provides the runtime and core APIs that all apps and most OS services rely on. Both Java and Kotlin are compiled down to bytecode executed by ART. Improvements in the runtime, compiler, and core APIs benefit all developers, making app execution faster and bytecode compilation more efficient.

While parts of Android are customizable by device manufacturers, ART is the same for all devices and Google Play system updates enable a path to modular updates.

Modularizing the OS

Android was originally designed for monolithic updates, which meant that OS components did not need to have clear API boundaries. This is because all dependent software would be built together. However, this made it difficult to update ART independently of the rest of the OS. Our first challenge was to untangle ART's dependencies and create clear, well-defined, and tested API boundaries. This allowed us to modularize ART and make it independently updatable.

Illustration of a racecar with an engine part hovering above the hood. A curved arrow points to where this part should go

As a core part of the OS, ART had to blaze new trails and engineer new OS boundaries. These new boundaries were so extensive that manually adding and updating them would be too time-consuming. Therefore, we implemented automatic generation of those through introspection in the build system.

Another example is stack unwinding, which reports the functions last executed when an issue is detected. Before modularizing the OS, all stack unwinding code was built together and could change across Android versions. This made the transition even more challenging: because only one version of ART is delivered to many versions of Android, we had to create a new API boundary and design it to be forward-compatible with newer versions of the ART APEX module on devices that are no longer getting full OS updates.

Recently, for Android 14, we refactored the interface between the Package Manager, the service that determines how to install and update apps, and ART. This moves the OS boundary from the ART dex2oat command line to a well-defined interface that enables future optimizations, such as finer-grained control over the compilation mode.

ART updatability also introduced new challenges. For example, the collection of Java libraries, referred to as the Boot Classpath, had to be securely recompiled to ensure good performance. This required introducing a new secure state for compilation during boot as well as a fallback JIT compilation mode.

On older devices, the secure compilation happens on the first reboot after an ART update. On newer devices that support the Android Virtualization Framework, the compilation happens while the device is idle, in an enclave called Isolated Compilation – saving up to 20 seconds of boot-time.

Testing the ART APEX module

The ART APEX module is a complex piece of software with an order of magnitude more APIs than any other APEX module. It also backs a quarter of the developer APIs available in the Android SDK. In addition, ART has a compiler that aims to make the most of the underlying hardware by generating chipset-specific instructions, such as Arm SVE. This, together with the multiple OS versions on which the ART APEX module has to run, makes testing challenging.

We first modularized the testing framework from per-platform release (e.g. Android CTS) to per module. We did this by introducing an ART-specific Mainline Test Suite (MTS), which tests both compiler and runtime, as well as core OpenJDK APIs, while collecting code coverage statistics.

Our target is 100% API coverage and high line coverage, especially for new APIs. Together with HWASan and fuzzing, all of the tests described above contribute to a massive test load that needs to be sharded across multiple devices to ensure that it completes in a reasonable amount of time.

Illustration of modularized testing framework

We test the upcoming ART release every day by compiling over 18 million APKs and running app compatibility tests, and startup, performance, and memory benchmarks on a variety of Android devices that replicate the diversity of our ecosystem as closely as possible. Once tests pass with all possible compilation modes, all Garbage Collector algorithms, and supported OS versions, we begin gradually rolling out the next ART release.

Benefits of ART Google Play system updates

By updating ART independently of OS updates, users get the latest performance optimizations and security fixes as quickly as possible, while developers get OpenJDK improvements and compiler optimizations that benefit both Java and Kotlin.

As shown in the graph below, the runtime and compiler optimizations in the ART 13 update delivered real-world app start-up improvements of up to 30% on some devices.

Graph of average app startup time showing startup time in milliseconds with improvement up to 30% across 12 weeks on devices running the latest ART Google Play system update

ART updates allow us to frequently deploy fixes with little additional effort from our ecosystem partners. They include propagating upstream OpenJDK fixes to Android devices as quickly as possible, as well as runtime and compiler security fixes, such as CVE-2022-20502, which was detected by our automated fuzzing tests.

For developers, ART updates mean that you can now target the latest programming features. ART 13 delivered OpenJDK 11 core language features, which was the fastest-ever adoption of a new OpenJDK release on Android devices.

What’s next

In the coming months, we'll be releasing ART 14 to all compatible devices. ART 14 includes OpenJDK 17 support along with new compiler and runtime optimizations that improve performance while reducing code size. Stay tuned for more details on ART 14!

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

MediaPipe for Raspberry Pi and iOS

Posted by Paul Ruiz, Developer Relations Engineer

Back in May we released MediaPipe Solutions, a set of tools for no-code and low-code solutions to common on-device machine learning tasks, for Android, web, and Python. Today we’re happy to announce that the initial version of the iOS SDK, plus an update for the Python SDK to support the Raspberry Pi, are available. These include support for audio classification, face landmark detection, and various natural language processing tasks. Let’s take a look at how you can use these tools for the new platforms.

Object Detection for Raspberry Pi

Aside from setting up your Raspberry Pi hardware with a camera, you can start by installing the MediaPipe dependency, along with OpenCV and NumPy if you don’t have them already.

python -m pip install mediapipe

From there you can create a new Python file and add your imports to the top.

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
import cv2
import numpy as np

You will also want to make sure you have an object detection model stored locally on your Raspberry Pi. For your convenience, we’ve provided a default model, EfficientDet-Lite0, that you can retrieve with the following command.

wget -q -O efficientdet.tflite https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite0/int8/1/efficientdet_lite0.tflite

Once you have your model downloaded, you can start creating your new ObjectDetector, including some customizations, like the max results that you want to receive, or the confidence threshold that must be exceeded before a result can be returned.

# Initialize the object detection model
base_options = python.BaseOptions(model_asset_path=model)
options = vision.ObjectDetectorOptions(
    base_options=base_options,
    running_mode=vision.RunningMode.LIVE_STREAM,
    max_results=max_results,
    score_threshold=score_threshold,
    result_callback=save_result)
detector = vision.ObjectDetector.create_from_options(options)

After creating the ObjectDetector, you will need to open the Raspberry Pi camera to read the continuous frames. There are a few preprocessing steps that will be omitted here, but are available in our sample on GitHub.

Within that loop you can convert the processed camera image into a new MediaPipe.Image, then run detection on that new MediaPipe.Image before displaying the results that are received in an associated listener.

mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
detector.detect_async(mp_image, time.time_ns())
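The detections arrive asynchronously through the result_callback registered in the options above; a minimal sketch of that listener (the save_result name comes from the snippet above, but the body here is a simplified assumption rather than the full GitHub sample) could look like this:

# Sketch of the listener wired up as result_callback=save_result above. In
# LIVE_STREAM mode, MediaPipe invokes it asynchronously with the detection
# result, the image it was computed on, and a timestamp; here we just keep the
# most recent result so the main camera loop can draw it.
latest_result = None

def save_result(result, unused_output_image, timestamp_ms):
    global latest_result
    latest_result = result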

Once you draw out those results and detected bounding boxes, you should be able to see something like this:

Moving image of a person holding up a cup and a phone, with detected bounding boxes identifying these items in real time

You can find the complete Raspberry Pi example shown above on GitHub, or see the official documentation here.

Text Classification on iOS

While text classification is one of the more direct examples, the core ideas will still apply to the rest of the available iOS Tasks. Similar to the Raspberry Pi, you’ll start by creating a new MediaPipe Tasks object, which in this case is a TextClassifier.

var textClassifier: TextClassifier?
textClassifier = TextClassifier(modelPath: model.modelPath)

Now that you have your TextClassifier, you just need to pass a String to it to get a TextClassifierResult.

func classify(text: String) -> TextClassifierResult? {
  guard let textClassifier = textClassifier else {
    return nil
  }
  return try? textClassifier.classify(text: text)
}

You can do this from elsewhere in your app, such as a ViewController DispatchQueue, before displaying the results.

let result = self?.textClassifier?.classify(text: inputText)
let categories = result?.classificationResult.classifications.first?.categories ?? []

You can find the rest of the code for this project on GitHub, as well as see the full documentation on developers.google.com/mediapipe.

Moving image of TextClassifier on an iPhone

Getting started

To learn more, watch our I/O 2023 sessions: Easy on-device ML with MediaPipe, Supercharge your web app with machine learning and MediaPipe, and What's new in machine learning, and check out the official documentation over on developers.google.com/mediapipe.

We look forward to all the exciting things you make, so be sure to share them with @googledevs and your developer communities!