
Google Open Source Peer Bonus program announces second group of 2023 winners



We are excited to announce the second group of winners for the 2023 Google Open Source Peer Bonus Program! This program recognizes external open source contributors who have been nominated by Googlers for their exceptional contributions to open source projects.

The Google Open Source Peer Bonus Program is a key part of Google's ongoing commitment to open source software. By supporting the development and growth of open source projects, Google is fostering a more collaborative and innovative software ecosystem that benefits everyone.

This cycle, the Open Source Peer Bonus Program received 163 nominations, and the winners come from 35 different countries around the world, reflecting the program's global reach and the immense impact of open source software. Community collaboration is a key driver of innovation and progress, and we are honored to support and celebrate the contributions of these talented individuals through this program.

We would like to extend our congratulations to the winners! Included below are those who have agreed to be named publicly.

Winner – Open Source Project

Tim Dettmers – 8-bit CUDA functions for PyTorch
Odin Asbjørnsen – Accompanist
Lazarus Akelo – Android FHIR
Khyati Vyas – Android FHIR
Fikri Milano – Android FHIR
Veyndan Stuart – AndroidX
Alex Van Boxel – Apache Beam
Dezső Biczó – Apigee Edge Drupal module
Felix Yan – Arch Linux
Gerlof Langeveld – atop
Fabian Meumertzheim – Bazel
Keith Smiley – Bazel
Andre Brisco – Bazel Build Rules for Rust
Cecil Curry – beartype
Paul Marcombes – bigfunctions
Lucas Yuji Yoshimine – Camposer
Anita Ihuman – CHAOSS
Jesper van den Ende – Chrome DevTools
Aboobacker MK – CircuitVerse.org
Aaron Ballman – Clang
Alejandra González – Clippy
Catherine Flores – Clippy
Rajasekhar Kategaru – Compose Actors
Olivier Charrez – comprehensive-rust
John O'Reilly – Confetti
James DeFelice – container-storage-interface
Akihiro Suda – containerd, runc, OCI specs, Docker, Kubernetes
Neil Bowers – CPAN
Aleksandr Mikhalitsyn – CRIU
Daniel Stenberg – curl
Ryosuke TOKUAMI – Dataform
Salvatore Bonaccorso – Debian
Moritz Muehlenhoff – Debian
Sylvestre Ledru – Debian, LLVM
Andreas Deininger – Docsy
Róbert Fekete – Docsy
David Sherret – dprint
Justin Grant – ECMAScript Time Zone Canonicalization Proposal
Chris White – EditorConfig
Charles Schlosser – Eigen
Daniel Roe – Elk - Mastodon Client
Christopher Quadflieg – FakerJS
Ostap Taran – Firebase Apple SDK
Frederik Seiffert – Firebase C++ SDK
Juraj Čarnogurský – firebase-tools
Callum Moffat – Flutter
Anton Borries – Flutter
Tomasz Gucio – Flutter
Chinmoy Chakraborty – Flutter
Daniil Lipatkin – Flutter
Tobias Löfstrand – Flutter go_router package
Ole André Vadla Ravnås – Frida
Jaeyoon Choi – Fuchsia
Jeuk Kim – Fuchsia
Dongjin Kim – Fuchsia
Seokhwan Kim – Fuchsia
Marcel Böhme – FuzzBench
Md Awsafur Rahman – GCViT-tf, TransUNet-tf, Kaggle
Qiusheng Wu – GEEMap
Karsten Ohme – GlobalPlatform
Sacha Chua – GNU Emacs
Austen Novis – Goblet
Tiago Temporin – Golang
Josh van Leeuwen – Google Certificate Authority Service Issuer for cert-manager
Dustin Walker – google-cloud-go
Parth Patel – GUAC
Kevin Conner – GUAC
Dejan Bosanac – GUAC
Jendrik Johannes – Guava
Chao Sun – Hive, Spark
Sean Eddy – hmmer
Paulus Schoutsen – Home Assistant
Timo Lassmann – Kalign
Stephen Augustus – Kubernetes
Vyom Yadav – Kubernetes
Meha Bhalodiya – Kubernetes
Madhav Jivrajani – Kubernetes
Priyanka Saggu – Kubernetes
Daniel Finneran – kubeVIP
Junfeng Li – LanguageClient-neovim
Andrea Fioraldi – LibAFL
Dongjia Zhang – LibAFL
Addison Crump – LibAFL
Yuan Tong – libavif
Gustavo A. R. Silva – Linux kernel
Mathieu Desnoyers – Linux kernel
Nathan Chancellor – Linux kernel, LLVM
Gábor Horváth – LLVM / Clang
Martin Donath – Material for MkDocs
Jussi Pakkanen – Meson Build System
Amos Wenger – Mevi
Anders F Björklund – minikube
Maksim Levental – MLIR
Andrzej Warzynski – MLIR, IREE
Arnaud Ferraris – Mobian
Rui Ueyama – mold
Ryan Lahfa – nixpkgs
Simon Marquis – Now in Android
William Cheng – OpenAPI Generator
Kim O'Sullivan – OpenFIPS201
Yigakpoa Laura Ikpae – Oppia
Aanuoluwapo Adeoti – Oppia
Philippe Antoine – oss-fuzz
Tornike Kurdadze – Pinput
Andrey Sitnik – PostCSS (and others: Autoprefixer, browserslist, logux)
Marc Gravell – protobuf-net
Jean Abou Samra – Pygments
Qiming Sun – PySCF
Trey Hunner – Python
Will Constable – PyTorch/XLA
Jay Berkenbilt – qpdf
Ahmed El-Helw – Quran App for Android
Jan Gorecki – Reproducible benchmark of database-like ops
Ralf Jung – Rust
Frank Steffahn – Rust, ICU4X
Bhaarat Krishnan – Serverless Web APIs Workshop
Maximilian Keppeler – Sheets-Compose-Dialogs
Cory LaViska – Shoelace
Carlos Panato – Sigstore
Keith Zantow – spdx/tools-golang
Hayley Patton – Steel Bank Common Lisp
Qamar Safadi – Sunflower
Victor Julien – Suricata
Eyoel Defare – textfield_tags
Giedrius Statkevičius – Thanos
Michael Park – The Good Docs Project
Douglas Theobald – Theseus
David Blevins – Tomee
Anthony Fu – Vitest
Ryuta Mizuno – Volcago
Nicolò Ribaudo – WHATWG HTML Living Standard; ECMAScript Language Specification
Antoine Martin – xpra
Toru Komatsu – youki

We are incredibly proud of all of the nominees for their outstanding contributions to open source, and we look forward to seeing even more amazing contributions in the years to come. An additional thanks to Maria Tabak, who has helped lay the groundwork for and manage this program for the past 5 years!

By Mike Bufano, Google Open Source Peer Bonus Program Lead

Bazel 7 Release

Posted by the Google Bazel team

Bazel 7 is now released. Bazel is Google's open source build system for fast and correct builds. It has built-in support for building both client and server software, including client applications for both Android and iOS platforms. It also provides an extensible framework that you can use to develop your own build rules. Bazel builds almost all Google products, including Google Search, Gmail, and Google Docs.
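To give a flavor of what a build definition looks like, here is a minimal, illustrative BUILD file (the target and file names are hypothetical, not from the release notes):

    # BUILD: declares a C++ binary target and its sources.
    cc_binary(
        name = "hello",        # built with: bazel build //:hello
        srcs = ["hello.cc"],   # source files compiled into the binary
    )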


What’s new in Bazel 7?

Bazel 7 is the latest major release on the long-term support (LTS) track. It includes:

Bzlmod: Bzlmod, Bazel's new modular external dependency management system, is now enabled by default (i.e. --enable_bzlmod defaults to true). If your project doesn't have a MODULE.bazel file, Bazel will create an empty one for you. The old WORKSPACE mechanism will continue to work alongside the new Bzlmod-managed system. Learn more about what’s changed since Bazel 6 and what’s coming up in Bazel 8 and 9.
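For illustration, a minimal MODULE.bazel might look like the sketch below (the module name, dependencies, and versions are placeholders, not prescriptions):

    # MODULE.bazel: declares this project as a Bazel module.
    module(name = "my_project", version = "1.0")

    # Pull dependencies from the Bazel Central Registry instead of WORKSPACE rules.
    bazel_dep(name = "rules_cc", version = "0.0.9")
    bazel_dep(name = "protobuf", version = "21.7", repo_name = "com_google_protobuf")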

Build without the Bytes (BwoB): Build without the Bytes for builds using remote execution is now enabled by default (i.e. --remote_download_outputs defaults to toplevel). Bazel will no longer try to download any intermediate outputs from the remote server, but only the outputs of requested top-level targets instead. This significantly improves remote build performance. Learn more about BwoB.
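In command-line terms (the target and endpoint below are placeholders), the new default is equivalent to the first invocation, and the old behavior can still be requested explicitly:

    # Bazel 7 default: only download outputs of the requested top-level targets.
    bazel build //my:app --remote_executor=grpc://remote.example.com --remote_download_outputs=toplevel

    # Previous behavior: download every intermediate output as well.
    bazel build //my:app --remote_executor=grpc://remote.example.com --remote_download_outputs=all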

Merged analysis and execution (Skymeld): Project Skymeld aims to improve multi-target build performance by removing the boundary between the analysis and execution phases and allowing targets to be independently executed as soon as their analysis finishes.

Platform-based toolchain resolution for Android and C++: This change helps streamline the toolchain resolution API across all rulesets, obviating the need for language-specific flags. It also removes technical debt by having Android and C++ rules use the same toolchain resolution logic as other rulesets. Full details for Android developers are available in the Android Platforms announcement.
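As a sketch of what platform-based resolution looks like (the names here are illustrative), a platform is an ordinary BUILD target declaring constraint values, which Bazel matches against registered toolchains instead of consulting language-specific flags:

    # BUILD: a platform describing 64-bit Arm Android devices.
    platform(
        name = "android_arm64",
        constraint_values = [
            "@platforms//os:android",
            "@platforms//cpu:arm64",
        ],
    )

A build can then select it with, for example, --platforms=//:android_arm64 (or --android_platforms for Android app builds), letting toolchain resolution pick the matching toolchain.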


What's next?

Read the full release notes for Bazel 7, and follow along as we work together towards Bazel 8.

If you have any questions or feedback, or would like to share something you’ve built, reach out to [email protected]. We would love to hear from you!

A look back at BazelCon ’23 and the launch of Bazel 7

In October ‘23, the Google Bazel team hosted the 7th annual BazelCon, a gathering for the Bazel community and broader Build ecosystem. We welcomed enterprise users and program partners, companies building businesses on top of Bazel, as well as enthusiasts curious to learn more about this space. This year, BazelCon made its debut outside North America and was hosted in the Google Munich office.


BazelCon recap

The Bazel ecosystem is growing. This year, we had over 200 in-person external attendees, over 3K livestream views, and a record 120 proposals submitted by the community.

We started the conference with a keynote address by Mícheál Ó Foghlú (Engineering Director at Google), followed by a state-of-the-union address by John Field and Tobias Werth (Engineering Managers at Google).

The Bazel community showcased a series of technical and lightning main-stage talks. To highlight a few:

    • BMW shared insights into how they released several “Bazel cars”
    • JetBrains* announced the preview release of their new Bazel plugin for their IDEs
    • Booking.com walked through their journey of adopting Bazel, thereby reducing CI time from 22 minutes to under 2 minutes and container image size by 80%

Take a look at the published recordings of all of these talks at your leisure.

In addition to hearing from presenters, conference attendees also had the opportunity to engage with each other in smaller, more interactive forums. Through live Q&A with the Bazel team and several Birds of a Feather sessions on topics ranging from authoring rulesets, to collecting usage data responsibly, to IDE integrations, the Bazel community was able to provide direct feedback to the team and spark productive discussions. Make sure to check out published notes from these sessions.

At BazelCon, we also proudly announced the initial release candidate for Bazel 7, which has since launched.


What’s new in Bazel 7?

Bazel 7 is the latest major release on the long-term support (LTS) track. Many multi-year efforts have landed in this release. For example:

Bzlmod: Bzlmod, Bazel's new modular external dependency management system, is now enabled by default (i.e. --enable_bzlmod defaults to true). If your project doesn't have a MODULE.bazel file, Bazel will create an empty one for you. The old WORKSPACE mechanism will continue to work alongside the new Bzlmod-managed system. Learn more about what’s changed since Bazel 6 and what’s coming up in Bazel 8 and 9.

Build without the Bytes (BwoB): Build without the Bytes for builds using remote execution is now enabled by default (i.e. --remote_download_outputs defaults to toplevel). Bazel will no longer try to download any intermediate outputs from the remote server, but only the outputs of requested top-level targets instead. This significantly improves remote build performance. Learn more about BwoB.

Merged analysis and execution (Skymeld): Project Skymeld aims to improve multi-target build performance by removing the boundary between the analysis and execution phases and allowing targets to be independently executed as soon as their analysis finishes.

Platform-based toolchain resolution for Android and C++: This change helps streamline the toolchain resolution API across all rulesets, obviating the need for language-specific flags. It also removes technical debt by having Android and C++ rules use the same toolchain resolution logic as other rulesets. Full details for Android developers are available in the Android Platforms announcement.

Read the full release notes for Bazel 7.
 

Stay up-to-date with Bazel

We are thankful to everyone who played a role in making BazelCon ‘23 a big success - speakers, contributors, attendees, the planning committee, and more. We look forward to seeing you again next year!

In the meantime, follow along as we work together towards Bazel 8.

If you have any questions or feedback, or would like to share something you’ve built, reach out to [email protected]. We would love to hear from you!

By the Google Bazel team

*Copyright © 2023 JetBrains s.r.o. JetBrains and IntelliJ are registered trademarks of JetBrains s.r.o.

A New Foundation for AI on Android

Posted by Dave Burke, VP of Engineering

Foundation Models learn from a diverse range of data sources to produce AI systems capable of adapting to a wide range of tasks, instead of being trained for a single narrow use case. Today, we announced Gemini, our most capable model yet. Gemini was designed for flexibility, so it can run on everything from data centers to mobile devices. It's been optimized for three different sizes: Ultra, Pro and Nano.

Gemini Nano, optimized for mobile

Gemini Nano, our most efficient model built for on-device tasks, runs directly on mobile silicon, opening support for a range of important use cases. Running on-device enables features where the data should not leave the device, such as suggesting replies to messages in an end-to-end encrypted messaging app. It also enables consistent experiences with deterministic latency, so features are always available even when there’s no network.

Gemini Nano is distilled down from the larger Gemini models and specifically optimized to run on mobile silicon accelerators. Gemini Nano enables powerful capabilities such as high quality text summarization, contextual smart replies, and advanced proofreading and grammar correction. For example, the enhanced language understanding of Gemini Nano enables the Pixel 8 Pro to concisely summarize content in the Recorder app, even when the phone’s network connection is offline.

Pixel 8 Pro using Gemini Nano in the Recorder app to summarize meeting audio, even without a network connection.

Gemini Nano is starting to power Smart Reply in Gboard on Pixel 8 Pro, ready to be enabled in settings as a developer preview. Available now to try with WhatsApp and coming to more apps next year, the on-device AI model saves you time by suggesting high-quality responses with conversational awareness¹.

Smart Reply in Gboard within WhatsApp using Gemini Nano on Pixel 8 Pro.

Android AICore, a new system service for on-device foundation models

Android AICore is a new system service in Android 14 that provides easy access to Gemini Nano. AICore handles model management, runtimes, safety features and more, simplifying the work for you to incorporate AI into your apps.

AICore is private by design, following the example of Android’s Private Compute Core with isolation from the network via open-source APIs, providing transparency and auditability. As part of our efforts to build and deploy AI responsibly, we also built dedicated safety features to make it safer and more inclusive for everyone.

AICore manages model, runtime and safety features.

AICore enables Low Rank Adaptation (LoRA) fine tuning with Gemini Nano. This powerful concept enables app developers to create small LoRA adapters based on their own training data. The LoRA adapter is loaded by AICore, resulting in a powerful large language model fine tuned for the app’s own use-cases.

AICore takes advantage of new ML hardware like the latest Google Tensor TPU and NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon. AICore and Gemini Nano are rolling out to Pixel 8 Pro, with more devices and silicon partners to be announced in the coming months.

Build with Gemini

We're excited to bring together state-of-the-art AI research with easy-to-use tools and APIs for Android developers to build with Gemini on-device. If you are interested in building apps using Gemini Nano and AICore, please sign up for our Early Access Program.


¹ Available globally, only when using the United States English keyboard language. Read more for details.

Upcoming Android Events

Posted by Anirudh Dewani, Director of Android Developer Relations

One of our favorite things to do is connect with Android developers–like you–around the world, and it’s even more fun when we’re able to do so in person. Earlier this year, we had the opportunity to meet thousands of you at Google I/O and through global Google I/O Connect events in Miami, Amsterdam, Bengaluru and China, and we’re constantly inspired by your energy, your passion to build for Android, and your dedication to improve app quality.

But there are still more opportunities for us to connect at events later this year, as we bring the Android team and our Android Google Developer Expert friends to events around the world.

Here’s a snapshot:

droidcon London

Next week, on October 26 & 27, the Android team is bringing the excitement to droidcon London, with tech talk topics including app performance, screenshot testing, Compose, and more. We’ll also have a full lineup of subject matter experts hosting a fireside chat and office hours, ready to answer all your development and product questions. Learn more about the content and get your tickets on droidcon's website.

DevFest Season

DevFest 2023 has just kicked off, with nearly 500 DevFests already scheduled. DevFest is a community-led technology conference series, and is proud to embrace developers from all corners of the globe and diverse backgrounds. Conference agendas are tailored to suit the needs and interests of local developer communities and include talks, hands-on demos, workshops, and codelabs on the latest Google technologies.

This year, many Android GDEs will be speaking at hundreds of DevFest events around the world, with special appearances from the Android team at DevFests in New York, the Bay Area, London, and Singapore, among others.

Want to join us? Just navigate to any location on the interactive DevFest map and RSVP. It's that simple!

Stay in Touch

This was just a small peek of some of the events through the end of 2023. Don’t forget to check out our YouTube channel for all the latest news, technical talks, tutorials, tips and tricks, and follow and engage with us on X (formerly known as Twitter) and LinkedIn. We can’t wait to connect with thousands of you in person!

Tune in for another episode of #TheAndroidShow on October 19!

Posted by the Android team

In just a few days, on Thursday, October 19 at 10AM PT, we’ll be kicking off another episode of #TheAndroidShow, live on YouTube and on developer.android.com! In this episode, we’ll be showing how we’re making it faster and easier to build excellent apps across devices with live technical demos and more, plus a live fireside Q&A with the Android team!


Across the show, we’ll be covering the latest in Android development, including a look at the new Pixel Watch and the world of wearables, and gathering the Android team to demo tools and libraries for building for foldables and large screen devices with Compose, Android 14, Studio Bot, and more.

You'll hear the latest from the developers and engineers who build Android, including a conversation with Android’s Dave Burke.

Send us your burning questions using #AskAndroid

In this episode of #TheAndroidShow, we’ll also be hosting a live Q&A from the Googleplex in California, where we've assembled a team of experts ready to answer your questions live. Then, tune in on October 19 to see if your question is answered live, on the air!

#TheAndroidShow is your conversation with the Android developer community, this time hosted by Nick Butcher and Annyce Davis. Don’t forget to tune in on October 19 at 10AM PT, live on YouTube and on developer.android.com!

Make with MakerSuite – Part 1: An Introduction

Posted by Ray Thai – Product Manager, Labs

We’re always on the lookout for tools and technologies that bring innovative solutions to our developer community. Generative AI refers to the ability of machine learning models, such as Large Language Models (LLMs) trained on massive amounts of data, to learn patterns and create new content such as text, images, videos, or audio. These models are still under development, but we’re already seeing how models like PaLM 2 can enhance the quality of our code and make us more productive with tools like Project IDX and Android’s Studio Bot, or help us build innovative new user experiences like Bard. It’s exciting how simple it is to interact with these powerful LLMs, so we’re kicking off a 5-part series called “Make with MakerSuite” to show you how easy it is to get started.


What is MakerSuite?

MakerSuite is a fast, easy way to start building generative AI apps. It provides an efficient UI for prompting some of Google’s latest models and easily translates prompts into production-ready code you can integrate into your applications. Today, we’ve removed the waitlist so anyone in 179 countries and territories can use MakerSuite.

The art of prompting LLMs

Interacting with LLMs is as straightforward as crafting a plain language prompt, making it accessible to everyone. Prompts can be as simple as a single input, but you have the flexibility to provide additional context or examples, effectively guiding the model to produce an optimal response. You'll observe that you can achieve different outcomes by simply tweaking the way you phrase your prompts. To harness the power of these models safely and effectively, careful crafting and iterative refining become essential.

Choosing the Right Prompt Type: Text, Data, or Chat?

When it comes to using MakerSuite, there are three prompt types to help you achieve your goals.

1. Text Prompts: Unleash Your Creativity

Text prompts in MakerSuite provide a flexible and freeform experience that allows you to express yourself creatively through your prompts. Whether you're a beginner or an experienced user, text prompts offer a simple way to interact with the model.

Generating ideas for a dinner party using a text prompt in MakerSuite

2. Data Prompts: Structured Few-Shot Prompts

Data prompts are the go-to choice when you have examples to help you specify precisely what you want from the model. They are perfect for applications that require a consistent input and output format, such as data generation, translation, and more.

A reverse dictionary using a data prompt in MakerSuite


3. Chat Prompts: Building Conversational Experiences

If your goal is to create interactive chatbots or to simulate conversations, chat prompts are the solution! These prompts enable you to build engaging and interactive conversational experiences.

Chatting with a snowman using a chat prompt in MakerSuite

No matter which prompt type you choose, you’ll find how easy it is to use MakerSuite to prompt some of the latest models from Google to build exciting, new user experiences.


We can’t wait to see what you build

AI is fundamentally reshaping the landscape of developer work and creativity, and we’re committed to empowering our developer community with access to cutting-edge models. We believe an open and collaborative developer community fuels progress and we're thrilled to see companies like LlamaIndex and Chroma harnessing MakerSuite as building blocks for their own innovations.

You can sign up to get started with MakerSuite in 179 countries and territories. You’ll find sample prompts for inspiration, or just start prompting to see what the model generates. Once you’re happy with your configuration, easily export to code from MakerSuite and start integrating it into your applications, products, and services. If you prefer to prompt our models directly with the API, sign up and grab your API key from MakerSuite to start!
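To give a feel for the API path, here is a minimal sketch using the google-generativeai Python package (the model name, parameters, and prompt are illustrative; check the code MakerSuite exports for your own prompt):

    import google.generativeai as palm

    # Configure the client with the API key obtained from MakerSuite.
    palm.configure(api_key="YOUR_API_KEY")

    # A simple text prompt, like the dinner-party example above.
    completion = palm.generate_text(
        model="models/text-bison-001",
        prompt="Suggest three creative themes for a dinner party.",
        temperature=0.7,
        max_output_tokens=256,
    )

    print(completion.result)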

Supporting Black tech entrepreneurs through the fourth Google for Startups Accelerator: Black Founders program

Posted by Lauren O’Neil, Startup Developer Ecosystem Lead, and Matt Ridenour, Head of US Startup Ecosystem

We are thrilled to announce our latest cohort of the Google for Startups Accelerator: Black Founders program as it embarks on its fourth year serving Black founders in the U.S. and Canada.

The 12 companies selected for this year’s cohort reflect the trends of the broader application pool - startups focused on improving healthcare outcomes, protecting the environment, reducing consumer energy consumption, and removing barriers to financial resources and home ownership (just to name a few). Additionally, these companies are utilizing emerging AI technologies to streamline and simplify customer, consumer, and professional experiences at all levels.

"This year's cohort represents the massive opportunity that Google has to invest in the future of tech entrepreneurship, and how Google supports a broader ecosystem of driving innovation in key industries. It’s truly impressive to see how this cohort is tackling some of the world’s toughest problems, from energy to medicine to finance, and enabling the creator economy for games, music, and content."  
– Jeanine Banks, VP & General Manager, Developer X and Head of Developer Relations.


Hear from a few founders who will participate in this 10-week program, commencing September 26th.

Tell us the story of your startup:

Seyi Adesola, Cofounder & CEO of AfroHealth: “Losing my mom to a preventable illness ignited my journey into healthcare, leading me to become a professional healthcare practitioner while providing individual health coaching to my church community, family and friends. AfroHealth was formed as an expansion of this vision, an online platform to provide Black individuals with culturally-sensitive online health coaching.”

Nana Wilberforce, Founder & CEO of Akeptus: “In the United States alone, one-third of households grapple with monthly energy bills, with 20% on the brink of losing access, and this hardship disproportionately affects minority groups. Akeptus was founded to empower households and enterprises to control their energy costs via AI solutions that simplify energy management.”

Nicole Clay, Cofounder & CMO of Hue: “My co-founders and I came together as three women across the skin tone spectrum who struggled with representation in beauty and finding products for our unique complexions. We are an e-commerce technology company that matches shoppers to real people who share the same skin tone, skin type, or preferences as they do.”

What are the primary technical challenges you’re hoping to address during the program?

Seyi: “During the program, our first priority is perfecting the integration of Artificial Intelligence with our platform. We hope to utilize the full potential of Google's ML and TensorFlow frameworks to improve health outcomes in the Afro community.”

Nana: “We're most excited about the accelerator for the hands-on Cloud and AI expertise to refine our algorithms and infrastructure, allowing us to scale our impact on sustainability.”

Nicole: “During the program, we are looking to apply AI/ML to create and optimize video content, and leverage AI to ease the process for everyday end-users to create their own video reviews.”

Learn more about all 12 participating startups below:

AfroHealth (Dallas, TX) is a digital health & wellness platform utilizing AI to provide personalized healthcare coaching to Black and Brown communities.

Akeptus (Glenwood, MD) is an AI-powered energy management platform that provides real-time insights and control to optimize usage and energy costs, reduce waste, and strengthen grid resilience.

CareCopilot (New York, NY) is a curated marketplace of key services that families need when caring for elderly loved ones.

eBanqo (Alpharetta, GA) is a customer engagement AI platform that empowers businesses of all sizes to provide instant and seamless service to their customers across all channels, 24/7.

Expedier (Hamilton, ON) is the first Black-led, Black-Owned & BIPOC facing digital bank in Canada serving six million underserved BIPOC Canadians. (learn more about Expedier on our Google Canada blog!)

Hue (San Francisco, CA) is an AI-powered video platform that helps brands generate and display short-form video reviews on e-commerce.

IndyGeneUS (Washington, D.C.) is a precision medicine company using next-generation sequencing technologies to identify unique gene variants in diseases that affect underrepresented populations.

Kwema (St. Louis, MO) is a smart badge reel for healthcare professionals that empowers clinicians to unobtrusively call for help when facing patient violence.

My Home Pathway (New York, NY) is a technology platform that guides first-time home buyers to approval faster by analyzing data and providing individualized recommendations.

Pagedip (Boulder, CO) is a no-code content publishing app that allows users to create beautifully efficient, powerfully effective and demonstrably measurable documents that work better for teams and their customers.

Plannly Health (Scottsdale, AZ) is a patent-pending risk management software dedicated to mitigating the risk of human errors in hospitals, by offering a digital health solution that addresses provider stress, burnout, and critical life events or changes.

Rivet (Chicago, IL) is an AI-driven platform that helps creator teams use machine learning to find and understand their high-potential fans and provides actions and automations to unlock more revenue from them.

Find more information at g.co/blackfoundersaccelerator.

How We Made SPACE INVADERS: World Defense, an AR game powered by ARCore

Posted by Dereck Bridie, Developer Relations Engineer, ARCore and Bradford Lee, Product Marketing Manager, Augmented Reality

To celebrate the 45th anniversary of “SPACE INVADERS,” we collaborated with TAITO, the Japanese developer of the original arcade game, and UNIT9 to launch “SPACE INVADERS: World Defense,” an immersive game that takes advantage of the most advanced location-based AR technology. Players around the world can go outside to explore their local neighborhoods, defend the Earth from virtual Space Invaders that spawn from nearby structures, and score points by taking them down – all with augmented reality.

The game is powered by our latest ARCore technology - the Geospatial API, Streetscape Geometry API, and Geospatial Creator. We’re excited to show you behind the scenes of how the game was developed and how we used our newest features and tools to design first-of-its-kind procedural, global AR gameplay.

Geospatial API: Turn the world into a playground

Geospatial API enables you to attach content remotely to any area mapped by Google Street View and create richer and more robust immersive experiences linked to real-world locations on a global scale. SPACE INVADERS: World Defense is available in over 100 countries in areas with high Visual Positioning Service (VPS) coverage in Street View, adapting the gameplay to busy urban environments as well as smaller towns and villages.

For players who live in areas without VPS coverage, we have recently updated the game to include our new mode called Indoor Mode, which allows you to defend the Earth from Space Invaders in any setting or location - indoors or outdoors.

The new Indoor Mode in Space Invaders brings the immersive gameplay to any indoor setting

Creating the initial user flow

ARCore Geospatial API uses camera images from the user’s device to scan for feature points and compares those to images from Google Street View in order to precisely position the device in real-world space.

Geospatial API is based on VPS, with tens of billions of images in Street View, enabling developers to build world-anchored experiences remotely in over 100 countries

This requires the user to hold up their phone and pan around the area such that enough data is collected to accurately position the user. To do this, we employed a clever technique to get users to scan the area, by requiring them to track the spaceship in the camera’s field of view.

To get started, follow the spaceship to scan your local surroundings

Using this user flow, we continually check whether the Geospatial API has gathered enough data for a high quality experience:

if (earthManager.EarthTrackingState == TrackingState.Tracking) {
    var yawAcc = earthManager.CameraGeospatialPose.OrientationYawAccuracy;
    var horiAcc = earthManager.CameraGeospatialPose.HorizontalAccuracy;
    bool yawIsAccurate = yawAcc <= 5;
    bool horizontalIsAccurate = horiAcc <= 10;
    return yawIsAccurate && horizontalIsAccurate;
}

Transforming the environment into the playground

After scanning the nearby area, the game uses mesh data from the Streetscape Geometry API to algorithmically make playing the game in different locations a unique experience. Every real-world location has its own topography and city layout, affecting the gameplay in its own unique way.

Gameplay varies depending on your location - from towns in the Czech Republic (left) to cities like New York (right)

In the game, Space Invaders can spawn from buildings, so we constructed test cases using building geometry obtained from different parts of the world. This ensures that the game performs optimally in diverse environments, from local villages to bustling cities.

A visualization of how the algorithm places portals in the real world

Entering the Invader’s dimension

From our research studies, we learned that it can be tiring for users to keep holding their hands up for a prolonged period of time for an augmented reality experience. This knowledge influenced our gameplay development - we created the Invader’s dimension to give players a chance to relax their phone arm and improve user comfort.

Our favorite ‘wow’ moment that really shows you the power of the Geospatial API is the transition between real-world AR and virtually generated, 3D dimensions.

Gameplay transition from real-world AR to the 3D dimension

This effect is achieved by blending the camera feed with the virtual environment shader that renders the buildings and terrain in the distinct wireframe style.

The Invader’s dimension appears around the player in the Unity Editor, seamlessly transitioning between the two modes

After the player enters the Invader’s dimension, the player’s spaceship flies along an algorithmically generated path through their local neighborhood. This is done by creating a depth image of the user’s environment from an overhead camera. In this image, the red channel represents buildings and the blue channel represents space that could potentially be used for the flight path. This image is then used to generate a grid with points that the path should follow, and an A* search algorithm is used to solve for a path that follows all the points.

Finally, the generated A* path is post-processed to smooth out any potential jittering, sharp turns, and collisions.

To smooth out the spaceship’s pathway, jitter is removed by sampling the path over a set interval of nodes. Then, we determine whether there are any sharp turns by analyzing the angles along the path. If a sharp turn is present, we introduce two additional points to round it out. Lastly, we check whether the smoothed path would collide with any obstacles, and adjust it to fly over them if detected. The sketch below illustrates these steps.
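The game itself is built in Unity, but the smoothing pass is easy to express independently; the short Python sketch below is our illustration of the steps described above (the sampling interval, turn threshold, and 2D point format are assumptions, not the game's actual values, and the final collision pass is only noted in a comment):

    import math

    def smooth_path(points, sample_every=3, max_turn_deg=60.0):
        # 1) Remove jitter: keep every Nth node of the raw A* path.
        sampled = points[::sample_every]
        if sampled[-1] != points[-1]:
            sampled.append(points[-1])

        # 2) Round out sharp turns: where the heading change between
        #    consecutive segments exceeds the threshold, replace the
        #    corner with two midpoints to soften it.
        smoothed = [sampled[0]]
        for prev, curr, nxt in zip(sampled, sampled[1:], sampled[2:]):
            h1 = math.atan2(curr[1] - prev[1], curr[0] - prev[0])
            h2 = math.atan2(nxt[1] - curr[1], nxt[0] - curr[0])
            turn = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi
            if abs(math.degrees(turn)) > max_turn_deg:
                smoothed.append(((prev[0] + curr[0]) / 2, (prev[1] + curr[1]) / 2))
                smoothed.append(((curr[0] + nxt[0]) / 2, (curr[1] + nxt[1]) / 2))
            else:
                smoothed.append(curr)
        smoothed.append(sampled[-1])

        # 3) A final pass (omitted here) would test the smoothed path
        #    against building geometry and lift it over any obstacles.
        return smoothed

    # Example: a path with a 90-degree corner at (2, 0) gets rounded out.
    print(smooth_path([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], sample_every=1, max_turn_deg=45))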

A visualization of the depth map and a generated sample path in the Invader’s dimension

Creating a global gaming experience

A key takeaway from building the game was that the complexity of the contextual generation required worldwide testing. With Unity, we brought multiple environments into test cases, which allowed us to rapidly iterate and validate changes to these algorithms. This gave us confidence to deploy the game globally.

Visualizing SPACE INVADERS using Geospatial Creator

We used Geospatial Creator, powered by ARCore and Photorealistic 3D Tiles from Google Maps Platform, to validate how virtual content, such as Space Invaders, would appear next to specific landmarks within Tokyo in Unity.

With Photorealistic 3D Tiles, we were able to visualize Invaders in specific locations, including the Tokyo Tower in Japan

Future updates and releases

Since the game’s launch, we have heard our players’ feedback and have been actively updating and improving the gameplay experience.

  • We have added a new gameplay mode, Indoor Mode, which allows all players without VPS coverage or players who do not want to use AR mode to experience the game.
  • To encourage users to play the game in AR, scores have been rebalanced to reward players who play outside more than players who play indoors.

Download the game on Android or iOS today and join the ranks of an elite Earth defender force to compete in your neighborhood for the highest score. For the latest updates, follow us on Twitter (@GoogleARVR) to hear how we are improving the game. Plus, visit our ARCore and Geospatial Creator websites to learn how to get started building with Google’s AR technology.

Latest ARTwork on hundreds of millions of devices

Posted by Serban Constantinescu, Product Manager

Wouldn’t it be great if each update improved start-up times, execution speed, and memory usage of your apps? Google Play system updates for the Android Runtime (ART) do just that. These updates deliver performance improvements and the latest security fixes, and they unify the core OpenJDK APIs across hundreds of millions of devices, including all Android 12+ devices and, soon, Android Go.

ART is the engine behind the Android operating system (OS). It provides the runtime and core APIs that all apps and most OS services rely on. Both Java and Kotlin are compiled down to bytecode executed by ART. Improvements in the runtime, compiler, and core API benefit all developers, making app execution faster and bytecode compilation more efficient.

While parts of Android are customizable by device manufacturers, ART is the same for all devices, and Google Play system updates enable a path to modular updates.

Modularizing the OS

Android was originally designed for monolithic updates, which meant that OS components did not need to have clear API boundaries. This is because all dependent software would be built together. However, this made it difficult to update ART independently of the rest of the OS. Our first challenge was to untangle ART's dependencies and create clear, well-defined, and tested API boundaries. This allowed us to modularize ART and make it independently updatable.


As a core part of the OS, ART had to blaze new trails and engineer new OS boundaries. These new boundaries were so extensive that manually adding and updating them would be too time-consuming. Therefore, we implemented automatic generation of these boundaries through introspection in the build system.

Another example is stack unwinding, which reports the functions last executed when an issue is detected. Before modularizing the OS, all stack unwinding code was built together and could change across Android versions. This made the transition even more challenging: since there is only one version of ART delivered to many versions of Android, we had to create a new API boundary and design it to be forward-compatible with newer versions of the ART APEX module on devices that are no longer getting full OS updates.

Recently, for Android 14, we refactored the interface between the Package Manager, the service that determines how to install and update apps, and ART. This moves the OS boundary from the ART dex2oat command line to a well-defined interface that enables future optimizations, such as finer-grained control over the compilation mode.

ART updatability also introduced new challenges. For example, the collection of Java libraries, referred to as the Boot Classpath, had to be securely recompiled to ensure good performance. This required introducing a new secure state for compilation during boot as well as a fallback JIT compilation mode.

On older devices, the secure compilation happens on the first reboot after an ART update. On newer devices that support the Android Virtualization Framework, the compilation happens while the device is idle, in an enclave called Isolated Compilation – saving up to 20 seconds of boot-time.

Testing the ART APEX module

The ART APEX module is a complex piece of software with an order of magnitude more APIs than any other APEX module. It also backs a quarter of the developer APIs available in the Android SDK. In addition, ART has a compiler that aims to make the most of the underlying hardware by generating chipset-specific instructions, such as Arm SVE. This, together with the multiple OS versions on which the ART APEX module has to run, makes testing challenging.

We first modularized the testing framework from per-platform releases (e.g., Android CTS) to per-module testing. We did this by introducing an ART-specific Mainline Test Suite (MTS), which tests both the compiler and runtime, as well as core OpenJDK APIs, while collecting code coverage statistics.

Our target is 100% API coverage and high line coverage, especially for new APIs. Together with HWASan and fuzzing, all of the tests described above contribute to a massive test load that needs to be sharded across multiple devices to ensure that it completes in a reasonable amount of time.


We test the upcoming ART release every day by compiling over 18 million APKs and running app compatibility tests, and startup, performance, and memory benchmarks on a variety of Android devices that replicate the diversity of our ecosystem as closely as possible. Once tests pass with all possible compilation modes, all Garbage Collector algorithms, and supported OS versions, we begin gradually rolling out the next ART release.

Benefits of ART Google Play system updates

By updating ART independently of OS updates, users get the latest performance optimizations and security fixes as quickly as possible, while developers get OpenJDK improvements and compiler optimizations that benefit both Java and Kotlin.

As shown in the graph below, the runtime and compiler optimizations in the ART 13 update delivered real-world app start-up improvements of up to 30% on some devices.

Average app start-up time, in milliseconds, improving by up to 30% over 12 weeks on devices running the latest ART Google Play system update

ART updates allow us to frequently deploy fixes with little additional effort from our ecosystem partners. They include propagating upstream OpenJDK fixes to Android devices as quickly as possible, as well as runtime and compiler security fixes, such as CVE-2022-20502, which was detected by our automated fuzzing tests.

For developers, ART updates mean that you can now target the latest programming features. ART 13 delivered OpenJDK 11 core language features, which was the fastest-ever adoption of a new OpenJDK release on Android devices.

What’s next

In the coming months, we'll be releasing ART 14 to all compatible devices. ART 14 includes OpenJDK 17 support along with new compiler and runtime optimizations that improve performance while reducing code size. Stay tuned for more details on ART 14!

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.