Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Behind the scenes: How the Google I/O photo booth was made

Posted by the Google Developers team

A closer look at building a Flutter web app with Google Developer products

If you attended Google I/O this year, you probably stopped by the Google I/O photo booth for a selfie with our Google Developer mascots: Flutter’s Dash, Android Jetpack, Chrome’s Dino, and Firebase’s Sparky. If you didn’t, it’s not too late to jump in, take a selfie, and share it on social media! We loved seeing all of the pictures you posted and your favorite props! Want to learn more about building a camera plugin, layouts, and gestures used in a photo booth for Flutter on the web?

Android, Dino, Dash, and Sparky all gathered around the photo booth

It took a combination of Google developer products to make the photo booth successful. The Flutter and Firebase teams joined forces to build a best-in-class example of Flutter on the web that uses Firebase for hosting, auth, performance monitoring, and social sharing. Take a closer look at how the photo booth was built here, then grab the open source code on GitHub!

Flutter team members having fun in the photo booth!

Control your Mirru prosthesis with MediaPipe hand tracking

Guest post by the Engineering teams at Mirru and Tweag

What is the Mirru App?

Mirru App logo

Mirru is a free and open source Android app, currently under development, that lets users control robotic prosthetic hands via hand tracking. With our app, a user can instantly mirror grips from their sound hand onto a robotic one, which can be 3D-printed and self-assembled at low cost. With Mirru, we want to provide a cheap, intuitive, and open end-to-end alternative to existing technology, which is costly, cumbersome, and proprietary.

Figure 1: A demonstration of using MediaPipe hand tracking to move a robotic hand’s fingers with the Mirru app.

The Mirru team is a collaboration between Violeta López and Vladimir Hermand, two independent designers and technologists currently based in Paris. To kickstart the project, the team took part in Tweag’s Open Source Fellowship program which provided funding, mentorship and data engineering expertise from one of their engineers, Dorran Howell. The fellowship helped get Mirru launched from the ground-up.

Our goal for the 3-month fellowship was to develop an initial version of the Android app that can control any Bluetooth-enabled open source hand using computer vision techniques, and to make the app available for free on the Google Play Store so anyone can print their own hand, assemble it, and download the app. With the help of MediaPipe, we were able to quickly prototype our app without having to build our own machine learning model, as we didn't have the resources or training data to do so.

Why use hand tracking?

Using your phone and a front-facing camera with hand tracking opens up a new, affordable, accessible, and versatile way to control prosthetics.

Let's say I'm a left-hand amputee who owns a robotic prosthesis. Every day, I need my prosthetic hand to actuate a lot of different grip patterns. For example, I need to use a pinch or tripod grip to pick up small objects, or a fist grip to pick up larger objects like a piece of fruit or a cup. I change and execute these grip patterns via myoelectric muscle sensors that allow me to, for example, open and close a grip by flexing and relaxing my upper-limb muscles. These myoelectric muscle sensors are the main interface between my body and the prosthesis.

However, living with them is not as easy as it seems. Controlling the myoelectric sensors can take a lot of time to get used to, and many users never do. It can also be quite expensive to get these sensors fitted by a prosthetist, especially for people in developing countries or anyone without health insurance. Finally, the number of grips on many devices currently on the market is limited to fewer than ten, and only a few models come with ways to create custom grips, which are often cumbersome.

Mirru provides an alternative interface. Using just their phone, a tool many have access to, a user can digitally mirror their sound hand in real time and communicate with their prosthesis in an intuitive way. This removes the need for an expensive fitting by a prosthetist and enables the user to quickly program an unlimited number of grips. For now, Mirru stays away from electromyography altogether, as reliable muscle sensors are expensive. The programmed grips therefore need to be triggered via the Android phone, which is why this first version of our app is more suited for activities like sweeping, holding a book while reading it, or holding a cup or shopping bag. In the future we hope to combine myoelectric sensors with hand tracking to get the benefits of both.

Programming a grip with the Mirru app looks like the following: Let's say that I want to grab an object with my robotic hand. I bring my prosthesis near the object, form the desired grip with my sound hand in front of my Android phone, and Mirru mirrors it in real time to the prosthesis. I then lock my prosthesis into this new grip and free up my sound hand. Finally, I might save this grip for later use and add it to my library of grips.

Figure 2: A user tester using hand-tracking on their phone to program their prosthesis’s grip to pick up a measuring tape and measure with the other hand.

The Brunel Hand and the Mirru Arduino Sketch

In order to accomplish our goal of allowing as many people as possible to print, assemble, and control their own hand, we designed the Mirru Android app to be compatible with any robotic hand that is controlled by a Bluetooth-enabled Arduino board and servo motors.

For our project, we printed and assembled an open source robotic hand called the Brunel Hand, made by Open Bionics. First, we printed the parts from the Brunel Hand's 3D-printable files, which are made available under the CC Attribution-ShareAlike 4.0 International License. We then bought the necessary servos, springs, and screws to assemble the hand. Between printing the parts and buying the components, the hand costs around €500 to build.

The Brunel Hand comes with myoelectric-based firmware and a PCB board developed by Open Bionics, but since the hand is in essence just 4 servo motors, any microcontroller could be used. We ended up using an Adafruit ESP32 Feather board for its Bluetooth capabilities and created an Arduino sketch that can be downloaded, customized, and uploaded by anyone who is printing and assembling their own hand. They can then download the Mirru app to use as the control interface for their printed hand.

Hand-tracking with MediaPipe

There are many computer vision solutions available for hand tracking that could be used for this project, but we needed a fast, open source solution that didn’t require us to train our own model, and that could be used reliably on a portable device such as a phone.

MediaPipe provides great out-of-the-box support for hand tracking, and since we didn't have the training data or resources available to create a model from scratch, it was perfect for our team. We were able to build the Android example apps easily and were excited to find that the performance was promising. Even better, no tweaking of the ready-made hand tracking model or the graphs was necessary, as the hand landmark model provided all the necessary outputs for our prototype.

When testing the prosthesis with real users, we were happy to hear that many of them were impressed with how fast the app was able to translate their movements, and that nothing else on the market allows you to make custom grips as quickly and on the fly.

Figure 3: A user tester demonstrates how quickly the MediaPipe hand-tracking can translate her moving fingers to the movement of her prosthesis’s fingers.

Translating 3D MediaPipe points into inputs for Robotics

To achieve the goals of the Mirru app, we need to use hand tracking to independently control each finger of the Brunel Hand in real-time. In the Brunel Hand, the index, middle, and ring fingers are actuated using servos that move at an angle from 0 to 180 degrees; 0 means the finger is fully upright and 180 means the finger is fully flexed down. As we lacked adequate training data to create a model from scratch that could calculate these servo angles for us, we opted to use a heuristic to relate the default hand tracking landmark outputs to the inputs required by our hardware for an initial version of our prototype.

Figure 4: In the lab testing the translation of the outputs to inputs with the app and the prototype.

We were initially unsure whether the estimated depth (Z) coordinate in the 3D landmarks would be accurate enough for the translation of inputs or if we would be limited to working in 2D. As an initial step, we recorded an example dataset and spun up a visualization of the points in a Jupyter Notebook with Plotly. We were immediately impressed by the quality and accuracy of the coordinates, considering that the technology only uses a single camera without any depth sensors. As noted in the MediaPipe documentation, the Z coordinates have a slightly different scale than the X/Y coordinates, but this didn't seem to pose a significant challenge for our prototype.

Figure 5: A data visualization of the hand made up of 21 3D hand landmarks provided by MediaPipe.

Given the accuracy of the 3D landmarks, we opted for a 3D calculation to relate landmark outputs to the inputs required by the prosthesis. In our approach, we compute the angle of each finger in relation to the palm: the angle between the finger direction and the normal of the plane defined by the palm. An angle of 0° corresponds to maximum closure of the finger, and an angle of 180° indicates a fully extended finger. The finger direction is simply the vector from the landmark at the base of the finger to the landmark at its tip.

Figure 6: Diagram showing the 3D landmarks and which ones we used to calculate the finger direction vector, the palm normal, and the angle that both form.

We calculate the palm normal by selecting three points in the plane of the palm. Using landmark 0 as the reference point, we calculate the vectors for side 1 and side 2 of the palm plane and take their cross product to get the palm normal. Finally, we compute the angle between the finger direction and the palm normal, which gives us an angle in radians that we convert to degrees.
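For illustration, here is a minimal Dart sketch of that calculation. The Vector3 helper is our own, and the landmark indices (0 for the wrist, 5 and 8 for the base and tip of the index finger, 17 for the base of the pinky) follow the MediaPipe hand landmark layout; the actual Mirru code may be structured differently.

import 'dart:math';

// Minimal 3D vector helper (an assumption for this sketch; not part of
// MediaPipe or Mirru).
class Vector3 {
  final double x, y, z;
  const Vector3(this.x, this.y, this.z);

  Vector3 operator -(Vector3 o) => Vector3(x - o.x, y - o.y, z - o.z);
  Vector3 cross(Vector3 o) =>
      Vector3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x);
  double dot(Vector3 o) => x * o.x + y * o.y + z * o.z;
  double get length => sqrt(dot(this));
}

// Angle in degrees between a finger and the palm normal.
// `landmarks` holds the 21 MediaPipe hand landmarks as 3D points:
// wrist = 0, index MCP = 5, index tip = 8, pinky MCP = 17.
double fingerAngleDegrees(List<Vector3> landmarks,
    {int fingerBase = 5, int fingerTip = 8}) {
  final wrist = landmarks[0];
  // Two sides of the palm plane, both starting at the wrist (landmark 0).
  final side1 = landmarks[5] - wrist; // wrist -> index MCP
  final side2 = landmarks[17] - wrist; // wrist -> pinky MCP
  final palmNormal = side1.cross(side2);

  // Finger direction: from the base of the finger to its tip.
  final fingerDir = landmarks[fingerTip] - landmarks[fingerBase];

  // Angle between finger direction and palm normal, radians -> degrees.
  final cosTheta =
      fingerDir.dot(palmNormal) / (fingerDir.length * palmNormal.length);
  final radians = acos(cosTheta.clamp(-1.0, 1.0));
  return radians * 180 / pi;
}

The resulting angle can then be mapped onto the 0 to 180 degree range expected by the corresponding servo.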

We had to do some extra processing to match the degrees of freedom for the thumb on our prosthetic hand. The thumb moves in more complex ways than the rest of the fingers. In order to get our app to work with the thumb, we did similar calculations for thumb direction and the palm normal, but we used different landmarks.

Once we calculate the servo angles on the Android phone, we send those values via Bluetooth to the Arduino board, which moves the servos to the proper positions. Because there is some noise in the model outputs, we add a smoothing step to the pipeline, which keeps the movements of the robotic fingers from becoming too jittery for precise grips.
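One simple way to implement such a smoothing step is an exponential moving average, sketched below in Dart; this is an illustrative choice, not necessarily the filter used in the Mirru pipeline.

// Exponential moving average to smooth noisy servo angles
// (an illustrative choice; the Mirru pipeline may use a different filter).
class AngleSmoother {
  final double alpha; // 0 < alpha <= 1: higher = less smoothing, lower latency
  double? _smoothed;

  AngleSmoother({this.alpha = 0.3});

  double smooth(double rawAngle) {
    final previous = _smoothed;
    _smoothed = previous == null
        ? rawAngle
        : alpha * rawAngle + (1 - alpha) * previous;
    return _smoothed!;
  }
}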

Figure 7: A user tester makes a pinch grip on her prosthesis with the Mirru app.

Summary

The Mirru app and Mirru Arduino Sketch are designed to allow anyone to control an open source prosthesis with their sound hand and an Android phone. This is a novel and frugal alternative to muscle sensors, and MediaPipe has proven to be the right tool for the essential hand tracking component of the full application. The Mirru team was able to get started quickly with MediaPipe's out-of-the-box solutions without having to gather any training data or design a model from scratch. The speed of the real-time translation from hand tracking points to the robotic hand especially excited our users in testing sessions and opens up many possibilities for the future of prostheses.

We see exciting potential for combining the MediaPipe hand tracking features with existing myoelectric prostheses, which could open up powerful and advanced ways to create and save custom prosthesis grips in real time. Also, with the help of MediaPipe, we have been able to provide an open source alternative to proprietary prostheses without the need for myoelectric sensors or a visit to a prosthetist, at a cost that is much lower than what is already on the market, and whose source code can be customized and built upon by other developers. Our team is excited to see what other ideas the open source community might come up with, and to see what hand tracking can bring to users and manufacturers of prostheses.

As for the current state of the Mirru application, we have yet to implement recording and saving moving gestures, which are longer sequences than the static grip positions. For example, imagine being able to record the finger movements for playing a bass line on a piano, like a loopable animated GIF. There is a realm of possibilities for prostheses waiting to be explored, and we're really happy that MediaPipe gives us access to it.

We are looking for contributors. If you have ideas or comments about this application, please reach out to [email protected], or visit our GitHub.

This blog post is curated by Igor Kibalchich, ML Research Product Manager at Google AI.

A conversation with Hebe He, a developer from Guangzhou

Posted by Brian Shen, Program Manager, Google Developers

Google Developer Groups are one of the largest community networks of developers in the world. Every group has an organizer that helps curate events based on the interests of their local developer community.

As we continue to explore how different Google Developer Groups build their communities, we interviewed Hebe He, an organizer of Google Developer Group Guangzhou in China. Learn more about how she is building the developer scene in China, thinking up new events for her community, and more below.

Hebe He, an organizer of Google Developer Group Guangzhou in China.

Tell us about yourself.

I am Hebe from China and I'm a native of Guangzhou. I'm the organizer of GDG Guangzhou, as well as an ambassador for Women Techmakers (WTM). I work at one of China's new electric-vehicle brands, where I'm responsible for the intelligent business operation of the Internet of Vehicles. I'm relatively outgoing and active, so I really like to deal with different people, whether it's at work or in other activities.

How did you learn about Google Developer Groups?

In 2014, I participated in GDG Guangzhou DevFest for the first time by coincidence and met the founder of GDG Guangzhou. Afterward, I joined the founder's company and volunteered at many GDG programs. In 2017, I officially became an organizer after the existing organizers recognized my ability and desire to contribute more to the GDG Guangzhou community.

Tell us more about Guangzhou and the developer community there.

Our community members are talented, passionate, and amazing. I see all kinds of possibilities in them. They're always excited for every event we hold, remain enthusiastic about Google's technological innovation, and are particularly interested in Android, Kotlin, and Flutter.

What are events like in your community?

We highly value feedback from event participants, who are interested in a wide range of topics. For this reason, we generally use 15% of every event to cover non-technical topics, such as entrepreneurship, business management, and careers. For more comprehensive activities, such as DevFest, we increase the amount of non-technical content to roughly 30%.

What is your Google Developer Group focused on right now?

We devote most of our energy to improving the quality of activities. We try to add more elements to the event to strengthen the interaction of participants in hopes of improving the feedback mechanism and gaining more valuable suggestions for future event optimization. We also try to improve the quality of guests and themes, and pay more attention to event details, such as event announcements, registration, and check-in.

What’s your favorite community memory from a Google Developer Group event?

The memory that touches me the most is the construction of WTM Guangzhou. From the first event with only 80 developers to the audience of more than 500 people in recent years, it represents the recognition of, and support for, our events. There are many people who come to participate every year; some are actively encouraging their friends to participate and others are even urging us to hold events. They feel honored to be invited to our events and their enthusiasm endured during the pandemic.

What's next for you and your Google Developer Group?

There's still lots of room to grow in our community. We hope that we can continue to develop a Google Developer Group that reflects the best of Guangzhou. We also hope to find better ways to capture the experience shared by our speakers and the value contributed by community members.

If you want to grow your career and coding knowledge with people like Hebe He, join a Google Developer Group near you.

Join us for Google for Games Developer Summit 2021

Posted by Greg Hartrell, Head of Product Management, Games on Android & Google Play

Google for Games Dev Summit header

With a surge of new gamers and an increase in time spent playing games in the last year, it’s more important than ever for game developers to delight and engage players. To help developers with this opportunity, the games teams at Google are back to announce the return of the Google for Games Developer Summit 2021 on July 12th-13th.

Hear from experts across Google about new game solutions they’re building to make it easier for you to continue creating great games, connecting with players, and scaling your business. Registration is free and open to all game developers.

Register for the free online event at g.co/gamedevsummit to get more details in the coming weeks. We can’t wait to share our latest innovations with the developer community.

What’s new for Android developers at Google I/O

Cross-posted on the Android Developers blog by Karen Ng, Director, Product Management & Jacob Lehrbaum, Director of Developer Relations, Android & Play

As Android developers, we are all driven by building experiences that delight people around the world. And with people depending on your apps more than ever, expectations are higher and your jobs as developers aren't getting easier. Today, at Google I/O, we covered a few of the ways we're trying to help out, whether through Android 12 (one of the biggest design changes ever), Jetpack, Jetpack Compose, Android Studio, or Kotlin, all of which help you build beautiful, high-quality apps. We're also helping you extend your apps wherever your users go, such as to wearables and larger-screened devices. You can watch the full Developer Keynote, but here are a few highlights:

Android 12: one of the biggest design updates ever.

The first Beta of Android 12 just started rolling out, and it's packed with lots of cool stuff. From new user safety features like permissions for Bluetooth and approximate location, enhancements to performance like expedited jobs and start-up animations, to delightful experiences with more interactive widgets and stretch overscrolling, this release is one of the biggest design updates to Android ever. You can read more about what's in Android 12 Beta 1 here, so you can start preparing your apps for the consumer release coming out later this year. Download the Beta and try it with your apps today!

Android 12 visual

Jetpack Compose: get ready for 1.0 in July!

For the last few years, we've been hard at work modernizing the Android development experience, listening to your feedback to keep the openness that is a hallmark of Android while becoming more opinionated about the right way to do things. You can see this throughout, from Android Studio, a performant IDE that can keep up with you, to Kotlin, a programming language that enables you to do more with less code, to Jetpack libraries that solve the hardest problems on mobile with backward compatibility.

The next step in this offering is Jetpack Compose - our modern UI toolkit to easily build beautiful apps for all Android devices. We announced Compose here at Google I/O two years ago and since then have been building it in the open, listening to your feedback to make sure we got it right. With the Compose Beta earlier this year, developers around the world have created some truly beautiful, innovative experiences in half the time, and the response to the #AndroidDevChallenge blew our socks off!

With the forthcoming update of Material You (which you can read more about here), we’ll be adding new Material components as well as further support for building for large screens, making it fast and easy to build a gorgeous UI. We’re pressure testing the final bits in Compose and will release 1.0 Stable in July—so get ready!

Android Studio Arctic Fox: Design, Devices, & Developer Productivity!

Android Studio Arctic Fox (2020.3.1) Beta, the latest release of the official, powerful Android IDE, is out today to help you build quality apps more easily and faster. We have delivered and updated our suite of tools around three major themes: accelerating your UI design, extending your app to new devices, and boosting your developer productivity. With this latest release you can create modern UIs with Compose tooling, see test results across multiple devices, and more easily debug databases and background tasks with the App Inspector. We're also making your apps more accessible with the Accessibility Scanner and more performant with Memory Profiler. And for faster build speeds, we have the Android Gradle plugin 7.0, new DSL, and variant APIs. You can learn more about the Android Studio updates here.

Android Studio Arctic Fox

Kotlin: the most used language by professional Android devs

Kotlin is now the most used primary language by professional Android developers according to our recent surveys; in fact, over 1.2M apps in the Play Store use Kotlin, including 80% of the top 1000 apps. And here at Google, we love it too: 70+ Google apps like Drive, Home, Maps and Play use Kotlin. And with a brand-new native solution to annotation processing for Kotlin built from the ground up, Kotlin Symbol Processing is available today: a powerful yet simple API for parsing Kotlin code directly, with speeds up to 2x faster with libraries like Room.

Android Jetpack: write features, not boilerplate

With Android Jetpack, we built a suite of libraries to help reduce boilerplate code so you can focus on the code you care about. Over 84% of the top 10,000 apps are now using a Jetpack library. And today, we’re unpacking some new releases for Jetpack, including Jetpack Macrobenchmark (Alpha) to capture large interactions that affect your app startup and jank before your app is released, as well as a new Kotlin Coroutines API for persisting data more efficiently via Jetpack DataStore (Beta). You can read about all the updates in Android Jetpack here.

Now is the time: a big step for Wear

The best thing about modern Android development is that these tools have been purpose built to help make it easy for you to build for the next era of Android, which is all about enabling devices connected to your phone–TVs, cars, watches, tablets–to work better together.

Starting today, we take a huge step forward with wearables. First, we introduced a unified platform built jointly with Samsung, combining the best of Wear and Tizen. Second, we shared a new consumer experience with revamped Google apps. And third, a world-class health and fitness service from Fitbit is coming to the platform. As an Android developer, it means you’ll have more reach, and you’ll be able to use all of your existing skills, tools, and APIs that make your mobile apps great, to build for a single wearables platform used by people all over the world.

From new Jetpack APIs for Wear, tailored for small screens and designed to optimize battery life, to the Jetpack Tiles API, which lets you create a custom Tile for all the devices in the Wear ecosystem, there are a number of new features to help you build on Wear. And with a new set of APIs for Health and Fitness, created in collaboration with Samsung, data collection from sensors and metrics computation (from heart rate to calories to daily distance) is streamlined, consistent, and accurate, all from one trusted source. All this comes together in new tooling with the release of Android Studio Arctic Fox Beta, like easier pairing to test apps and even a virtual heart rate sensor in the emulator. And when your app is ready, users will have a much easier time discovering the world of Wear apps on Google Play, thanks to some big updates to discoverability. You can read more about all of the Wear updates here.

Tapping the momentum of larger screens, like tablets, Chrome OS and foldables

When it comes to larger screens (tablets, foldables, and Chrome OS laptops), there is huge momentum. People are increasingly relying on large screen devices to stay connected with family and friends, go to school, or work remotely. In fact, there are over 250 million active large screen Android devices. Last year, Chrome OS grew 92% year over year, five times the rate of the PC market, making Chrome OS the fastest growing and second-most popular desktop OS. To help you take advantage of this momentum, we're giving you APIs and tools that make optimizing the experience easier: having your content resize automatically to more space by using SlidingPaneLayout 1.2.0 and a new vertical navigation rail component, max widths on components to avoid stretched UIs, as well as updates to the platform, Chrome OS, and Jetpack WindowManager so apps work better by default. You can learn more here.

Google Duo's optimized experience for foldable devices

This is just a taste of some of the new ways we're making it easier for you to build high quality Android apps. Later today, we'll be releasing more than 20 technical sessions on Android and Play, covering a wide range of topics such as background tasks, privacy, and machine learning on Android, or the top 12 tips to get you ready for Android 12. If building for cars, TVs, and wearables is your thing, we've got that covered, too. You can find all these sessions, and more, on the I/O website. Beyond the sessions and news, there are a number of fun ways to virtually connect with Googlers and other developers at this year's Google I/O. You can check out the Android dome in I/O Adventure, where you can see new blog posts, videos, codelabs, and more. Maybe even test out your Jetpack Compose skills or take a virtual tour of the cars inside our dome!

Google Pay introduces a Flutter plugin for payments

Posted by Jose Ugia, Developer Programs Engineer, Google Pay and Anthony Panissidi, Technical Writer, Google Developer Studio

Flutter and Firebase logos

We made it easier than ever to integrate Google Pay in Flutter apps!

Our open source Flutter plugin simplifies the addition of payments to Flutter apps on iOS and Android.

The plugin gives you the ability to add functionality to your apps across platforms with a single and familiar codebase written in Dart.

It adapts the common steps required to facilitate payments to the way Flutter constructs components, works with the app's user interface, and exchanges information between the native and Dart ends.

Now, as a Flutter developer, you can easily reap the benefits of Google Pay, which lets you provide users with a secure and fast checkout experience that increases conversions, and frees you from the need to manage credit cards and payments.

How it works

To use the plugin, add pay as a dependency in your pubspec.yaml file. For more information, see Adding a package dependency to an app.
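For example, a minimal pubspec.yaml entry might look like the following; the version constraint here is only an example, so check pub.dev for the latest release:

pubspec.yaml

dependencies:
  pay: ^1.0.0  # example constraint; use the latest version from pub.dev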

To configure a payment, load a payment profile with the desired configuration, either with a local file or one retrieved from a remote server. For a complete list of all configuration options, see the PaymentDataRequest object.

Here's an example of a JSON file that defines payment options:

sample_payment_configuration.json

{
  "provider": "google_pay",
  "data": {
    "environment": "TEST",
    "apiVersion": 2,
    "apiVersionMinor": 0,
    "allowedPaymentMethods": [{
      "type": "CARD",
      "tokenizationSpecification": {
        "type": "PAYMENT_GATEWAY",
        "parameters": {
          "gateway": "example",
          "gatewayMerchantId": "gatewayMerchantId"
        }
      },
      "parameters": {
        "allowedCardNetworks": ["VISA", "MASTERCARD"],
        "allowedAuthMethods": ["PAN_ONLY", "CRYPTOGRAM_3DS"],
        "billingAddressRequired": true,
        "billingAddressParameters": {
          "format": "FULL",
          "phoneNumberRequired": true
        }
      }
    }],
    "merchantInfo": {
      "merchantId": "01234567890123456789",
      "merchantName": "Example Merchant Name"
    },
    "transactionInfo": {
      "countryCode": "US",
      "currencyCode": "USD"
    }
  }
}

For more examples of JSON files that define payment options, take a look at the example/assets/ folder.

Now you can use this configuration to add the Google Pay button to your app and forward the payment method selected by your users.

Here's an example of a Dart file:

import 'package:pay/pay.dart';

const _paymentItems = [
  PaymentItem(
    label: 'Total',
    amount: '99.99',
    status: PaymentItemStatus.final_price,
  )
];

// In your Widget build() method
GooglePayButton(
  paymentConfigurationAsset: 'sample_payment_configuration.json',
  paymentItems: _paymentItems,
  style: GooglePayButtonStyle.black,
  type: GooglePayButtonType.pay,
  onPaymentResult: onGooglePayResult,
),

// In your Stateless Widget class or State
void onGooglePayResult(paymentResult) {
  // Send the resulting Google Pay token to your server or PSP
}
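As a rough sketch of what that callback might do, the example below pulls the Google Pay token out of the payment result and forwards it to a hypothetical backend endpoint using the http package. The endpoint URL and request body are assumptions for illustration; your payment service provider will define its own API.

import 'dart:convert';

import 'package:http/http.dart' as http;

// A minimal sketch: forward the Google Pay token to a (hypothetical) backend.
Future<void> onGooglePayResult(paymentResult) async {
  // The token lives under paymentMethodData.tokenizationData.token in the
  // Google Pay payment data passed to onPaymentResult.
  final token =
      paymentResult['paymentMethodData']['tokenizationData']['token'];

  await http.post(
    Uri.parse('https://example.com/api/charge'), // hypothetical endpoint
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({'googlePayToken': token}),
  );
}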

How to use it

The best part of this news is that you can use the plugin today. To get started with it, check out the pay package on pub.dev. We also want to hear your thoughts and feature requests, and look forward to your contributions on GitHub.

Learn more

Want to learn more about Google Pay? Here's what you can do:

A new open source content library from Google

Posted by Sebastian Trzcinski-Clément, Program Manager, Developer Relations

Developers around the world are constantly creating open source tools and tutorials but have a hard time getting them discovered. The content they publish often spans many different sites, from GitHub to Medium. That's why we decided to create a space where we can highlight the best projects related to Google technologies in one place: introducing the Developer Library.

GIF scrolling through Developer Library

The platform showcases blog posts and open source tools with easy-to-use navigation. Content is categorized by product area: Machine Learning, Flutter, Firebase, Angular, Cloud, and Android, with more to come.

What makes the Developer Library unique is that each piece featured on the site is reviewed, in detail, by a team of Google experts for accuracy and relevancy, so you know when you view the content on the site it has the stamp of approval from Google.

To demonstrate the breadth of content on the site, here are some examples of published content pieces and video interviews with the developers who authored these posts:

There are two ways you can help us grow the Developer Library.

First, if you have great content that you would like to see published on the Developer Library, please submit it for review here.

Second, the team welcomes feedback, so if you have anything you'd like to see added or changed on the Developer Library site, complete this short feedback form or just file an issue on GitHub.

We can't wait to see what you build together!

Unlock new use cases and increase developer velocity with the latest ARCore updates

Posted by Ian Zhang, Product Manager, AR & Zeina Oweis, Product Manager, AR

Two phones showing animated screens

ARCore was created to provide developers with simple yet powerful tools to seamlessly blend the digital and physical worlds. Over the last few years, we’ve seen developers create apps that entertain, engage, and help people in different ways–from letting fans interact with their favorite characters, to placing virtual electronics and furniture for the perfect home setup and beyond.

At I/O this year, we continue on the mission of improving and building AR developer tools. With the launch of ARCore 1.24, we’re introducing the Raw Depth API and the Recording and Playback API. These new APIs will enable developers to create new types of AR experiences and speed up their development cycles.

Increase AR realism and precision with depth

When we launched the Depth API last year, hundreds of millions of Android devices gained the ability to generate depth maps in real time without needing specialized depth sensors. Data in these depth maps was smoothed, filling in any gaps that would otherwise occur due to missing visual information, making it easy for developers to create depth effects like occlusion.

The new ARCore Raw Depth API provides more detailed representations of the geometry of objects in the scene by generating “raw” depth maps with corresponding confidence images. These raw depth maps include unsmoothed data points, and the confidence images provide the confidence of the depth estimate for each pixel in the raw depth map.

4 examples of ARCore Raw Depth API

Improved geometry from the Raw Depth API enables more accurate depth measurements and spatial awareness. In the ARConnect app, these more accurate measurements give users a deeper understanding of their physical surroundings. The AR Doodads app utilizes raw depth’s spatial awareness to allow users to build realistic virtual Rube Goldberg machines.

ARConnect by PHORIA (left) and AR Doodads by Jam3 (right) use the improved geometry from the Raw Depth API

The confidence image in the Raw Depth API allows developers to filter depth data in real time. For example, TikTok’s newest effect enables users to upload an image and wrap it onto real world objects. The image conforms to surfaces where there is high confidence in the underlying depth estimate. The ability for developers to filter for high confidence depth data is also essential for 3D object and scene reconstruction. This can be seen in the 3D Live Scanner app, which enables users to scan their space and create, edit, and share 3D models.

TikTok by TikTok Pte. Ltd. (left) and 3D Live Scanner by Lubos Vonasek Programmierung (right) use confidence images from the ARCore Raw Depth API
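To illustrate the idea, here is a minimal Dart sketch of confidence-based filtering. It is purely conceptual: ARCore itself is accessed through its Android, NDK, or Unity SDKs rather than Dart, and the flat lists and 0-255 confidence encoding below are assumptions for the example.

// Illustrative only: filter a raw depth map by its per-pixel confidence.
// `depthMillimeters` and `confidence` stand in for the raw depth image and
// confidence image (here assumed to be 0-255, higher = more confident).
List<int?> filterDepthByConfidence(
  List<int> depthMillimeters,
  List<int> confidence, {
  int minConfidence = 200,
}) {
  assert(depthMillimeters.length == confidence.length);
  return List<int?>.generate(
    depthMillimeters.length,
    // Keep high-confidence depth values; drop the rest (null = no depth).
    (i) => confidence[i] >= minConfidence ? depthMillimeters[i] : null,
  );
}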

We’re also introducing a new type of hit-test that uses the geometry from the depth map to provide more hit-test results, even in low-texture and non-planar areas. Previously, hit-test worked best on surfaces with lots of visual features.

Hit Results with Planes (left): Works best on horizontal, planar surfaces with good texture.

Hit Results with Depth (right): Gives more results, even on non-planar or low-texture areas.

The lifeAR app uses this improved hit-test to bring AR to video calls. Users see accurate virtual annotations on the real-world objects as they tap into the expertise of their social circle for instant help to tackle everyday problems.

lifeAR by TeamViewer uses the improved depth hit-test

As with the previous Depth API, these updates leverage depth from motion, making them available on hundreds of millions of Android devices without relying on specialized sensors. Although depth sensors such as time-of-flight (ToF) sensors are not required, having them will further improve the quality of your experiences.

In addition to these apps, the ARCore Depth Lab has been updated with examples of both the Raw Depth API and the depth hit-test. You can find those and more on the Depth API documentation page and start building with Android and Unity today.

Increase developer velocity and post-capture AR

A recurring pain point for AR developers is the need to continually test in specific places and scenarios. Developers may not always have access to the location, lighting will change, and sensors won’t catch the exact same information during every live camera session.

The new ARCore Recording and Playback API addresses this by enabling developers to record not just video footage, but also IMU and depth sensor data. On playback, this same data can be accessed, enabling developers to duplicate the exact same scenario and test the experience from the comfort of their workspace.

DiDi used the Recording and Playback API to build and test AR directions in their DiDi-Rider app. They were able to save 25% on R&D and testing costs, 60% on travel costs, and accelerated their development cycle by 6 months.

DiDi-Rider by Didi Chuxing saves on development resources with the Recording and Playback API

In addition to increasing developer velocity, recording and playback unlocks opportunities for new AR experiences, such as post-capture AR. Using videos enables asynchronous AR experiences that remove time and place constraints. For instance, when visualizing AR furniture, users no longer have to be in their home. They can instead pull up a video of their home and accurately place AR assets, enabling them to “take AR anywhere”.

Jump AR by SK Telecom uses the Recording and Playback API to transport scenes from South Korea right into users’ homes to augment with culturally relevant volumetric and 3D AR content.

JumpAR by SKT uses Recording and Playback to bring South Korea to your home

VoxPlop! by Nexus Studios is experimenting with the notion of Spatial Video co-creation, where users can reach in and interact with a recorded space rather than simply placing content on top of a video. The Recording and Playback API enables users to record videos, drop in 3D characters and messages, and share them with family and friends.

VoxPlop! by Nexus Studios uses the Recording and Playback API to experiment with Spatial Video co-creation

Learn more and get started with the Recording and Playback API docs.

Get started with ARCore today

These latest ARCore updates round out a robust set of powerful developer tools for creating engaging and realistic AR experiences. With over a billion lifetime installs and 850 million compatible devices, ARCore makes augmented reality accessible to nearly everyone with a smartphone. We're looking forward to seeing how you innovate and reach more users with ARCore. To learn more and get started with the new APIs, visit the ARCore developer website.

New for I/O: Assistant tools and features for Android apps and Smart Displays

Posted by Rebecca Nathenson, Director of Product for the Google Assistant Developer Platform

New Assistant tools at Google IO header

Today at I/O, we shared some exciting new product announcements to help you more easily bring Google Assistant to your Android apps and create more engaging content on smart displays.

Assistant development made easy with new Android APIs

App Actions helps you easily bring Google Assistant to your Android app and complete user queries of all kinds, from booking a ride to posting a message on social media. Companies such as MyFitnessPal and Twitter are already using App Actions to help their users get things done, just by using their voice. You can enable App Actions in Android Studio by mapping built-in intents to specific features and experiences within your apps. Here are new ways you can help users easily navigate your content through voice queries and proactive suggestions.

Better support for Assistant built-in intents with Capabilities

Capabilities is a new framework API available in beta today that lets you declare support for common tasks defined by built-in intents. By leveraging pre-built requests from our catalog of intents, you can offer users ways to jump to specific activities within your app.

For example, the Yahoo Finance app uses Capabilities to let users jump directly to the Verizon stock page just by saying “Hey Google, show me Verizon’s stock on Yahoo Finance.” Similarly, Snapchat users can use their voice to add filters and send them to friends: “Hey Google, send a snap with my Curry sneakers.”
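As a rough sketch, a capability declaration lives in your app's shortcuts.xml and maps a built-in intent and its parameters onto one of your activities. The built-in intent, package, class, and extra names below are placeholders, and this example is only an approximation of the schema; consult the App Actions documentation for the exact format.

<!-- res/xml/shortcuts.xml: illustrative sketch only; names are placeholders
     and the exact schema is defined by the App Actions documentation. -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.GET_THING">
    <intent
        android:targetPackage="com.example.myapp"
        android:targetClass="com.example.myapp.SearchActivity">
      <!-- Map the built-in intent's parameter to an intent extra. -->
      <parameter
          android:name="thing.name"
          android:key="query" />
    </intent>
  </capability>
</shortcuts>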

Improved user discoverability with Shortcuts in Android 12

App shortcuts are already a popular way to automate the most common tasks on Android. Thanks to the new APIs for Shortcuts in Android 12, it's now easier to find all the Assistant queries that apps support. If you build an Android Shortcut, it will automatically show up in the Assistant Shortcuts gallery, so users can choose to set up a personal voice command for your app when they say "Hey Google, shortcuts."

3 phones showing shortcuts from Assistant

Google Assistant can also suggest relevant shortcuts to help drive traffic to your app. For example, when using the eBay app, people will see a suggested Google Assistant Shortcut appear on the screen and have the option to create a shortcut for "show my bids."

We also introduced the Google Shortcuts Integration library, which identifies shortcuts pushed by the Shortcuts Jetpack module and makes them available to Assistant for use in handling related voice queries. This lets Google Assistant suggest relevant shortcuts to users and help drive traffic to your app.

Get immediate answers and updates right from Assistant using Widgets, coming soon

Improvements to Android 12 also make it easier to discover glanceable content with Widgets by mapping them to specific built-in intents using the Capabilities API. We're also looking at how to easily bring driving-optimized widgets to Android Auto in the future. The integration with Assistant will enable one-shot answers, quick updates, and multi-step interactions with the same widget.

For example, with Dunkin's widget implementation, you can say "Hey Google, reorder from Dunkin'" to select from previous drinks and place the order. Strava's widget helps a user track how many miles they ran in a week: say "Hey Google, check my miles on Strava" and it will show up right on the lock screen.

Strava widget showing how many miles ran in a week

Build high quality Conversational Actions for smart displays

Last year, we introduced a number of improvements to the Assistant platform for smart displays, such as Actions Builder, Actions SDK and new built-in intents to improve the experience for both developers and users. Here are more improvements rolling out soon to make building conversational actions on smart displays even better.

New features to improve the developer experience

Interactive Canvas helps you build touch- and voice-controlled games and storytelling experiences for the Assistant using web technologies like HTML, CSS, and JavaScript. Companies such as CoolGames, Zynga, and GC Turbo have already used Canvas to build games for smart displays.

Since launch, we've gotten great feedback from developers that it would be simpler and faster to implement core logic in web code. To enable this, the Interactive Canvas API will soon provide access to text-to-speech (TTS), natural language understanding (NLU), and storage APIs that will allow developers to trigger these capabilities from client-side code. These APIs will provide experienced web developers with a familiar development flow and enable more responsive Canvas actions.

We’re also giving you a wider set of options around how to release your actions. Coming soon, in the Actions Console, you will be able to manage your releases by launching in stages. For example, you can launch to one country first and then expand to more later, or you can launch to just a smaller percentage and gradually roll out over time.

Improving the user experience on smart displays

You'll also see improvements that will enhance visual experiences on the smart display. For example, you can now remove the persistent header, which allows you to utilize the full real estate of the device and provide users with fully immersive experiences.

Before Interactive Canvas brought customized touch interfaces to the Smart Display, we provided a simple way to stop TTS from playing by tapping anywhere on the screen of the device. However, with more multi-modal experiences being released on Smart Displays, there are use cases where it is important to continue playing TTS while the user touches the display. Developers will soon have the option to enable persistent TTS for their actions.

We’ve also added support for long-form media sessions with updates to the Media API so you can start playback from a specific moment, resume where a previous session stopped, and adapt conversational responses based on media playback context.

Easier transactions for your voice experiences

We know how important it is to have the tools you need to build a successful business on our platform. In October of last year, we made a commitment to make it easier for you to add seamless voice-based and display-based monetization capabilities to your experience. On-device CVC and credit card entry will soon be available on smart displays. Both of these features make on-device transactions much easier, reducing the need to redirect users to their mobile devices.

We hope you are able to leverage all these new features to build engaging experiences and reach your users easily, both on mobile and at home. Check out our technical sessions, workshops and more from Google I/O on YouTube and get started with App Actions and Conversational Actions today!
