Tag Archives: I/O

Get ready for Google I/O

Posted by Timothy Jordan, Director, Developer Relations & Open Source

I/O is just a few days away and we couldn’t be more excited to share the latest updates across Google’s developer products, solutions, and technologies. From keynotes to technical sessions and hands-on workshops, these announcements aim to help you build smarter and ship faster.

Here are some helpful tips to maximize your experience online.


Start building your personal I/O agenda

Starting now, you can save the Google and developer keynotes to your calendar and explore the program to preview content. Here are just a few noteworthy examples of what you’ll find this year:

What's new in Android
Get the latest news in Android development: Android 14, form factors, Jetpack + Compose libraries, Android Studio, and performance.
What’s new in Web
Explore new features and APIs that became stable across browsers on the Web Platform this year.
What’s new in Generative AI
Discover a new suite of tools that make it easy for developers to leverage and build on top of Google's large language models.
What’s new in Google Cloud
Learn how Google Cloud and generative AI will help you develop faster and more efficiently.

For the best experience, create or connect a developer profile and start saving content to My I/O to build your personal agenda. With over 200 sessions and other learning material, there’s a lot to cover, so we hope this will help you get organized.

This year we’ve introduced development focus filters to help you navigate content faster across mobile, web, AI, and cloud technologies. You can also browse content by topic, type, or experience level to find what you’re interested in faster.


Connect with the community

After the keynotes, you can talk to Google experts and other developers online in I/O Adventure chat. Here you can ask questions about new releases and learn best practices from the global developer community.

If you’re craving community now, visit the Community page to meet people with similar interests in your area or find a watch party to attend.

We hope these updates are useful, and we can’t wait to connect online in May!

Android Studio Arctic Fox (2020.3.1) Beta

Posted by Paris Hsu, Product & Design, Android

Android Studio Arctic Fox splash screen

Note: As we announced late last year, we've changed our version numbering scheme to match the number for the IntelliJ IDE that Android Studio is based on, 2020.3, plus our own patch number, as well as a handy code name to make it easier to remember and refer to. We'll be using code names in alphabetical order; the first is Arctic Fox, now in beta, and the next is Bumblebee, now in canary.

Today, we are excited to unveil Android Studio Arctic Fox (2020.3.1) Beta ❄️, the latest release of the official Android IDE, focused on Design, Devices, and Developer Productivity. It is available for download now on the beta channel so you can try out all the new features launched this week during Google I/O 2021!

Inspired by developer communities around the world, who despite the challenges of this past year continue to create amazing and innovative apps, we have delivered and updated our suite of tools around three major themes:

  • Rapid UI design - with Jetpack Compose, it's never been easier to create modern UIs, and we have tools to help you complete that journey: you can create previews in different configurations and navigate your code with Compose Preview, test components in isolation with Deploy Preview to Device, and inspect the full app with the Layout Inspector. Throughout these iterations, you can quickly edit strings and numbers and see immediate updates. Moreover, with the Accessibility Scanner in the Layout Editor, your View-based layouts are audited for accessibility problems.
  • New devices, both large and small - reimagine and extend your app beyond phones. Whether it's for Wear OS, Google TV, or Android Auto, we have prepared new emulators, system images, and even authentic simulations for different testing scenarios: pair your watch and phone emulators with Wear OS Pairing, take a virtual run with the Wear OS heart rate sensor, switch channels with the Google TV Remote Control, and drive with Automotive OS Sensor Replay.
  • Developer productivity boost - we want to ensure your workspace and environment are ready for the latest systems and optimized for speed and quality. You can now enjoy a whole slew of new features and improvements that come with a major update to IntelliJ 2020.3, test your app with what Android 12 has to offer, improve your app performance with the updated UI for the Memory Profiler, understand background task relationships with the WorkManager Inspector, and use Non-transitive R classes IDE Refactoring to increase build speed.

In short, this is an upgrade you do not want to miss! ✨ There are many more features and improvements surrounding these themes you can find in this Beta version, so read or watch below for further highlights. Or, skip the reading, download Android Studio Arctic Fox (2020.3.1) Beta in the beta channel and try out the latest features yourselves today! Give us feedback and help us to continue to focus on the areas you care about most in the next version of Android Studio.

What's new in Android development tools (I/O 2021)


What’s in Android Studio Arctic Fox (2020.3.1) Beta

Below is a full list of new features in Android Studio Arctic Fox (2020.3.1) Beta, organized by the three major themes:

Design

  • Compose Preview - You can create previews of your Compose UI with Compose Preview! By using the @Preview annotation, you can visualize multiple composables at once in different configurations (e.g., themes and devices), and the previews also act as a visual map that helps you navigate your code.
Compose Preview
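
For orientation, here is a minimal sketch of what a previewable composable can look like; the composable, its text, and the preview names are illustrative, not taken from the release:

    import android.content.res.Configuration
    import androidx.compose.material.MaterialTheme
    import androidx.compose.material.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.tooling.preview.Preview

    @Composable
    fun Greeting(name: String) {
        Text(text = "Hello, $name!")
    }

    // Each function annotated with @Preview shows up in the Compose Preview pane.
    // Defining several previews lets you check different configurations side by side.
    @Preview(name = "Light", showBackground = true)
    @Composable
    fun GreetingLightPreview() {
        MaterialTheme { Greeting(name = "Android") }
    }

    @Preview(name = "Dark", showBackground = true, uiMode = Configuration.UI_MODE_NIGHT_YES)
    @Composable
    fun GreetingDarkPreview() {
        MaterialTheme { Greeting(name = "Android") }
    }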

  • Layout Inspector for Compose - You can now inspect layouts written in Compose with Layout Inspector. Whether your app uses layouts fully written in Compose or layouts that use a hybrid of Compose and Views, the Layout Inspector helps you understand how your layouts are rendered on your running device or emulator, obtain rich details (such as parameters and modifiers passed to each composable), and debug issues that might arise. As you interact with the app, you now also have the option to either enable Live Updates to constantly stream data from your device, or reduce performance impact on your device by disabling live updates and clicking the Refresh action as needed.
Compose Layout Inspector

  • Deploy Preview to Device - Use this feature to deploy a snippet of your UI to a device or emulator. It helps you test small parts of your code on a device without having to start the full application. Your preview benefits from the same context (permissions, resources) as your application. Click the Deploy to device icon at the top of any Compose preview, or next to the @Preview annotation in the code editor gutter, and Android Studio will deploy that @Preview to your connected device or emulator.
Using Deploy to device from preview and gutter icon

  • Live Edit of literals - Live Editing of literals allows developers using Compose to quickly edit literals (strings, numbers, booleans) in their code and see the results immediately without needing to wait for compilation. The goal of the feature is to increase your productivity by having code changes appear near instantaneously in the previews, emulator, or physical device.
Editing numbers and strings update immediately in the preview and on device

  • Accessibility Scanner for Layout Editor - Android Studio now integrates with the Android Accessibility Test Framework to help you find accessibility issues in your layouts. When using the Layout Editor, click the error report button to launch the panel. The tool reports accessibility-related issues and also offers suggested fixes for some common problems (e.g., missing content descriptions or low contrast).
Accessibility Test Framework Scanner in Layout Editor

Devices

  • Wear OS Pairing - We created a new Wear OS pairing assistant to guide developers step by step through pairing Wear OS emulators with physical or virtual phones directly in Android Studio! You can start by going to device dropdown > Wear OS emulator pairing assistant. Note that this will currently pair with Wear OS 2 companion, and Wear OS 3 will be coming soon. Learn more.
Wear OS emulator pairing assistant dialog

Phone + Watch emulators paired successful state

  • New Wear OS system images - a developer preview of the Wear OS 3 system image is now available so that you can use and play with the newest version of Wear OS!
Wear OS system image

  • Heart Rate Sensor for Wear OS Emulators - To help you test your Wear OS apps, the Android Emulator now supports the Heart Rate Sensor API when you run the Wear OS emulator. Make sure you are running at least Android Emulator v30.4.5, downloaded via the Android Studio SDK Manager.
Heart Rate Sensor for Wear OS Emulators

  • Google TV Remote Control - On top of running the new Google TV UI, we now have an updated remote control panel that maps the new Google TV remote features, such as user profiles and settings.
Google TV remote controls

  • New Google TV system images - We have updated the system images to reflect the new Google TV experience allowing you to freely explore the UI.
Google TV system image

  • Automotive OS Sensor Replay - You can now use the Android Automotive emulator to simulate driving scenarios, with the ability to replay car sensor data (e.g. speed, gear), completing your development and testing workflow.
Android Automotive OS Sensor replay

Developer Productivity

  • IntelliJ Platform Update - Android Studio Arctic Fox (2020.3.1) Beta includes the IntelliJ 2020.3 platform release, which has many new features such as Debugger interactive hints, new Welcome screen, and a ton of new code editor enhancements to speed up your workflow. Learn more.
  • Android 12 lint checks - We’ve added lint checks that are specific to building your app for Android 12 so that you can get guidance in context. To name a few -- we have built checks for custom declarations of splash screens, coarse location permission for fine location usage, media formats, and high sensor sampling rate permission.
  • Non-transitive R classes Refactoring - Using non-transitive R classes with the Android Gradle Plugin can lead to faster builds for applications with multiple modules. It prevents resource duplication by ensuring that each module only contains references to its own resources, without pulling references from dependencies. You can access this feature by going to Refactor > Migrate to Non-transitive R Classes.
  • Apple Silicon Support Preview - For those using macOS on Apple Silicon (arm64) hardware, Android Studio Arctic Fox provides preview support for this new architecture. The arm64 platform support is still under active development, but we wanted to provide a release in order to get your feedback. Since this is a preview release for the arm64 architecture, you will need to download this version separately from the Android Studio download archive page and look for Mac (Apple Silicon).
  • Extended controls in the Emulator tool window - Developers now have access to all extended emulator controls when the emulator is opened in a tool window. The extended controls give developers powerful tools for testing their apps, such as navigation playback, virtual sensors, and snapshots, all within Android Studio. To launch the Emulator within Android Studio, go to Android Studio's Preferences > Tools > Emulator and select “Launch in a tool window”.
Extended controls in the Emulator tool window

  • Background Task Inspector - You can now utilize the Background Task Inspector to visualize, monitor, and debug your app's background workers when using WorkManager library 2.5.0 or higher. You can access it by going to View > Tool Windows > App Inspection from the menu bar. When you deploy an app on a device running API level 26 and higher, you should see active workers in the Background Task Inspector tab, as shown below. Learn more.
Background Task Inspector
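
As a reference point, a minimal Kotlin sketch of a worker the inspector would surface could look like this; the SyncWorker class, its tag, and the enqueue helper are illustrative names, not part of the tooling:

    import android.content.Context
    import androidx.work.OneTimeWorkRequestBuilder
    import androidx.work.WorkManager
    import androidx.work.Worker
    import androidx.work.WorkerParameters

    // A simple background worker. Once enqueued, it appears in the
    // Background Task Inspector while the app runs on API level 26+.
    class SyncWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
        override fun doWork(): Result {
            // ...perform the background sync here...
            return Result.success()
        }
    }

    fun enqueueSync(context: Context) {
        val request = OneTimeWorkRequestBuilder<SyncWorker>()
            .addTag("sync") // tags make workers easier to identify in the inspector
            .build()
        WorkManager.getInstance(context).enqueue(request)
    }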

  • Parallel device testing with Test Matrix - Instrumentation tests can now be run across multiple devices in parallel and investigated using a new specialized instrumentation test results panel, called the Test Matrix, which streams the test results in real time. Learn more.
Test matrix running tests across multiple devices in parallel
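
Any standard instrumented test can take advantage of this; a minimal sketch is below (the package name is illustrative), and running it against several connected devices shows one column per device in the Test Matrix:

    import androidx.test.ext.junit.runners.AndroidJUnit4
    import androidx.test.platform.app.InstrumentationRegistry
    import org.junit.Assert.assertEquals
    import org.junit.Test
    import org.junit.runner.RunWith

    // Runs on every selected device; the Test Matrix streams each device's
    // pass/fail result in real time as the run progresses.
    @RunWith(AndroidJUnit4::class)
    class ExampleInstrumentedTest {
        @Test
        fun useAppContext() {
            val appContext = InstrumentationRegistry.getInstrumentation().targetContext
            assertEquals("com.example.myapp", appContext.packageName) // illustrative package name
        }
    }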

  • Memory Profiler new recording UI - We have consolidated the Memory Profiler UI for different recording activities, such as capturing a heap dump and recording Java, Kotlin, and native memory allocations.
Memory Profiler: recorded Java / Kotlin Allocations

  • Updated system requirements - In order to ensure that we provide the best experience for Android developers, we are updating the system requirements when using Android Studio. These requirements also represent the configurations we use to thoroughly test Android Studio to maintain high quality and performance, and we plan to update them more frequently going forward. So, while you’re still able to use systems that fall below the requirements, we can’t guarantee compatibility or support when doing so. You can see the updated system requirements on the official developer site.

To recap, Android Studio Arctic Fox (2020.3.1) Beta includes these new enhancements & features:

Design

  • Compose Preview
  • Compose Layout Inspector
  • Deploy Preview to Device
  • Live Edit of literals
  • Accessibility Scanner in Layout Editor

Devices

  • Wear OS Pairing
  • Heart Rate Sensor
  • New Wear OS system images
  • Google TV Remote Control
  • Google TV system Images
  • Automotive OS Sensor Replay

Productivity

  • IntelliJ 2020.3 Platform Update
  • Android 12 lint checks
  • Non-transitive R classes Refactoring
  • Apple Silicon Support Preview
  • Android Emulator Extended Controls
  • Background Task Inspector
  • Test matrix
  • Memory Profiler new recording UI

You might also have seen other new features at I/O which are not included in the list above; they are included in Android Studio Bumblebee (2021.1.1) Canary, since these features were not quite ready for a beta channel release:

Design

  • Interactive Compose preview
  • Compose Animation preview
  • Preview Configuration Picker
  • Animated vector drawable preview
  • Compose Blueprint Mode
  • Compose Constraints Preview for ConstraintLayout

Devices

  • Automotive OS USB Passthrough - Coming soon
  • Automotive OS Rotary Controls - Coming soon

Productivity

  • Kotlin Coroutines debugger
  • Device Manager
  • Gradle Instrumented Test Runner Integration in Android Studio
  • Gradle Managed Devices

Sessions at Google I/O 2021

With this exciting release, the Android Studio team also presented a series of sessions about Android Studio. Watch the following videos to see the latest features in action and to get tips & tricks on how to use Android Studio:


Getting Started

Android Studio Arctic Fox (2020.3.1) is a big release, and now is a good time to download and check out the Beta release to incorporate the new features into your workflow. The beta release is near stable-release quality, but as with any beta release, bugs may still exist, so if you do find an issue, let us know so we can work to fix it. If you’re already using Android Studio, you can check for updates on the Beta channel from the navigation menu (Help > Check for Update [Windows/Linux], Android Studio > Check for Updates [OS X]). When you update to the beta, you will get access to the new version of Android Studio and the Android Emulator.

As always, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, please file an issue. Follow us -- the Android Studio development team -- on Twitter and on Medium.

Unlock new use cases and increase developer velocity with the latest ARCore updates

Posted by Ian Zhang, Product Manager, AR & Zeina Oweis, Product Manager, AR

Two phones showing animated screens

ARCore was created to provide developers with simple yet powerful tools to seamlessly blend the digital and physical worlds. Over the last few years, we’ve seen developers create apps that entertain, engage, and help people in different ways–from letting fans interact with their favorite characters, to placing virtual electronics and furniture for the perfect home setup and beyond.

At I/O this year, we continue on the mission of improving and building AR developer tools. With the launch of ARCore 1.24, we’re introducing the Raw Depth API and the Recording and Playback API. These new APIs will enable developers to create new types of AR experiences and speed up their development cycles.

Increase AR realism and precision with depth

When we launched the Depth API last year, hundreds of millions of Android devices gained the ability to generate depth maps in real time without needing specialized depth sensors. Data in these depth maps was smoothed, filling in any gaps that would otherwise occur due to missing visual information, making it easy for developers to create depth effects like occlusion.

The new ARCore Raw Depth API provides more detailed representations of the geometry of objects in the scene by generating “raw” depth maps with corresponding confidence images. These raw depth maps include unsmoothed data points, and the confidence images provide the confidence of the depth estimate for each pixel in the raw depth map.

4 examples of ARCore Raw Depth API
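
As a rough sketch of how this looks in code, the snippet below (Kotlin, error handling trimmed, function names ours) configures a session for raw depth and reads the raw depth map with its confidence image on a frame update:

    import com.google.ar.core.Config
    import com.google.ar.core.Frame
    import com.google.ar.core.Session
    import com.google.ar.core.exceptions.NotYetAvailableException

    // Opt the session into raw depth, if the device supports it.
    fun enableRawDepth(session: Session) {
        val config = session.config
        if (session.isDepthModeSupported(Config.DepthMode.RAW_DEPTH_ONLY)) {
            config.depthMode = Config.DepthMode.RAW_DEPTH_ONLY
            session.configure(config)
        }
    }

    // Acquire the unsmoothed depth map and its per-pixel confidence image.
    fun onFrameUpdate(frame: Frame) {
        try {
            frame.acquireRawDepthImage().use { rawDepth ->
                frame.acquireRawDepthConfidenceImage().use { confidence ->
                    // Filter rawDepth by confidence before measuring or reconstructing geometry.
                }
            }
        } catch (e: NotYetAvailableException) {
            // Depth data is not available for this frame yet.
        }
    }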

Improved geometry from the Raw Depth API enables more accurate depth measurements and spatial awareness. In the ARConnect app, these more accurate measurements give users a deeper understanding of their physical surroundings. The AR Doodads app utilizes raw depth’s spatial awareness to allow users to build realistic virtual Rube Goldberg machines.

ARConnect by PHORIA (left) and AR Doodads by Jam3 (right) use the improved geometry from the Raw Depth API

The confidence image in the Raw Depth API allows developers to filter depth data in real time. For example, TikTok’s newest effect enables users to upload an image and wrap it onto real world objects. The image conforms to surfaces where there is high confidence in the underlying depth estimate. The ability for developers to filter for high confidence depth data is also essential for 3D object and scene reconstruction. This can be seen in the 3D Live Scanner app, which enables users to scan their space and create, edit, and share 3D models.

TikTok by TikTok Pte. Ltd. (left) and 3D Live Scanner by Lubos Vonasek Programmierung (right) use confidence images from the ARCore Raw Depth API

We’re also introducing a new type of hit-test that uses the geometry from the depth map to provide more hit-test results, even in low-texture and non-planar areas. Previously, hit-test worked best on surfaces with lots of visual features.

Hit Results with Planes (left): works best on horizontal, planar surfaces with good texture. Hit Results with Depth (right): gives more results, even on non-planar or low-texture areas.
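
In code, taking advantage of the new hit-test is mostly a matter of accepting depth-based results; a hedged Kotlin sketch for a tap at screen coordinates (x, y):

    import com.google.ar.core.DepthPoint
    import com.google.ar.core.Frame
    import com.google.ar.core.HitResult
    import com.google.ar.core.Plane
    import com.google.ar.core.Point

    // Returns the first hit backed by a plane, a feature point, or the depth map.
    fun firstUsableHit(frame: Frame, x: Float, y: Float): HitResult? =
        frame.hitTest(x, y).firstOrNull { hit ->
            val trackable = hit.trackable
            (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) ||
                trackable is Point ||
                trackable is DepthPoint // new: hits from depth geometry, even on low-texture areas
        }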

The lifeAR app uses this improved hit-test to bring AR to video calls. Users see accurate virtual annotations on the real-world objects as they tap into the expertise of their social circle for instant help to tackle everyday problems.

lifeAR by TeamViewer uses the improved depth hit-test

As with the previous Depth API, these updates leverage depth from motion, making them available on hundreds of millions of Android devices without relying on specialized sensors. Although depth sensors such as time-of-flight (ToF) sensors are not required, having them will further improve the quality of your experiences.

In addition to these apps, the ARCore Depth Lab has been updated with examples of both the Raw Depth API and the depth hit-test. You can find those and more on the Depth API documentation page and start building with Android and Unity today.

Increase developer velocity and post-capture AR

A recurring pain point for AR developers is the need to continually test in specific places and scenarios. Developers may not always have access to the location, lighting will change, and sensors won’t catch the exact same information during every live camera session.

The new ARCore Recording and Playback API addresses this by enabling developers to record not just video footage, but also IMU and depth sensor data. On playback, this same data can be accessed, enabling developers to duplicate the exact same scenario and test the experience from the comfort of their workspace.
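
At the API level this is a session-level switch; the Kotlin sketch below (file path and function names illustrative, error handling omitted) records a session to an MP4 dataset and later replays it:

    import com.google.ar.core.RecordingConfig
    import com.google.ar.core.Session

    // Record the live session (camera frames plus IMU and depth data) to an MP4 dataset.
    fun startRecording(session: Session, mp4Path: String) {
        val recordingConfig = RecordingConfig(session)
            .setMp4DatasetFilePath(mp4Path)
            .setAutoStopOnPause(true)
        session.startRecording(recordingConfig)
    }

    fun stopRecording(session: Session) {
        session.stopRecording()
    }

    // Replay the dataset so every session update delivers the recorded data,
    // letting you rerun the exact same scenario at your desk.
    fun startPlayback(session: Session, mp4Path: String) {
        session.pause()
        session.setPlaybackDataset(mp4Path)
        session.resume()
    }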

DiDi used the Recording and Playback API to build and test AR directions in their DiDi-Rider app. They were able to save 25% on R&D and testing costs and 60% on travel costs, and they accelerated their development cycle by 6 months.

DiDi-Rider by Didi Chuxing saves on development resources with the Recording and Playback API

In addition to increasing developer velocity, recording and playback unlocks opportunities for new AR experiences, such as post-capture AR. Using videos enables asynchronous AR experiences that remove time and place constraints. For instance, when visualizing AR furniture, users no longer have to be in their home. They can instead pull up a video of their home and accurately place AR assets, enabling them to “take AR anywhere”.

Jump AR by SK Telecom uses the Recording and Playback API to transport scenes from South Korea right into users’ homes to augment with culturally relevant volumetric and 3D AR content.

JumpAR by SKT uses Recording and Playback to bring South Korea to your home

VoxPlop! by Nexus Studios is experimenting with the notion of Spatial Video co-creation, where users can reach in and interact with a recorded space rather than simply placing content on top of a video. The Recording and Playback API enables users to record videos, drop in 3D characters and messages, and share them with family and friends.

VoxPlop! by Nexus Studios uses the Recording and Playback API to experiment with Spatial Video co-creation

Learn more and get started with the Recording and Playback API docs.

Get started with ARCore today

These latest ARCore updates round out a robust set of powerful developer tools for creating engaging and realistic AR experiences. With over a billion lifetime installs and 850 million compatible devices, ARCore makes augmented reality accessible to nearly everyone with a smartphone. We're looking forward to seeing how you innovate and reach more users with ARCore. To learn more and get started with the new APIs, visit the ARCore developer website.

Google I/O 2021: Being helpful in moments that matter

 

It’s great to be back hosting our I/O Developers Conference this year. Pulling up to our Mountain View campus this morning, I felt a sense of normalcy for the first time in a long while. Of course, it’s not the same without our developer community here in person. COVID-19 has deeply affected our entire global community over the past year and continues to take a toll. Places such as Brazil, and my home country of India, are now going through their most difficult moments of the pandemic yet. Our thoughts are with everyone who has been affected by COVID and we are all hoping for better days ahead.

The last year has put a lot into perspective. At Google, it’s also given renewed purpose to our mission to organize the world's information and make it universally accessible and useful. We continue to approach that mission with a singular goal: building a more helpful Google, for everyone. That means being helpful to people in the moments that matter and giving everyone the tools to increase their knowledge, success, health, and happiness. 

Helping in moments that matter

Sometimes it’s about helping in big moments, like keeping 150 million students and educators learning virtually over the last year with Google Classroom. Other times it’s about helping in little moments that add up to big changes for everyone. For example, we’re introducing safer routing in Maps. This AI-powered capability in Maps can identify road, weather, and traffic conditions where you are likely to brake suddenly; our aim is to reduce up to 100 million events like this every year. 

Reimagining the future of work

One of the biggest ways we can help is by reimagining the future of work. Over the last year, we’ve seen work transform in unprecedented ways, as offices and coworkers have been replaced by kitchen countertops and pets. Many companies, including ours, will continue to offer flexibility even when it’s safe to be in the same office again. Collaboration tools have never been more critical, and today we announced a new smart canvas experience in Google Workspace that enables even richer collaboration. 

Smart Canvas integration with Google Meet

Responsible next-generation AI

We’ve made remarkable advances over the past 22 years, thanks to our progress in some of the most challenging areas of AI, including translation, images and voice. These advances have powered improvements across Google products, making it possible to talk to someone in another language using Assistant’s interpreter mode, view cherished memories on Photos, or use Google Lens to solve a tricky math problem. 

We’ve also used AI to improve the core Search experience for billions of people by taking a huge leap forward in a computer’s ability to process natural language. Yet, there are still moments when computers just don’t understand us. That’s because language is endlessly complex: We use it to tell stories, crack jokes, and share ideas — weaving in concepts we’ve learned over the course of our lives. The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges. 

Today I am excited to share our latest research in natural language understanding: LaMDA. LaMDA is a language model for dialogue applications. It’s open domain, which means it is designed to converse on any topic. For example, LaMDA understands quite a bit about the planet Pluto. So if a student wanted to discover more about space, they could ask about Pluto and the model would give sensible responses, making learning even more fun and engaging. If that student then wanted to switch over to a different topic — say, how to make a good paper airplane — LaMDA could continue the conversation without any retraining.

This is one of the ways we believe LaMDA can make information and computing radically more accessible and easier to use (and you can learn more about that here). 

We have been researching and developing language models for many years. We’re focused on ensuring LaMDA meets our incredibly high standards on fairness, accuracy, safety, and privacy, and that it is developed consistently with our AI Principles. And we look forward to incorporating conversation features into products like Google Assistant, Search, and Workspace, as well as exploring how to give capabilities to developers and enterprise customers.

LaMDA is a huge step forward in natural conversation, but it’s still only trained on text. When people communicate with each other they do it across images, text, audio, and video. So we need to build multimodal models (MUM) to allow people to naturally ask questions across different types of information. With MUM you could one day plan a road trip by asking Google to “find a route with beautiful mountain views.” This is one example of how we’re making progress towards more natural and intuitive ways of interacting with Search.

Pushing the frontier of computing

Translation, image recognition, and voice recognition laid the foundation for complex models like LaMDA and multimodal models. Our compute infrastructure is how we drive and sustain these advances, and TPUs, our custom-built machine learning processors, are a big part of that. Today we announced our next generation of TPUs: the TPU v4. These are powered by the v4 chip, which is more than twice as fast as the previous generation. One pod can deliver more than one exaflop, equivalent to the computing power of 10 million laptops combined. This is the fastest system we’ve ever deployed, and a historic milestone for us. Previously, to get to an exaflop, you needed to build a custom supercomputer. And we'll soon have dozens of TPUv4 pods in our data centers, many of which will be operating at or near 90% carbon-free energy. They’ll be available to our Cloud customers later this year.

(Left) TPU v4 chip tray; (Right) TPU v4 pods at our Oklahoma data center 

It’s tremendously exciting to see this pace of innovation. As we look further into the future, there are types of problems that classical computing will not be able to solve in reasonable time. Quantum computing can help. Achieving our quantum milestone was a tremendous accomplishment, but we’re still at the beginning of a multiyear journey. We continue to work to get to our next big milestone in quantum computing: building an error-corrected quantum computer, which could help us increase battery efficiency, create more sustainable energy, and improve drug discovery. To help us get there, we’ve opened a new state of the art Quantum AI campus with our first quantum data center and quantum processor chip fabrication facilities.

Inside our new Quantum AI campus.

Safer with Google

At Google we know that our products can only be as helpful as they are safe. And advances in computer science and AI are how we continue to make them better. We keep more users safe by blocking malware, phishing attempts, spam messages, and potential cyber attacks than anyone else in the world.

Our focus on data minimization pushes us to do more, with less data. Two years ago at I/O, I announced Auto-Delete, which encourages users to have their activity data automatically and continuously deleted. We’ve since made Auto-Delete the default for all new Google Accounts. Now, after 18 months we automatically delete your activity data, unless you tell us to do it sooner. It’s now active for over 2 billion accounts.

All of our products are guided by three important principles: With one of the world’s most advanced security infrastructures, our products are secure by default. We strictly uphold responsible data practices so every product we build is private by design. And we create easy to use privacy and security settings so you’re in control.

Long term research: Project Starline

We were all grateful to have video conferencing over the last year to stay in touch with family and friends, and keep schools and businesses going. But there is no substitute for being together in the room with someone. 

Several years ago we kicked off a project called Project Starline to use technology to explore what’s possible. Using high-resolution cameras and custom-built depth sensors, it captures your shape and appearance from multiple perspectives, and then fuses them together to create an extremely detailed, real-time 3D model. The resulting data is many gigabits per second, so to send an image this size over existing networks, we developed novel compression and streaming algorithms that reduce the data by a factor of more than 100. We also developed a breakthrough light-field display that shows you the realistic representation of someone sitting in front of you. As sophisticated as the technology is, it vanishes, so you can focus on what’s most important. 

We’ve spent thousands of hours testing it at our own offices, and the results are promising. There’s also excitement from our lead enterprise partners, and we’re working with partners in health care and media to get early feedback. In pushing the boundaries of remote collaboration, we've made technical advances that will improve our entire suite of communications products. We look forward to sharing more in the months ahead.

A person having a conversation with someone over Project Starline.

Solving complex sustainability challenges

Another area of research is our work to drive forward sustainability. Sustainability has been a core value for us for more than 20 years. We were the first major company to become carbon neutral in 2007. We were the first to match our operations with 100% renewable energy in 2017, and we’ve been doing it ever since. Last year we eliminated our entire carbon legacy. 

Our next ambition is our biggest yet: operating on carbon free energy by the year 2030. This represents a significant step change from current approaches and is a moonshot on the same scale as quantum computing. It presents equally hard problems to solve, from sourcing carbon-free energy in every place we operate to ensuring it can run every hour of every day. 

Building on the first carbon-intelligent computing platform that we rolled out last year, we’ll soon be the first company to implement carbon-intelligent load shifting across both time and place within our data center network. By this time next year we’ll be shifting more than a third of non-production compute to times and places with greater availability of carbon-free energy. And we are working to apply our Cloud AI with novel drilling techniques and fiber optic sensing to deliver geothermal power in more places, starting in our Nevada data centers next year.

Investments like these are needed to get to 24/7 carbon-free energy, and it’s happening in Mountain View, California, too. We’re building our new campus to the highest sustainability standards. When completed, these buildings will feature a first-of-its-kind dragonscale solar skin, equipped with 90,000 silver solar panels and the capacity to generate nearly 7 megawatts. They will house the largest geothermal pile system in North America to help heat buildings in the winter and cool them in the summer. It’s been amazing to see it come to life.

(Left) Rendering of the new Charleston East campus in Mountain View, California; (Right) Model view with dragon scale solar skin.

A celebration of technology

I/O isn’t just a celebration of technology but of the people who use it, and build it — including the millions of developers around the world who joined us virtually today. Over the past year we’ve seen people use technology in profound ways: to keep themselves healthy and safe, to learn and grow, to connect, and to help one another through really difficult times. It’s been inspiring to see and has made us more committed than ever to being helpful in the moments that matter. 

I look forward to seeing everyone at next year’s I/O — in person, I hope. Until then, be safe and well.

Posted by Sundar Pichai, CEO of Google and Alphabet

I/O 2019: New features to help you develop, release, and grow your business on Google Play

Posted by Kobi Glick, Product Lead, Google Play

Play and #io19 logos with geometric shapes

Over the last 10 years, we’ve worked together to build an incredible ecosystem with more than 2.5 billion active users in over 190 countries. This would not be possible without you and all the fantastic apps and games you’ve built that entertain, help, and educate people around the world.

Every month, you upload more than 750,000 APKs and app bundles to the Play Console. We’ve been amazed by your enthusiasm, and it’s been our privilege to help you grow your business. This year, we want to help you go even further. So today at Google I/O, we're announcing new tools and features to help you develop, release, and grow your apps and games — many of them based on your feedback and suggestions.

Efficient, modular apps and customizable feature delivery

Last year we introduced Android's new publishing format, the Android App Bundle, and an entirely new dynamic delivery framework on Google Play. There are now over 80,000 apps and games using app bundles in production, with an average size savings of 20%. As a result of those savings, apps have seen up to 11% install uplift. As the future of app delivery, we’re excited to share these latest enhancements to the Android App Bundle.

Dynamic features are out of beta and available to all developers, including these new delivery options:

  • On-demand delivery — install features when they’re needed or in the background, instead of delivering them at install time, and reduce the size of your app (a minimal sketch of requesting an on-demand module follows this list).
  • Conditional delivery — control which parts of your app to deliver at the time of install based on the user’s country, device features, or minimum SDK version.
  • Instant experiences — now fully supported, so you only need to upload one artifact for your installed app and Google Play Instant experiences.
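
For the on-demand option, a minimal Kotlin sketch using the Play Core library might look like this; the "reports" module name is illustrative and would be defined as a dynamic feature module in your project:

    import android.content.Context
    import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
    import com.google.android.play.core.splitinstall.SplitInstallRequest

    // Requests installation of an on-demand dynamic feature module at runtime.
    fun installReportsModule(context: Context) {
        val splitInstallManager = SplitInstallManagerFactory.create(context)
        val request = SplitInstallRequest.newBuilder()
            .addModule("reports") // illustrative module name
            .build()
        splitInstallManager.startInstall(request)
            .addOnSuccessListener { sessionId -> /* optionally track progress via sessionId */ }
            .addOnFailureListener { exception -> /* fall back or surface an error */ }
    }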

During our beta program, many developers implemented interesting use cases with dynamic features. Netflix, for example, now delivers their customer support functionality as a dynamic feature to users who visit the support center. By making functionality available only to users who need it, Netflix reported a 33% reduction in app size. You can learn more in the video below.

Seamless internal testing and increased security

We heard you loud and clear: testing bundles is hard. But with the new internal app sharing, you can now share test builds in a matter of seconds. Just upload your app bundle to Google Play and get a download URL to share with your testers. You don’t need to worry about version codes, signing keys, or most other validations that your production releases need to conform to.

In addition to efficiency and modularity, the Android App Bundle also now offers increased security with the launch of app signing key upgrade for new installs. With this feature, you can upgrade the cryptographic strength of your signing key for new installs and their updates on Google Play. Many developers sign their apps with keys generated a long time ago, and this new feature is the only backwards-compatible way to increase their strength.

Easier for users to update

Although auto-updates reach many users, you told us it was still challenging to get some users to update your apps. Now that our new in-app updates API is in general availability, users will be able to update without ever leaving your app. During our early access program, many developers used our API to create a polished upgrade flow, resulting in a median acceptance rate of about 50%.

The API currently supports two flows:

  • The “immediate” flow is a full-screen user experience that guides the user from download to update before they can use your app.
  • The “flexible flow” allows users to download the update while continuing to use your app.
Two phones side by side: the first displaying the immediate update flow with a pop-up recommending an update, the second displaying the flexible update flow with a pop-up recommending an update.
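
As a rough illustration of the immediate flow described above, a hedged Kotlin sketch using the Play Core library follows; the request code is arbitrary:

    import android.app.Activity
    import com.google.android.play.core.appupdate.AppUpdateManagerFactory
    import com.google.android.play.core.install.model.AppUpdateType
    import com.google.android.play.core.install.model.UpdateAvailability

    private const val UPDATE_REQUEST_CODE = 1234 // arbitrary, illustrative request code

    // Checks whether an update is available and, if so, starts the immediate flow.
    fun checkForImmediateUpdate(activity: Activity) {
        val appUpdateManager = AppUpdateManagerFactory.create(activity)
        appUpdateManager.appUpdateInfo.addOnSuccessListener { info ->
            if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
                info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)
            ) {
                appUpdateManager.startUpdateFlowForResult(
                    info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE
                )
            }
        }
    }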

Stronger decision-making with new Google Play Console data

The right data can help you improve your app performance and grow your business. That’s why we’re excited to tell you about new metrics and insights that will help you better measure your app health and analyze your performance.

  • Core metrics refresh — better understand your acquisition and churn, including data on returning users, automatic change analysis, install method (such as pre-installs and peer-to-peer sharing), metric benchmarking, and the ability to aggregate and dedupe over periods from hours to quarters.
  • App size metrics and reports — gain insights about your app size in Android vitals, including download size, size on device (at install time), changes compared to peers over time, and tailored optimization recommendations.
  • Developer-selected peer benchmarks — create a custom set of 8-12 peers to compare your app to, then see the median value of the set and the difference between your app and its peers for Android vitals data as well as for public metrics like your rating.
  • Market insights with curated peersets — in the coming months, you’ll also be able to compare your growth against an automatically generated, curated peerset of around 100 apps similar to yours for business-sensitive metrics like conversion rate and uninstall rate.
Android Vitals Overview dashboard on Peer group screen

Making it easier to respond to and improve user reviews

We’re also making big changes to another key source of performance data: your user reviews. Many of you told us that you want a rating that reflects a more current version of your app, not what it was years ago — and we agree. So instead of a lifetime cumulative value, your Google Play Store rating will be recalculated to give more weight to your most recent ratings. Users won’t see the updated rating in the Google Play Store until August, but you can preview your new rating in the Google Play Console today.

Every day, developers respond to more than 100,000 reviews in the Play Console, and when they do, we’ve seen that users update their rating by +0.7 stars on average. So in addition to the ratings change, we're making it easier to respond to reviews with suggested replies. When you go to respond to a user, you’ll see three suggested replies which have been created automatically based on the content of the review. You can choose to send one as-suggested, customize a suggestion for more personalization, or create your own message from scratch. Suggested replies are available in English now with additional languages coming later.

Google user review with suggested replies in Beta.

Better Google Play Store listing targeting and customization

Your store listing is where users come to learn more about your app or game and decide whether to install. It’s important real estate, so we’re releasing new features that let you optimize your Google Play Store listing to address different moments in the user lifecycle.

  • Following the launch of custom listings by country at GDC, we’re announcing a new early access program that lets you create custom listings by install state. Increase acquisition, retention, and re-engagement by providing customized marketing messages for users who haven’t installed your app, users who have your app, and users who have uninstalled your app. If you’re interested in joining the program, sign up here.
  • Now that pre-registration is available to all developers, we’re launching two new features to help you make the most of it: custom listing pages for pre-registration and pre-registration rewards, which let you incentivize players for signing up for notifications before you launch.

Learn more about these and other Google Play features at Google I/O. Join us live or watch later on the Android Developers YouTube channel.

You can also take your skills and knowledge to the next level with our e-learning courses on Google Play’s Academy for App Success, and sign up for our newsletter to stay up to date with our latest features and updates.


Flutter: a Portable UI Framework for Mobile, Web, Embedded, and Desktop

Posted by the Flutter Team

Today marks an important milestone for the Flutter framework, as we expand our focus from mobile to incorporate a broader set of devices and form factors. At I/O, we’re releasing our first technical preview of Flutter for web, announcing that Flutter is powering Google’s smart display platform including the Google Home Hub, and delivering our first steps towards supporting desktop-class apps with Chrome OS.

From Mobile to Multi-Platform

For a long time, the Flutter team mission has been to build the best framework for developing mobile apps for iOS and Android. We believe that mobile development is ripe for improvement, with developers today forced to choose between building the same app twice for two platforms, or making compromises to use cross-platform frameworks. Flutter hits the sweet spot of enabling a single codebase to deliver beautiful, fast, tailored experiences with high developer productivity for both platforms, and we’ve been excited to see how our early efforts have flourished into one of the most popular open source projects.

As we started to home in on our 1.0 release last year, we began experimenting with broadening the scope of Flutter to other platforms. This was triggered both by internal teams within Google who are increasingly relying on Flutter and by the latent potential of the Dart platform for delivering portable experiences. In particular, a small team who were already building a web framework for Dart for internal usage started an exploratory project (codename “Hummingbird”) to evaluate the technical merits of porting the Flutter engine to support the standards-based web.

The results of this project were startling, thanks in large part to the rapid progress in web browsers like Chrome, Firefox, and Safari, which have pervasively delivered hardware-accelerated graphics, animation, and text as well as fast JavaScript execution. Within a few months of beginning the project, we had the core Flutter framework primitives working, and soon after we had demos running on mobile and desktop browsers. Along with Dart’s long pedigree of compiling for the web, this proved that we could also bring the Flutter framework and apps to run on the web.

In parallel, the core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.

A Portable UI Framework for All Screens

Flutter Mobile, Web, Desktop, and Embedded

It’s worth pausing for a moment to acknowledge the business potential of a high-performance, portable UI framework that can deliver beautiful, tailored experiences to such a broad variety of form factors from a single codebase.

For startups, the ability to reach users on mobile, web, or desktop through the same app lets them reach their full audience from day one, rather than having limits due to technical considerations. Especially for larger organizations, the ability to deliver the same experience to all users with one codebase reduces complexity and development cost, and lets them focus on improving the quality of that experience.

With support for mobile, desktop, and web apps, our mission expands: we want to build the best framework for developing beautiful experiences for any screen.

Flutter for Web

This week, we are releasing the first technical preview of Flutter for the web. While this technology is still in development, we are ready for early adopters to try it out and give us feedback. Our initial vision for Flutter on the web is not as a general purpose replacement for the document experiences that HTML is optimized for; instead we intend it as a great way to build highly interactive, graphically rich content, where the benefits of a sophisticated UI framework are keenly felt.

To showcase Flutter for the web, we worked with the New York Times to build a demo. In addition to world-class news coverage, the New York Times is famous for its crossword and other puzzle games. Since avid puzzlers want to play on whatever device they’re using at the time, their development team was attracted to Flutter as a potential solution for their needs. Discovering that they could reach the web with the same code was a huge boon. At Google I/O this week, you can get a sneak peek of their newly refreshed KENKEN puzzle game, which runs with the same code on Android, iOS, web, Mac, and Chrome OS.

ken-gratulations puzzle

Here’s what Eric von Coelln, Executive Director of Puzzles at the New York Times has to say about their experiences with Flutter:

"The New York Times Crossword has more than 400,000 stand-alone subscriptions and is a daily ritual for puzzle solvers. Along with the Crossword, we’ve grown our portfolio of digital puzzles that reaches more than two million solvers each month.

We were already beginning to explore Flutter as a potential solution to the challenge of quickly developing engaging, high-quality mobile experiences. Now the addition of being able to publish to web makes Flutter an even more appealing option to quickly deploy across all of our user platforms. This update of our old Flash-based KenKen game into a multi-platform playable experience is something we’re excited to bring to our solvers this year.”

There’s lots more to say about Flutter for web than we have space for here, so check out the dedicated article about Flutter for web on the Flutter blog.

At this early stage, we’re eager to get your feedback on how you’d like to use Flutter for web. We expect to rapidly evolve the code, with a particular focus on performance, and harmonizing the codebase with the rest of the Flutter project.

Flutter for Mobile Devices

The core Flutter framework also receives an upgrade this week, with the immediate availability of Flutter 1.5 in our stable channel. Flutter 1.5 includes hundreds of changes in response to developer feedback, including updates for new App Store iOS SDK requirements, updates to the iOS and Material widgets, engine support for new device types, and Dart 2.3 featuring new UI-as-code language features.

As the framework itself matures, we’re investing in building out the supporting ecosystem. The architectural model of Flutter has always prioritized a small core framework, supplemented by a rich package community. In the last few months, Google has contributed production-quality packages for web views, Google Maps, and Firebase ML Vision, and this week, we’re adding initial support for in-app payments. And with over 2,000 open source packages available for Flutter, there are options available for most scenarios.

One particularly exciting project that we’re announcing this week at I/O is the ML Kit Custom Image Classifier. Built using Flutter and Firebase, it offers an easy-to-use app-based workflow for creating custom image classification models. You can collect training data using the phone's camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.

Flutter ML Kit: create datasets, collaborate to collect data, train model, run inference

Flutter continues to grow in popularity and adoption. A growing roster of demanding customers including eBay, Sonos, Square, Capital One, Alibaba and Tencent are developing apps with Flutter. And they’re having fun! Here’s what Larry McKenzie, a senior developer at eBay had to say about Flutter:

“Flutter is fast! Features that once took us multiple days to implement can be finished in a single day. Many problems we used to spend a lot of time on, simply no longer occur. Our team can now focus on creating more polished user experiences and delivering functionality. Flutter is enabling us to exceed expectations!”

More broadly, LinkedIn recently conducted a study that showed Flutter is the single fastest-growing skill among software engineers, based on site members claiming it on their profile over the last 12 months. And in the recent 2019 StackOverflow developer survey, Flutter was listed as one of the most-loved developer frameworks.

Flutter for Desktop

Flutter is also being used on the desktop. For some months, we’ve been working on desktop support as an experimental project. But now we’re graduating this work into the core Flutter engine, integrating it directly into the mainline repo. While these targets are not production-ready yet, we have published early instructions for developing Flutter apps to run on Mac, Windows, and Linux.

Another quickly growing Flutter platform is Chrome OS, with millions of Chromebooks being sold every year, particularly in education. Chrome OS is a perfect environment for Flutter, both for running Flutter apps, and as a developer platform, since it supports execution of both Android and Linux apps. With Chrome OS, you can use Visual Studio Code or Android Studio to develop a Flutter app that you can test and run locally on the same device without an emulator. You can also publish Flutter apps for Chrome OS to the Play Store, where millions of others can benefit from your creation.

Flutter for Embedded Devices

As the final example of Flutter’s portability, we offer Flutter embedded on other devices. We recently published samples that demonstrate Flutter running directly on smaller-scale devices like Raspberry Pi, and we offer an embedding API for Flutter that allows it to be used in scenarios including home, automotive and beyond.

Perhaps one of the most pervasive embedded platforms where Flutter is already running is on the smart display operating system that powers the likes of Google Home Hub.

Within Google, some Google-built features for the Smart Display platform are powered by Flutter today. And the Assistant team is excited to continue to expand the portfolio of features built with Flutter for the Smart Display in the coming months; the goal this year is to use Flutter to drive the overall system UI.

Other Resources

We often get asked by developers how they can get started with Flutter. We are pleased today to announce a comprehensive new training course for Flutter, built by The App Brewery, authors of the highest-rated iOS training course on Udemy. Their new course has over thirty hours of content for Flutter, including videos, demos and labs, and with Google’s sponsorship, they are announcing today a time-limited discount of this course from the retail price of $199 to just $10.

Many developers are creating inspiring apps with Flutter. In the run-up to Google I/O, we ran a contest called Flutter Create to encourage developers to see what they could build with Flutter in 5KB or less of Dart code. We had over 750 unique entries from around the world, with some amazing examples that pushed what we imagine would be possible in such a small size.

Today, we’re announcing the winners, which can be found on flutter.dev/create. Congratulations to the overall winner, Zebiao Hu, who wins a fully-loaded iMac Pro worth over $10,000!

Flutter is no longer a mobile framework, but a multi-platform framework that can help you reach your users wherever they are. We can’t wait to see what you’ll build with Flutter on the web, desktop, mobile, and beyond!

Creating AR Experiences for I/O: Our Process

Posted by Karin Levi, Product Marketing, ARCore

A few weeks ago at Google I/O we released a major update to ARCore, Google's AR development platform. We added new APIs like Cloud Anchors, which enable multi-user, collaborative AR experiences, and Augmented Images, which lets apps detect 2D images and bring them to life with 3D content. All of these updates are going to change the way we use AR today and enable developers to create richer, more immersive AR apps.

With these new capabilities, we decided to put our platform to the test. So we built real experiences to showcase how these all come to life. All demos were presented at the I/O AR & VR sandbox area. We open sourced them to make sure you can see how simple it is to build these experiences. We're pretty happy with how they turned out and would love to share some learnings and insights from behind the scenes.

Light Board - Multiplayer game

Light Board is an AR multiplayer tabletop game where two players on floating game boards launch colored projectiles at each other.

While building Light Board, it was important for us to keep in mind who the end users are. We wanted it to be a simple, fun game for developers to try out while visiting the I/O sandbox. Developers would only have a couple of minutes to play while passing through, so it needed to let players (even non-gamers) pick it up and play with very little setup.

The artwork for Light Board was a major focus. Our mission for the look of the game was to align with the design and decor of I/O 2018 so that our app would feel like an extension of everything the attendees saw around them. As a result, our design philosophy had three goals: bright accent colors, simple graphic shapes, and natural physical materials.

Left: Design for AR/VR Sandbox at I/O 2018. Right: Key art for Light Board game boards

The artwork was created in Maya and Cinema 4D. We created physically based materials for our models using Substance Painter. Just as continuous iteration is crucial for engineering, it is also important when creating art assets. With that in mind, we kept careful track of our content pipeline, even for this relatively simple project. This allowed us to quickly try out different looks and board styles before settling on our final design.

On the engineering front we selected the Unity game engine as our dev environment. Unity gives us a couple of important advantages. First, it is easy to get great looking 3D graphics up and running right away. Second, the engine component is already complete, so we could immediately start iterating on gameplay code. As with the artwork, this allowed us to test gameplay options before we made a final decision. Additionally, Unity gave us support for both Android and iOS with only a little extra work.

To handle the multiplayer aspect we used Firebase Realtime Database. We were concerned with network performance at the event, and felt that the persistent nature of a database would make it more tolerant of poor networks. As it turned out, it worked very well and we got the ability to quit and rejoin games for free!
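
To make the pattern concrete, here is a minimal Kotlin sketch of how shared game state can be synchronized through the Firebase Realtime Database. The actual Light Board demo is written in C# inside Unity, so this is only an illustration of the approach; the GameState fields and the "games" path are placeholders, not the demo's real schema.

    import com.google.firebase.database.DataSnapshot
    import com.google.firebase.database.DatabaseError
    import com.google.firebase.database.FirebaseDatabase
    import com.google.firebase.database.ValueEventListener

    // Illustrative shared state for a two-player board game; the real
    // Light Board demo uses its own Unity/C# implementation and schema.
    data class GameState(
        val player1Score: Int = 0,
        val player2Score: Int = 0,
        val lastShotAngle: Double = 0.0
    )

    class GameSync(roomId: String) {
        // Both players read and write the same node, so a player who
        // drops off the network can rejoin and pick up the saved state.
        private val roomRef = FirebaseDatabase.getInstance()
            .getReference("games")
            .child(roomId)

        fun publish(state: GameState) {
            roomRef.setValue(state)
        }

        fun listen(onUpdate: (GameState) -> Unit) {
            roomRef.addValueEventListener(object : ValueEventListener {
                override fun onDataChange(snapshot: DataSnapshot) {
                    snapshot.getValue(GameState::class.java)?.let(onUpdate)
                }

                override fun onCancelled(error: DatabaseError) {
                    // The persistent database tolerates flaky event Wi-Fi;
                    // a cancelled listener can simply be re-attached.
                }
            })
        }
    }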

We had a lot of fun building Light Board and we hope people can use it as an example of how easy it can be to not only build AR apps, but to use really cool features like Cloud Anchors. Please check out our open source repo and give Light Board a try!

Just a line - Draw with your friends

In March, we released Just a Line, an Android app that lets you draw in the air with your phone. It's a simple experiment meant to showcase the power of ARCore. At Google I/O, we added Cloud Anchors to the app so that two people can draw at once in the same space, even if one of them is using Android and the other iOS.

Both apps were built natively: the Android version was written in Android Studio, and the iOS version was built in Xcode. ARCore's Cloud Anchors enable Just a Line to pair two phones, allowing users to draw simultaneously in a shared space. Pairing works across Android and iOS devices, and drawings are synchronized live through a Firebase Realtime Database. You can find the open-source code for iOS here and for Android here.
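
For a rough idea of what the pairing flow involves at the API level, here is a hedged Kotlin sketch of the ARCore Cloud Anchor calls: one device hosts a local anchor to obtain a cloud anchor ID, and the other resolves that ID back into an anchor. The shareAnchorId callback is hypothetical; in Just a Line the ID is exchanged through a Firebase Realtime Database, and the app's actual code (linked above) is structured differently.

    import com.google.ar.core.Anchor
    import com.google.ar.core.Anchor.CloudAnchorState
    import com.google.ar.core.Config
    import com.google.ar.core.Session

    // Enable Cloud Anchors on the ARCore session before using them.
    fun enableCloudAnchors(session: Session) {
        val config = Config(session)
        config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
        session.configure(config)
    }

    // Device A: host a local anchor so it gets a cloud anchor ID.
    fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
        session.hostCloudAnchor(localAnchor)

    // Device B: resolve the same anchor from the shared ID.
    fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
        session.resolveCloudAnchor(cloudAnchorId)

    // Poll the returned anchor each frame until hosting completes.
    fun checkHosting(hostedAnchor: Anchor, shareAnchorId: (String) -> Unit) {
        when (hostedAnchor.cloudAnchorState) {
            CloudAnchorState.SUCCESS ->
                // e.g. write the ID to a shared Realtime Database room so
                // the other device can call resolveAnchor() with it.
                shareAnchorId(hostedAnchor.cloudAnchorId)
            CloudAnchorState.TASK_IN_PROGRESS -> Unit // keep waiting
            else -> Unit // an error state; surface it to the user
        }
    }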

Illusive Images - Art exhibition comes to life

"Illusive Images" demo is an augmented gallery consisting of 3 artworks, each exploring a different augmented image use case and user experience. As one walks from side to side, around the object, or gazes in a specific direction, 2D artworks are married with 3D, inviting the viewer to enter into the space of the artwork spanning well beyond the physical frame.

Because of the visual design of our augmented images, we experimented a lot with creating image databases with varying numbers of trackable features. To get the best results, we iterated quickly by resizing the canvas for each artwork and by adjusting and stretching the brightness and contrast levels. These variations helped us arrive at the best-performing image without compromising design intent.

The app was built in Unity with ARCore, with the majority of assets created in Cinema 4D. Mograph animations were imported into Unity as FBX files and driven entirely by the position of the user in relation to the artwork. An example project can be found here.
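
Although this demo was built in Unity, the underlying Augmented Images workflow on Android looks roughly like the Kotlin sketch below: build an AugmentedImageDatabase from your artwork bitmaps, attach it to the session config, and react when ARCore starts tracking one of the images. The image names, bitmaps, and attach3DContent callback are placeholders.

    import android.graphics.Bitmap
    import com.google.ar.core.AugmentedImage
    import com.google.ar.core.AugmentedImageDatabase
    import com.google.ar.core.Config
    import com.google.ar.core.Frame
    import com.google.ar.core.Session
    import com.google.ar.core.TrackingState

    // Build a database from the artwork bitmaps; names are placeholders.
    fun configureImageDatabase(session: Session, artworks: Map<String, Bitmap>) {
        val database = AugmentedImageDatabase(session)
        for ((name, bitmap) in artworks) {
            // Each image is scored on how well it can be tracked; tweaking
            // canvas size, brightness, and contrast changes that score.
            database.addImage(name, bitmap)
        }
        val config = Config(session)
        config.augmentedImageDatabase = database
        session.configure(config)
    }

    // Each frame, check which artworks ARCore is currently tracking.
    fun onFrame(frame: Frame, attach3DContent: (AugmentedImage) -> Unit) {
        for (image in frame.getUpdatedTrackables(AugmentedImage::class.java)) {
            if (image.trackingState == TrackingState.TRACKING) {
                attach3DContent(image) // anchor the 3D scene to image.centerPose
            }
        }
    }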

To make your development experience easier, we open sourced all the demos our team built. We hope you find this useful! You can also visit our website to learn more and start building AR experiences today.

Browse the updated Google I/O 2018 schedule and reserve seats for Sessions

The Google I/O 2018 schedule just got a big update!

Find additional Sessions and Codelabs, as well as new App Reviews, Office Hours, and After Hours events. Times and locations for all events are also now available, so you can start planning accordingly. New this year: we'll have a series of Keynote Sessions, which take a broader look at how the technology we build can impact the world around us! The I/O schedule is subject to change until the event, so check back often, and keep an eye out for scheduled Meetup events taking place at the Community Lounge to help you connect and network with other developers.

Attending in person

To help make it easier to attend your favorite talks and minimize lines, confirmed attendees will be able to reserve seats for Sessions in advance of I/O - as long as they’re signed in with the same email address used to register for the 2018 event. A portion of seats will still be available first-come, first-served onsite.

To reserve a seat:

  • Navigate to google.com/io/schedule, sign in, and click on the ticket icon for each Session you want to reserve.
  • If a particular Session has already reached the reservation capacity, you'll see an hourglass icon instead. If you've joined a waitlist and a spot opens up, we'll automatically change your reservation status to reserved.
  • You can reserve as many Sessions as you'd like per day, but only one reservation/waitlist per time slot is allowed.
  • Reservations will remain open until 1 hour before the start time for each Session.
  • NOTE: Reservations are only available for Sessions, not other event types listed on the schedule.

Reserve seats via the main Schedule page…

…Or via the Session detail pages.

Anyone who's signed in can also star all event types listed on the schedule as a way to easily find them later on or on other devices.

In addition to more than 160 technical and Keynote Sessions, onsite guests will have the chance to explore various Sandbox domes, covering product areas like Android, Assistant, Design, IoT, and Web, to name just a few. Sandboxes are dedicated spaces to learn and play with our latest products and platforms via interactive demos, physical installations, and more.

You can also take advantage of 100+ Office Hours and App Reviews. Office Hours gives you a chance to meet one-on-one with Google experts to ask all your technical questions, and App Reviews will give you the opportunity to receive advice and tips on your specific app-related projects.

Don't forget to save time in your schedule for Codelabs. Here, you'll have everything you need to learn about the latest and greatest Google technologies via self-paced tutorials, or bring your own machine and take your work home with you. Google staff will be on hand for helpful advice and to provide direction if you get stuck.

Joining remotely?

Don't worry - you're not alone and you won't miss a thing! We'll be livestreaming the majority of our Keynotes and Sessions from Shoreline. If you prefer to watch I/O with your developer community, find an I/O Extended viewing party near you.

We'll also let you experience I/O firsthand via our I/O Guides who will be touring the venue and giving you eyes on the ground.

I/O is only 27 days away! We'll continue to share updates in the upcoming weeks to help you get ready and make the most of this year's event. Stay tuned!

What’s new from Firebase at Google I/O 2017

Originally posted on the Firebase Blog by Francis Ma, Firebase Group Product Manager

It's been an exciting year! Last May, we expanded Firebase into our unified app platform, building on the original backend-as-a-service and adding products to help developers grow their user base, as well as test and monetize their apps. Hearing from developers like Wattpad, who built an app using Firebase in only 3 weeks, makes all the hard work worthwhile.

We're thrilled by the initial response from the community, but we believe our journey is just getting started. Let's talk about some of the enhancements coming to Firebase today.

Integrating with Fabric

In January, we announced that we were welcoming the Fabric team to Firebase. Fabric initially grabbed our attention with their array of products, including the industry-leading crash reporting tool, Crashlytics. As we got to know the team better, we were even more impressed by how closely aligned our missions are: to help developers build better apps and grow successful businesses. Over the last several months, we've been working closely with the Fabric team to bring the best of our platforms together.

We plan to make Crashlytics the primary crash reporting product in Firebase. If you don't already use a crash reporting tool, we recommend you take a look at Crashlytics and see what it can do for you. You can get started by following the Fabric documentation.

Phone authentication comes to Firebase

Phone number authentication has been the biggest request for Firebase Authentication, so we're excited to announce that we've worked with the Fabric Digits team to bring phone auth to our platform. You can now let your users sign in with their phone numbers, in addition to traditional email/password or identity providers like Google or Facebook. This gives you a comprehensive authentication solution no matter who your users are or how they like to log in.
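
As a minimal sketch of what this looks like with the Firebase Auth Android SDK, the Kotlin snippet below starts phone number verification and signs the user in with the resulting credential. The activity, phone number, and error handling are placeholders.

    import android.app.Activity
    import com.google.firebase.FirebaseException
    import com.google.firebase.auth.FirebaseAuth
    import com.google.firebase.auth.PhoneAuthCredential
    import com.google.firebase.auth.PhoneAuthProvider
    import java.util.concurrent.TimeUnit

    // Start verification; Firebase sends an SMS code (or auto-verifies on
    // some devices) and calls back with a credential you can sign in with.
    fun signInWithPhone(activity: Activity, phoneNumber: String) {
        val callbacks = object : PhoneAuthProvider.OnVerificationStateChangedCallbacks() {
            override fun onVerificationCompleted(credential: PhoneAuthCredential) {
                // Instant verification or auto-retrieval succeeded.
                FirebaseAuth.getInstance().signInWithCredential(credential)
            }

            override fun onVerificationFailed(e: FirebaseException) {
                // e.g. invalid number or quota exceeded; show an error.
            }

            override fun onCodeSent(
                verificationId: String,
                token: PhoneAuthProvider.ForceResendingToken
            ) {
                // Ask the user for the SMS code, then build the credential:
                // PhoneAuthProvider.getCredential(verificationId, smsCode)
            }
        }

        PhoneAuthProvider.getInstance().verifyPhoneNumber(
            phoneNumber,        // e.g. "+16505551234" (placeholder)
            60L, TimeUnit.SECONDS,
            activity,
            callbacks
        )
    }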

At the same time, the Fabric team will be retiring the Digits name and SDK. If you currently use Digits, over the next couple of weeks we'll be rolling out the ability to link your existing Digits account with Firebase and swap the Firebase SDK in for the Digits SDK. Go to the Digits blog to learn more.

Introducing Firebase Performance Monitoring

We recognize that poor performance and instability are among the top reasons users leave bad ratings on an app, or churn altogether. As part of our effort to help you build better apps, we're pleased to announce the beta launch of Performance Monitoring.

Firebase Performance Monitoring is a new free tool that helps you understand when your user experience is being impacted by poorly performing code or challenging network conditions. You can learn more and get started with Performance Monitoring in the Firebase documentation.
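
Alongside the automatic traces, you can instrument your own code paths. The Kotlin sketch below shows a custom trace; the trace, metric, and attribute names are made up, and the exact metric APIs may vary slightly between SDK versions.

    import com.google.firebase.perf.FirebasePerformance
    import com.google.firebase.perf.metrics.Trace

    // Measure how long a specific code path takes, e.g. loading a feed.
    // "load_feed", "items_loaded", and "network" are illustrative names.
    fun loadFeedWithTrace(loadFeed: () -> Int) {
        val trace: Trace = FirebasePerformance.getInstance().newTrace("load_feed")
        trace.start()

        val itemCount = loadFeed()
        trace.incrementMetric("items_loaded", itemCount.toLong())
        trace.putAttribute("network", "wifi") // slice the data in the console

        trace.stop()
    }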

More robust analytics

Analytics has been core to the Firebase platform since we launched last I/O. We know that understanding your users is the number one way to make your app successful, so we're continuing to invest in improving our analytics product.

First off, you may notice that you're starting to see the name "Google Analytics for Firebase" around our documentation. Our analytics solution was built in conjunction with the Google Analytics team, and the reports are available both in the Firebase console and the Google Analytics interface. So, we're renaming Firebase Analytics to Google Analytics for Firebase, to reflect that your app analytics data are shared across both.

For those of you who monetize your app with AdMob, we've started sharing data between the two platforms, helping you understand the true lifetime value (LTV) of your users, from both purchases and AdMob revenue. You'll see these new insights surfaced in the updated Analytics dashboard.

Many of you have also asked for analytics insights into custom events and parameters. Starting today, you can register up to 50 custom event parameters and see their details in your Analytics reports. Learn more about custom parameter reporting.
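
For example, logging a custom event with custom parameters from an Android app looks roughly like the Kotlin sketch below; the event and parameter names are placeholders that you would register in the console for custom parameter reporting.

    import android.content.Context
    import android.os.Bundle
    import com.google.firebase.analytics.FirebaseAnalytics

    // Log a custom event with custom parameters; "level_complete",
    // "level_name", and "stars" are placeholder names.
    fun logLevelComplete(context: Context, levelName: String, stars: Int) {
        val params = Bundle().apply {
            putString("level_name", levelName)
            putLong("stars", stars.toLong())
        }
        FirebaseAnalytics.getInstance(context).logEvent("level_complete", params)
    }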

Firebase for all - iOS, games, and open source

Firebase's mission is to help all developers build better apps. In that spirit, today we're announcing expanded platform and vertical support for Firebase.

First of all, as Swift has become the preferred language for many iOS developers, we've updated our SDK to handle Swift language nuances, making Swift development a native experience on Firebase.

We've also improved Firebase Cloud Messaging by adding support for token-based authentication for APNs, and greatly simplifying the connection and registration logic in the client SDK.

Second, we've heard from our game developer community that one of the most important stats you monitor is frames per second (FPS). So, we've built Game Loop support & FPS monitoring into Test Lab for Android, allowing you to evaluate your game's frame rate before you deploy. Coupled with the addition of Unity plugins and a C++ SDK, which we announced at GDC this year, we think that Firebase is a great option for game developers. To see an example of a game built on top of Firebase, check out our Mecha Hamster app on GitHub.
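
As a rough sketch of how a game opts into these runs, the Kotlin snippet below shows an Activity detecting a Test Lab game-loop launch, assuming the TEST_LOOP intent action and "scenario" extra from the Test Lab game-loop contract; runGameLoop and startNormalGame are placeholders.

    import android.app.Activity
    import android.os.Bundle

    // A game Activity can detect a Test Lab game-loop run by inspecting
    // the launch intent and then playing a scripted demo scenario while
    // Test Lab measures FPS.
    class GameActivity : Activity() {

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)

            if (intent.action == "com.google.intent.action.TEST_LOOP") {
                // Test Lab passes which scripted scenario to run.
                val scenario = intent.getIntExtra("scenario", 0)
                runGameLoop(scenario) // play a fixed demo loop, then finish()
            } else {
                startNormalGame()
            }
        }

        private fun runGameLoop(scenario: Int) { /* placeholder */ }
        private fun startNormalGame() { /* placeholder */ }
    }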

Finally, we've taken a big first step towards open sourcing our SDKs. We believe in open source software, not only because transparency is an important goal, but also because we know that the greatest innovation happens when we all collaborate. You can view our new repos on our open source project page and learn more about our decision in this blog post.

Dynamic Hosting with Cloud Functions for Firebase

In March, we launched Cloud Functions for Firebase, which lets you run custom backend code in response to events triggered by Firebase features and HTTP requests. This lets you do things like send a notification when a user signs up or automatically create thumbnails when an image is uploaded to Cloud Storage.

Today, in an effort to better serve our web developer community, we're expanding Firebase Hosting to integrate with Cloud Functions. This means that, in addition to serving static assets for your web app, you can now serve dynamic content generated by Cloud Functions through Firebase Hosting. For those of you building progressive web apps, Firebase Hosting + Cloud Functions allows you to go completely serverless. You can learn more by visiting our documentation.

Firebase Alpha program and what's next

Our goal is to build the best developer experience: easy-to-use products, great documentation, and intuitive APIs. And the best resource that we have for improving Firebase is you! Your questions and feedback continuously push us to make Firebase better.

In light of that, we're excited to announce a Firebase Alpha program, where you will have the opportunity to test the cutting edge of our products. Things might not be perfect (in fact, we can almost guarantee they won't be), but by participating in the alpha community, you'll help define the future of Firebase. If you want to get involved, please register your interest in the Firebase Alpha form.

Thank you for your support, enthusiasm, and, most importantly, feedback. The Firebase community is the reason that we've been able to grow and improve our platform at such an incredible pace over the last year. We're excited to continue working with you to build simple, intuitive products for developing apps and growing mobile businesses. To get started with Firebase today, visit our newly redesigned website. We're excited to see what you build!

Get ready for Google I/O 2017

Posted by Mónica Bagagem, Product Marketing Manager
We’re excited to be hosting Google I/O 2017 next week at the Shoreline Amphitheatre! The agenda for May 17-19 is packed with rich, technical content. Here are some tips to help you make the most of it.

Attending in person?

Everyone is guaranteed a spot for the keynotes but seating will be pre-assigned on a first-come, first-served basis during badge pick-up. Your seating section will be noted on your badge. Badge pick-up starts on Tuesday, May 16th, between 7AM - 7PM PDT at the Shoreline Amphitheatre. Plan to come by early to get the best seats!
Sessions start at 2PM, after the Developer Keynote ends, and are roughly 40 minutes in length. To help make it easier for you to attend your favorite talks and minimize lines, you can reserve seats for sessions now via our web app, Android app, and iOS app using your Google I/O registration email address. Additionally, App Reviews and select Sandbox demos will be reservable onsite on a first-come, first-served basis at the beginning of each day.
Beyond attending technical Sessions, you’ll have the opportunity to check out our latest product demos and speak directly with Google engineers throughout the Sandbox space; during Codelabs where you can complete self-paced tutorials; and at Office Hours where you can get specific questions answered 1:1 with Googlers.
Remember to save some energy for the evening! On Day 1, we’ll host an After Hours Block Party from 7-10PM. It will include dinner, drinks, and lots of fun, interactive experiences throughout the Sandbox space: our very own comedy club, an international food market & pizza party, several musical performances, a VR drive-in, a Museum of Developer Art, to name just a few! On Day 2, we’ll have an After Hours Concert from 8-10PM (don’t worry, we’ll feed you dinner, too!). Stay tuned - we’ll be announcing the talent closer to I/O.
Don’t forget to check the Mountain View weather forecast for each day; we recommend bringing a jacket for the evening festivities as it can get chilly after dark. Although all Sessions and Sandboxes will take place in climate-controlled structures, Shoreline Amphitheatre is an outdoor venue, so come prepared for whatever Mother Nature might have in store!
Finally, you can find directions, shuttle schedules, biking, parking, and carpooling info here.

Attending remotely?

Even if you’re not at Shoreline, you can still participate in I/O from afar! Here’s how:
  • I/O Extended: Find an I/O Extended event near you to watch the keynotes with your community, participate in hackathons, codelabs, and much more.
  • Livestream: Tune into the livestream throughout the 3 day festival on desktop and mobile.
  • I/O Live Widget: If you want to bring the livestream and the #io17 social conversation to your audience, you can customize and embed our I/O Live widget on your site or blog.
  • I/O Guide: Follow our Guide, Timothy Jordan, as he tours the venue and gets the inside scoop. You can find him on any of our livestream channels throughout the event, in-between sessions.
  • #io17request: Between May 17-19, send us your questions about I/O via English-language tweets that include the #io17request hashtag. A team of Googlers across Android, Chrome, Assistant, VR, Machine Learning, and more will track down answers to your burning questions.
  • I/O in photos: Be sure to follow our real-time I/O photo album from Shoreline!

Check out our FAQ page if you need more info and join the conversation at #io17. See you veryyyyy soon!