Tag Archives: Announcements

What’s new from Android, at Android Dev Summit ‘22

Posted by Matthew McCullough, Vice President, Product Management, Android Developer

Just now, we kicked off the first day of Android Dev Summit in the Bay Area, where my team and I covered a number of ways we’re helping you build excellent experiences for users with Modern Android Development, and how to extend those apps across the many devices Android has to offer: every screen size, from the one on your wrist to large screens like tablets and foldables.

Here’s a recap of what we covered, and don’t forget to watch the full keynote!

Modern Android Development: Compose October ‘22

A few years ago, we introduced Modern Android Development, or MAD: a set of libraries, tools, services, and guidance spanning Android Studio, Kotlin, the Jetpack libraries, and powerful Google & Play services. Our goal is to make it faster and easier for you to build high-quality apps across all Android devices.

For building rich, beautiful UIs, we introduced Jetpack Compose several years ago; it is our recommended UI framework for new Android applications.

We’re introducing a Gradle Bill of Materials (BOM) that specifies the stable version of each Compose library. The first BOM release, Compose October ‘22, contains Material Design 3 components, lazy staggered grids, variable fonts, pull to refresh, snapping in lazy lists, drawing text in a canvas, URL annotations in text, hyphenation, and LookaheadLayout. The team at Lyft has benefited from using Compose, sharing that “over 90% of all new feature code is now developed in Compose.”
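In practice, adopting the BOM means importing it as a platform dependency and dropping the version numbers from individual Compose artifacts. Here is a minimal Gradle Kotlin DSL sketch; the BOM coordinate shown corresponds to this first release, but verify the current version mapping before copying it:

```kotlin
dependencies {
    // Import the Compose BOM; it pins a consistent stable version
    // for every Compose library declared below.
    implementation(platform("androidx.compose:compose-bom:2022.10.00"))

    // No versions needed on individual artifacts; the BOM supplies them.
    implementation("androidx.compose.material3:material3")
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.foundation:foundation")
}
```

Upgrading the single BOM version then moves every Compose library forward in lockstep.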

We want Compose to help you take advantage of Android’s entire ecosystem of devices. Compose for Wear OS hit its 1.0 stable release several weeks ago, making it the recommended way to build UI for Wear. Today we announced that we’re growing the offering with the first alpha release of Compose for Android TV. Components like the featured carousel and immersive list are already available, with more coming soon. So if you're learning Android or starting a new app, Jetpack Compose is ready for you!

Modern Android Development comes to life in Android Studio, our official IDE that provides powerful tools for building apps on every type of Android device. Today, we’re releasing a number of new features for you to test out, including updated templates that are Compose by default and feature Material 3, Live Edit on by default for Compose, Composition Tracing, Android SDK Upgrade Assistant, App Quality Insights Improvements and more. Download the latest preview version of Android Studio Flamingo to try out all the features and to give us feedback.


Moving image of Android and Jetpack updates with customer feedback

Wear OS: the time is now!

A key device that users are turning to is the smallest and most personal — the watch. We launched our joint platform – Wear OS – with Samsung just last year, and this year we have seen 3X as many device activations, with amazing new devices hitting the market, like the Samsung Galaxy Watch 5 and Google Pixel Watch. Compose for Wear OS, which makes it faster and easier to build apps for Wear OS, went to 1.0 this summer and is our recommended approach for building user interfaces for Wear OS apps. It includes more than 20 UI components designed specifically for wearables, with built-in Material theming and accessibility support.

Today, we’re sharing updated templates for Wear OS in Android Studio, as well as a stable Android R emulator system image for Wear OS.

With personalized data from a wearable, it’s important to keep the data completely private and safe, which is why we’ve been working on a solution to make this easier – Health Connect. It’s an API that we built in close collaboration with Samsung for storing and sharing health data - all with a single place for users to easily manage permissions.
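As a sketch of what reading data could look like with the Health Connect Jetpack client (the API names below come from the androidx.health.connect client library; treat the details as illustrative rather than definitive):

```kotlin
import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant

// Illustrative sketch: read the user's step records for the last day,
// assuming the required read permission has already been granted by the
// user through Health Connect's permission flow.
suspend fun readStepsLastDay(context: Context): Long {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minusSeconds(24 * 60 * 60),
                Instant.now()
            )
        )
    )
    // Sum the step counts across all records in the window.
    return response.records.sumOf { it.count }
}
```

The key point is the single permission surface: the user grants read access once in Health Connect, and your app never touches another app's data store directly.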

Developers who invest in Wear OS are seeing big results: Todoist increased their install growth rate by 50% since rebuilding their app for Wear 3, and Outdooractive reduced development time by 30% using Compose for Wear OS. Now is the time to bring a unique, engaging experience to your users on Wear OS!



Making your app work great on tablets & large screens

As you heard earlier this year: Google is all in on tablets, foldables, and ChromeOS. With amazing new hardware – like the Samsung Galaxy Z Fold4, Lenovo Tab P12 Pro, and Google’s upcoming Pixel Tablet – there has never been a better time to review your apps and get them ready for large screens. We’ve been hard at work, with updates to Android, improved Google apps, and exciting changes to the Play Store that make optimized tablet apps more discoverable.

We’ve made it easier than ever to test your app on large screens in Android Studio Electric Eel, including resizable and desktop emulators and visual linting to help you adhere to best practices on any screen size.

We’ve also heard that we can help by providing more design and layout guidance for these devices. To that end, today we added new layout guidance by app vertical to developer.android.com, as well as developer guidance for canonical layouts, with samples.

Apps that invest in large-screen features are seeing that work pay off in engagement: take Concepts, which enables amazing stylus interactions like drawing and shape guides on ChromeOS and stylus devices, and saw 70% higher usage on tablets compared to phones!

Be on the lookout for more updates on our improvements to Android Studio, Window Manager Jetpack, and more with the Form Factors track, broadcast live on November 9.



Making it easier to take advantage of platform features in Android 13

At the heart of a successful platform is the operating system, and Android 13, released in August, brings developer enhancements to many facets of the platform, including personalization, privacy, security, connectivity, and media.

For example, per-app language preferences improve the experience for multilingual users, allowing people to experience their device in different languages in different contexts.
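On the developer side, an app can apply the user's in-app language choice with a few lines of code. A minimal sketch using the backward-compatible AndroidX AppCompat API (which delegates to the platform API on Android 13 and emulates it on earlier releases):

```kotlin
import androidx.appcompat.app.AppCompatDelegate
import androidx.core.os.LocaleListCompat

// Illustrative sketch: switch the app's UI to the given language.
// AppCompat can also persist this choice across restarts when the
// autoStoreLocales service is enabled in the manifest.
fun setAppLanguage(languageTag: String) {
    AppCompatDelegate.setApplicationLocales(
        LocaleListCompat.forLanguageTags(languageTag)
    )
}

// e.g. setAppLanguage("hi") to show this app in Hindi while the
// rest of the device stays in the system language.
```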

The new photo picker is a permission-free way to let users browse and select the photos and videos they explicitly want to share with your app, and a great example of how Android is focused on privacy.
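From an app's perspective, the photo picker plugs into the familiar activity-result APIs; here is a minimal sketch using the AndroidX PickVisualMedia contract (availability on older OS versions depends on Google Play services backports):

```kotlin
import android.net.Uri
import android.util.Log
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class PickPhotoActivity : AppCompatActivity() {
    // Register the photo picker contract; no storage permission is required,
    // because the user explicitly chooses what to share.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                Log.d("PhotoPicker", "Selected media: $uri")
            } else {
                Log.d("PhotoPicker", "No media selected")
            }
        }

    // Launch the picker restricted to images only.
    fun pickImage() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```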

To help you target new API levels, we're introducing the Android SDK Upgrade Assistant tool within the latest preview of Android Studio Flamingo, which gives you step-by-step documentation for the most important changes to look for when updating the target SDK of your app.

These are just a few examples of how we're making it easier than ever to adapt your app to platform changes, while enabling you to take advantage of the latest features Android has to offer.

Connecting with you around the world at Android Dev Summit

This is just the first day of Android Dev Summit: we kicked off with the keynote and dove into our first track, Modern Android Development, and we’ve still got a lot more coming over the next few weeks. Tune in on November 9, when we livestream our next track: Form Factors. Our final technical track will be Platform, livestreamed on November 14.

If you’ve got a burning question, tweet us using #AskAndroid; we’ll be wrapping up each track livestream with a live Q&A from the team, so you can tune in and hear your question answered live.
Modern Android Development Track @ Android Dev Summit October 24, 2022 at 9:00 AM PT 
Agenda: 9:00 AM Keynote, 9:50 AM Custom Layouts and Graphics in Compose, 10:10 AM Making Apps Blazing Fast with Baseline Profiles, 10:30 AM State of the Art of Compose Tooling, 10:50 AM State Holders and State Production in the UI Layer, 11:10 AM 5 Ways Compose Improves UI Testing, 11:15 AM 5 Android Studio Features You Don't Want to Miss, 11:30 AM Pre-recorded MAD Technical Talks, 12:20 PM Where to Hoist That State in Compose, 12:25 PM Material You in Compose Apps, 12:30 PM Compose Modifiers Deep Dive, 12:50 PM Practical Room Migrations, 12:55 PM Type Safe, Multi-Module Best Practices with Navigation, 1:00 PM What's New in Android Build, 1:20 PM From Views to Compose: Where Can I Start?, 1:25 PM Test at Scale with Gradle Managed Devices, 1:35 PM MAD #AskAndroid. Broadcast live on d.android.com/dev-summit & YouTube.
Form Factors Track @ Android Dev Summit November 9, 2022 
Sessions: Deep Dive into Wear OS App Architecture, Build Better UIs Across Form Factors with Android Studio, Designing for Large Screens: Canonical Layouts and Visual Hierarchy, Compose: Implementing Responsive UI for Large Screens, Creating Helpful Fitness Experiences with Health Services and Health Connect, The Key to Keyboard and Mouse Support across Tablets and ChromeOS, Your Camera App on Different Form Factors, Building Media Apps on Wear OS, Why and How to Optimize Your App for ChromeOS.
Broadcast live on d.android.com/dev-summit & YouTube.
Platform Track @ Android Dev Summit November 14, 2022 
Sessions: Migrate Your Apps to Android 13, Presenting a High-quality Media Experience for All Users, Improving Your Social Experience Quality with Android Camera, Building for a Multilingual World, Everything About Storage on Android, Migrate to Play Billing Library 5: More Flexible Subscriptions on Google Play, Designing a High Quality App with the Latest Android Features, Hardware Acceleration for ML On-Device, Demystifying Attestation, Building Accessibility Support for Compose.
Broadcast live on d.android.com/dev-summit & YouTube.

This year, we’re also really excited to get the opportunity to meet with developers around the world in person, including today in the Bay Area. On November 9, Android Dev Summit moves to London. And the fun will continue in Asia in December with more roadshow stops: in Tokyo on December 16 (more details to come) at Android Dev Summit with Google DevFest, and in Bangalore in mid-December (you can express interest to join here).

Whether you’re tuning in online, or joining us in-person around the world, it’s feedback from developers like you that help us make Android a better platform. We thank you for the opportunity to work together with you, building excellent apps and delighting users across all of the different devices Android has to offer - enjoy your 2022 Android Dev Summit!

Bringing passkeys to Android & Chrome

Posted by Diego Zavala, Product Manager (Android), Christiaan Brand, Product Manager (Account Security), Ali Naddaf, Software Engineer (Identity Ecosystems), Ken Buchanan, Software Engineer (Chrome)

Explore passkeys on Android & Chrome starting today

Starting today, Google is bringing passkey support to both Android and Chrome.

Passkeys are a significantly safer replacement for passwords and other phishable authentication factors. They cannot be reused, don't leak in server breaches, and protect users from phishing attacks. Passkeys are built on industry standards and work across different operating systems and browser ecosystems, and can be used for both websites and apps.

Passkeys follow already familiar UX patterns, and build on the existing experience of password autofill. For end-users, using one is similar to using a saved password today, where they simply confirm with their existing device screen lock such as their fingerprint. Passkeys on users’ phones and computers are backed up and synced through the cloud to prevent lockouts in the case of device loss. Additionally, users can use passkeys stored on their phone to sign in to apps and websites on other nearby devices.

Today’s announcement is a major milestone in our work with passkeys, and enables two key capabilities:

  1. Users can create and use passkeys on Android devices, which are securely synced through the Google Password Manager.
  2. Developers can build passkey support on their sites for end-users using Chrome via the WebAuthn API, on Android and other supported platforms.

To try this today, developers can enroll in the Google Play Services beta and use Chrome Canary. Both features will be generally available on stable channels later this year.

Our next milestone in 2022 will be an API for native Android apps. Passkeys created through the web API will work seamlessly with apps affiliated with the same domain, and vice versa. The native API will give apps a unified way to let the user pick either a passkey or a saved password. Seamless, familiar UX for both passwords and passkeys helps users and developers gradually transition to passkeys.

Signing in to a website on an Android device with a passkey

For the end-user, creating a passkey requires just two steps: (1) confirm the passkey account information, and (2) present their fingerprint, face, or screen lock when prompted.

 

Signing in is just as simple: (1) The user selects the account they want to sign in to, and (2) presents their fingerprint, face, or screen lock when prompted.

 

Signing in to a website on a nearby computer with a passkey on an Android device

A passkey on a phone can also be used to sign in on a nearby device. For example, an Android user can now sign in to a passkey-enabled website using Safari on a Mac. Similarly, passkey support in Chrome means that a Chrome user, for example on Windows, can do the same using a passkey stored on their iOS device.

Since passkeys are built on industry standards, this works across different platforms and browsers - including Windows, macOS and iOS, and ChromeOS, with a uniform user experience.

We will continue to do our part for a passwordless future

We have worked with others in the industry, including Apple and Microsoft, and members within the FIDO Alliance and the W3C to drive secure authentication standards for years. We have shipped support for W3C Webauthn and FIDO standards since their inception.

Today is another important milestone, but our work is not done. Google remains committed to a world where users can choose where their passwords, and now passkeys, are stored. Please stay tuned for more updates from us in the next year as we introduce changes to Android, enabling third party credential managers to support passkeys for their users.

Build smarter and ship faster with the latest updates across our ecosystem

Posted by Jeanine Banks, VP/GM, Developer X and DevRel

At last week’s Made by Google launch event, we announced several new hardware products, including the Pixel 7 and Pixel 7 Pro, Google Pixel Watch, and Google Pixel Tablet: a suite of innovative products that we’re excited about. While they’re sure to delight users, they got me thinking: what will these changes mean for developers?

It’s hard to build experiences that let users enjoy the best that their devices have to offer. Undoubtedly this brings a level of complexity for developers who will need to build and test against multiple OS updates and new features. That’s the thing about development—the environment is constantly evolving. We want to cut through the complexity and make it simpler to choose the technology you use, whether for an app on one device or across large and small screens.

Earlier this year at Google I/O, we shared our focus on making developer tools work better together, and providing more guidance and best practices to optimize your end-to-end workflow. For example, we announced the new App Quality Insights window in Android Studio that shows crash data from Firebase Crashlytics directly inside the IDE to make it easier to discover, investigate, and fix offending lines of code.

But our work doesn’t stop once I/O ends. We work all year round to offer increasingly flexible, open and integrated solutions so you can work smarter, ship faster, and confidently set up your business for the future.

That’s why we’re excited to connect with you again—both in person and virtually—to share more recent product updates. Over the next three months, we have over 200 events in more than 50 countries reaching thousands of developers through product summits, community events, industry conferences, and more. Here are a few:

DevFest | Now - December
Local Google Developer Groups (GDG) organize these technology conferences according to the needs and interests of the region's developer community, and in the local language. Tune in virtually or join in person.

Chrome | Multiple dates
This year the Chrome team will meet you at your favorite regional developer conferences and events, in addition to online forums across time zones. Join us on the journey to build a better web. Check out the calendar.

Google Cloud Next | October 11-13
Learn how to transform with Google Cloud to build apps faster and make smarter business decisions.

Firebase Summit | October 18
Join this hybrid event online or in person in New York City to hear how Firebase can help you accelerate app development, run your app with confidence, and scale your business.

Android Dev Summit | Beginning October 24
Learn from the source about building excellent apps across devices, coming to you online and around the world. We’ll be sharing the sessions live on YouTube in three tracks spread across three weeks: Modern Android Development on Oct 24, Form Factors on Nov 9, and Platform on Nov 14.

BazelCon | November 16-17
Hosted by Bazel and Google Open Source, BazelCon connects you with the team, maintainers, contributors, users, and friends to learn how Bazel automates software builds and tests on Android and iOS.

Women in ML Symposium | Coming in December
Join open source communities, seek out leadership opportunities, share knowledge, and speak freely about your career development with other women and gendered minorities in a safe space. Catch up on last year’s event.

Flutter Event | Coming in December/January
Hear exciting product updates on Google’s open source framework for building beautiful, natively compiled, multi-platform applications from a single codebase. In the meantime, re-live last year’s event.


We look forward to the chance to meet with you to share technical deep dives, give you hands-on learning opportunities, and hear your feedback directly. After you have heard what we’re up to, make sure to access our comprehensive documentation, training materials, and best practices to help speed up your development and quickly guide you towards success.

Mark your calendars and register now to catch the latest updates.

Announcing an Experimental Preview of Jetpack Multiplatform Libraries

Posted by Márton Braun, Developer Relations Engineer

Since we announced Kotlin support for Android in 2017, developers have been excited about writing their Android apps using Kotlin. We’ve continuously expanded this support for the language over the years, going Kotlin-first with Jetpack libraries and documentation, and then further investing into Kotlin with Jetpack Compose. We’ve also seen the interest of the community in Kotlin’s multiplatform capabilities.

Kotlin Multiplatform Mobile from JetBrains is now in beta, and we have been experimenting with this technology to see how it can enable code sharing across platforms. As part of these experiments, we are now sharing a preview of Kotlin Multiplatform libraries in Jetpack.

The libraries available for multiplatform as part of this experimental preview are Collections and DataStore. These were chosen as they evaluate several important aspects of converting an existing library to multiplatform:

  • Collections is an example of a library written in the Java programming language that has no Android-specific dependencies, but implements Java collection APIs.
  • DataStore is written entirely in Kotlin, and it uses coroutines in both its implementation and APIs. It also depends on Java IO and Android platform APIs.

With this preview, we’re looking for your feedback about using these Jetpack libraries in multiplatform projects targeting Android and iOS applications. Keep in mind that these dev builds are experimental and should not be used in production. They are published outside the regular release cycle of these libraries, and they are not guaranteed to graduate to stable.

The libraries are available from Google’s Maven repository. To start using them, add the following dependencies to your Kotlin Multiplatform project:

val commonMain by getting {
  dependencies {
      implementation("androidx.collection:collection:1.3.0-dev01")

      // Lower-level APIs with support for custom serialization
      implementation("androidx.datastore:datastore-core-okio:1.1.0-dev01")
      // Higher-level APIs for storing values of basic types
      implementation("androidx.datastore:datastore-preferences-core:1.1.0-dev01")
  }
}

You can learn more about the available APIs by checking out our sample app which uses DataStore on Android and iOS, or in the preview API reference documentation available for both libraries.
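As a taste of what shared code can look like, the multiplatform Collections artifact exposes familiar androidx.collection types from common Kotlin. A hypothetical commonMain snippet (which specific classes are available in these experimental dev builds is an assumption worth verifying against the preview API reference):

```kotlin
import androidx.collection.ArraySet
import androidx.collection.LruCache

// Hypothetical shared (commonMain) code using androidx.collection types
// that previously required an Android-only dependency.
fun dedupeTags(tags: List<String>): Set<String> {
    val set = ArraySet<String>()
    tags.forEach { set.add(it) }
    return set
}

// A small memory-bounded cache usable from both Android and iOS targets.
val thumbnailCache = LruCache<String, ByteArray>(maxSize = 32)
```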

To provide feedback about your experience with the multiplatform Jetpack libraries, or to show your interest in Kotlin Multiplatform, join the conversation in the Kotlinlang #multiplatform channel. You can also open bugs on the issue tracker for DataStore or for Collections.

*Java is a trademark or registered trademark of Oracle and/or its affiliates.

Android Dev Summit ‘22: Coming to you, online and around the world!

Posted by Yasmine Evjen, Community Lead, Android Developer Relations

Android Dev Summit is back, and this year, we’re coming to you! Whether you’re tuning in online or – for the first time since 2019 – joining in person at locations around the world, we can’t wait to see you! It’s your opportunity to learn from the source about building excellent apps across devices.


Android Dev Summit ‘22 kicks off on October 24 with the keynote, your opportunity to hear directly from the Android team. We’ll cover the latest in Modern Android Development, innovations in our core platform, and how to take advantage of Android’s momentum across devices, including wearables and large screens. This technical keynote will be packed with demos, and it kicks off at 9AM PT on October 24, live on YouTube.

The deeply technical sessions are one of the most important parts of ADS, and a huge part of what we look forward to. This year, we’ll be sharing the sessions live on YouTube in three tracks spread across three weeks:
  • Modern Android Development, live on Oct 24
  • Form Factors, live on Nov 9
  • Platform, live on Nov 14
ADS is a place where we get to connect directly with you - hearing what’s most important to you and how we can make it easier for you to build on Android. And there’s no better way to do that than connecting in-person. Since travel is still tough for many of you, we’re doing our best this year to come to you, with events popping up around the world. The first stop for ADS will be in the San Francisco Bay Area on October 24 (if you’re local, you can apply to come here). Next, ADS’22 moves to London on November 9 (apply here if you’re in London). The fun will continue in Asia in December with several roadshow stops (more details to come!).

If you’re not able to join us in person, we’d still love to connect! At the end of each of our session tracks, we’ll be hosting a live Q&A – #AskAndroid - for you to get your burning questions answered. Post your questions to Twitter or comment in the YouTube livestream using #AskAndroid, and tune in to see if we answer your question live.

Over the coming weeks, we’ll be dropping more info around ADS’22 on the website; check back when we release the full session track details and more, or sign up for updates through the Android Developer newsletter.

We can’t wait to see you soon!

Hi everyone, I'm Yasmine Evjen, the new Community Lead for Android Developer Relations. I'm so excited to connect with you all, in person and virtually, at #AndroidDevSummit, bringing together two of my favorite things: exciting new tech and the developers who bring it to life.

Listen to our major Text to Speech upgrades for 64-bit devices

Posted by Rakesh Iyer, Staff Software Engineer and Leland Rechis, Group Product Manager

We are upgrading the Speech Services by Google speech engine in a big way, providing clearer, more natural voices. All 421 voices in 67 languages have been upgraded with a new voice model and synthesizer.

If you already use TTS and the Speech Services by Google engine, there is nothing to do; everything happens behind the scenes, as your users will have automatically downloaded the latest update. We’ve seen a significant side-by-side quality increase with this change, particularly with respect to clarity and naturalness.

With this upgrade, we will also be changing the default voice in en-US to one built using fresher speaker data, which, alongside our new stack, results in a drastic improvement. If your users have not selected a system voice and you rely on system defaults, they will hear a slightly different speaker. You can hear the difference below.

Speaker change and upgrade for en-US: audio samples of the current and upgraded speakers.

Speaker upgrades in a few other languages: current vs. upgraded audio samples for hi-IN, pt-BR, and es-US.

This update will be rolling out to all 64-bit Android devices via the Google Play Store over the next few weeks as part of the Speech Services by Google APK. If you are concerned your users have not updated yet, you can check for a minimum version code of 210390644 on the package com.google.android.tts.
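The version check itself is straightforward. A small sketch in Kotlin; the package name and version code come from this post, while the helper function below is hypothetical:

```kotlin
// Minimum version code of Speech Services by Google that ships the
// upgraded voices, per the rollout described above.
val MIN_UPGRADED_TTS_VERSION = 210_390_644L

fun hasUpgradedVoices(installedVersionCode: Long): Boolean =
    installedVersionCode >= MIN_UPGRADED_TTS_VERSION

// On a device, you would obtain the installed version code with
// something like:
//   val info = packageManager.getPackageInfo("com.google.android.tts", 0)
//   hasUpgradedVoices(PackageInfoCompat.getLongVersionCode(info))
```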

If you haven't used TTS in your projects yet, or haven’t given your users the ability to choose a voice within your app, it's fairly straightforward, and easy to experiment with. We’ve included some sample code to get you started. 

Here’s an example of how to set up voice synthesis, get the list of available voices, and select a specific voice. Finally, we send a simple utterance to the synthesizer.

public class MainActivity extends AppCompatActivity {
  private static final String TAG = "TextToSpeechSample";

  private TextToSpeech tts;

  private final UtteranceProgressListener progressListener =
      new UtteranceProgressListener() {
    @Override
    public void onStart(String utteranceId) {
      Log.d(TAG, "Started utterance " + utteranceId);
    }

    @Override
    public void onDone(String utteranceId) {
      Log.d(TAG, "Done with utterance " + utteranceId);
    }

    @Override
    public void onError(String s) { }
  };

  private final TextToSpeech.OnInitListener initListener =
      new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
      // Only interact with the engine once initialization has succeeded.
      if (status != TextToSpeech.SUCCESS) {
        Log.e(TAG, "TTS initialization failed with status " + status);
        return;
      }
      tts.setOnUtteranceProgressListener(progressListener);
      for (Voice voice : tts.getVoices()) {
        if (voice.getName().equals("en-us-x-iog-local")) {
          tts.setVoice(voice);
          tts.speak("1 2 3", TextToSpeech.QUEUE_ADD, new Bundle(), "utteranceId");
          break;
        }
      }
    }
  };

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    tts = new TextToSpeech(this, initListener);
  }

  @Override
  protected void onDestroy() {
    if (tts != null) {
      tts.shutdown();
    }
    super.onDestroy();
  }
}


We are excited to see this upgraded experience in your app!


Register now for Firebase Summit 2022!

Posted by Grace Lopez, Product Marketing Manager

One of the best things about Firebase is our community, so after three long years, we’re thrilled to announce that our seventh annual Firebase Summit is returning as a hybrid event with both in-person and virtual experiences! Our one-day, in-person event will be held at Pier 57 in New York City on October 18, 2022. It will be a fun reunion for us to come together to learn, network, and share ideas. But if you’re unable to travel, don’t worry: you’ll still be able to take part in the activities online from your office, desk, or couch, wherever you are in the world.

Join us to learn how Firebase can help you accelerate app development, run your app with confidence, and scale your business. Registration is now open for both the physical and virtual events! Read on for more details on what to expect.


Keynote full of product updates

In-person and livestreamed

We’ll kick off the day with a keynote from our leaders, highlighting all the latest Firebase news and announcements. With these updates, our goal is to give you a seamless and secure development experience that lets you focus on making your app the best it can be.

#AskFirebase Live

In-person and livestreamed

Have a burning question you want to ask us? We’ll take questions from our in-person and virtual attendees and answer them live on stage during a special edition of everyone’s favorite, #AskFirebase.

NEW! Ignite Talks

In-person and livestreamed

This year at Firebase Summit, we’re introducing Ignite Talks: 7- to 15-minute bite-sized talks focused on hot topics, tips, and tricks to help you get the most out of our products.

NEW! Expert-led Classes

In-person and will be released later

You’ve been asking us for more technical deep dives, so this year we’ll also be running expert-led classes at Firebase Summit. These platform-specific classes will be designed to give you comprehensive knowledge and hands-on practice with Firebase products. Initially, these classes will be exclusive to in-person attendees, but we’ll repackage the content for self-paced learning and release them later for our virtual attendees.

We can’t wait to see you

In addition, Firebase Summit will be full of all the other things you love - interactive demos, lots of networking opportunities, exciting conversations with the community…and a few surprises too! The agenda is now live, so don't forget to check it out! In the meantime, register for the event, subscribe to the Firebase YouTube channel, and follow us on Twitter and LinkedIn to join the conversation using #FirebaseSummit.

Android Studio Dolphin

Posted by Yuri Blaise, Product Manager, Android  

The Android Studio team took a deep dive into making it easier to build high quality apps with the latest stable release of Android Studio Dolphin (2021.3.1). This release focuses on three key themes: Jetpack Compose, Wear OS, and development productivity.

For Jetpack Compose, Android Studio Dolphin now features reliable tools to preview multiple screens and easily preview animations. Additionally, as you debug your app's user interface, a handy recomposition counter within the Layout Inspector helps you keep track of when your UI recomposes.

With Android Studio Dolphin, we added a range of Wear OS features to help get your Wear apps, tiles, and watch faces ready for all of the Wear OS 3 devices. With an updated Wear OS Emulator, an intuitive Pairing Assistant, and new deployment flows for launch tiles and watch faces, it's easier and more efficient than ever to make great apps for Wear OS.

Lastly, to make you even more productive when using Android Studio, we enabled Gradle Managed Virtual Devices to centrally manage your test devices.

To take a swim in our latest update, download it today.

Read on or watch below to get a detailed description of the new features introduced in Android Studio Dolphin.

Development tools

  • IntelliJ 2021.3 Platform Update - Android Studio Dolphin includes the IntelliJ 2021.3 release, which has features such as an improved Find Usages flow, Kotlin debugger updates, constant conditions inspection for Kotlin, improved code intention previews, and more. Learn more.
  • Gradle Managed Virtual Devices - If you've wanted to automate your app testing using emulators, but found it cumbersome to coordinate and set up, then Gradle Managed Virtual Devices is for you. Just describe the virtual devices you need for your automated tests as a part of your build, and let Gradle take care of the rest. From SDK downloading, to device provisioning and setup, to test execution and teardown, Gradle manages the lifecycle of your virtual devices during instrumentation tests. Gradle further optimizes your test execution with new features that enable snapshot management, test caching, and test sharding to ensure your tests run efficiently, quickly, and consistently.
Gradle Managed Virtual Devices also introduces a completely new type of device, called the Automated Test Device, which is optimized for automated tests, resulting in significant reduction in CPU and memory usage during test execution. Learn more.
Gradle Managed Virtual Devices
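
As a minimal sketch of what that build configuration can look like (Gradle Kotlin DSL; the device name pixel2api30 is arbitrary, and exact DSL details may vary by Android Gradle plugin version - see the official docs):

```kotlin
// module-level build.gradle.kts -- a minimal sketch; the device name is arbitrary
android {
    testOptions {
        managedDevices {
            devices {
                maybeCreate<com.android.build.api.dsl.ManagedVirtualDevice>("pixel2api30").apply {
                    device = "Pixel 2"              // hardware profile from the AVD Manager
                    apiLevel = 30                   // Android 11 system image
                    systemImageSource = "aosp-atd"  // Automated Test Device image
                }
            }
        }
    }
}
```

Gradle then exposes a task per device - for example `./gradlew pixel2api30DebugAndroidTest` - which provisions the emulator, runs your instrumentation tests, and tears the device down afterwards.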

Jetpack Compose

  • Compose Animation Inspector - You can now see all supported animations at once and coordinate them with the Animation Preview inspector. You can also freeze a specific animation or scrub through an entire animation frame-by-frame. Animation Preview currently supports animations built with updateTransition and AnimatedVisibility.
Compose Animation Inspector
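
As a concrete illustration, here is a hedged sketch of the kind of transition Animation Preview can inspect; the composable and its contents are hypothetical, assuming the usual Compose animation and tooling dependencies:

```kotlin
import androidx.compose.animation.core.animateDp
import androidx.compose.animation.core.updateTransition
import androidx.compose.foundation.background
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.size
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.tooling.preview.Preview
import androidx.compose.ui.unit.dp

// A two-state transition built with updateTransition; in Animation Preview you
// can freeze this animation or scrub through it frame-by-frame.
@Preview
@Composable
fun ExpandingBoxPreview() {
    var expanded by remember { mutableStateOf(false) }
    val transition = updateTransition(targetState = expanded, label = "expand")
    val size by transition.animateDp(label = "size") { isExpanded ->
        if (isExpanded) 200.dp else 64.dp
    }
    Box(
        Modifier
            .size(size)
            .background(Color.Cyan)
            .clickable { expanded = !expanded }
    )
}
```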
  • Compose Multipreview Annotations - Instead of copying and pasting the same @Preview code everywhere across your app, you can define an annotation class that includes multiple preview definitions; that new annotation generates all those previews at once, allowing you to preview multiple devices, fonts, and themes at the same time without repeating those definitions for every single composable. Learn more.
Multipreview annotations
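
For instance, a single annotation class can bundle several preview definitions; the annotation name and configurations below are illustrative sketches, not taken from the release notes:

```kotlin
import android.content.res.Configuration
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// Define the preview configurations once...
@Preview(name = "light", showBackground = true)
@Preview(name = "dark", showBackground = true, uiMode = Configuration.UI_MODE_NIGHT_YES)
@Preview(name = "large font", fontScale = 1.5f)
annotation class ThemeAndFontPreviews

// ...then apply the annotation to any composable to generate all three previews at once.
@ThemeAndFontPreviews
@Composable
fun GreetingPreview() {
    Greeting(name = "Android")  // Greeting is a hypothetical composable in your app
}
```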
  • Compose Recomposition Counts in Layout Inspector - To help debug your Jetpack Compose UI, you can now view recomposition counts for a Compose app in the Layout Inspector. With this tool, you can understand if your UI is updating too often, or just as you expect. For easier usage, recomposition counts and skip counts can optionally be shown in the Component Tree and Attributes panels. Learn more.
Compose Recomposition Counts

Wear OS

  • Wear OS Emulator Pairing Assistant - The Wear OS Emulator Pairing Assistant allows you to see Wear OS devices in the Device Manager and pair one or more watch emulators with a single phone, without having to navigate through multiple ADB commands and device combinations. As a bonus, you also don't have to re-pair devices as often because Android Studio remembers pairings after it closes. Learn more.
Wear OS Emulator Pairing Assistant
  • Wear OS Emulator Toolbar - The Android Emulator toolbar now has new buttons and interactions that align to Wear OS physical devices; use it to trigger actions such as the palm gesture or to simulate tilting the device. To learn more, see common actions in the emulator.
Wear OS Emulator Side Toolbar
  • Wear OS Direct Surface Launch - When deploying your Wear OS app, it can be tricky at times to get your app to the right state, especially if you are working on features like watch face complications. With Android Studio Dolphin, you can create Run/Debug configurations for Wear OS tiles, watch faces, and complications that can be launched directly from Android Studio. Learn more.
New Wear OS Run/Debug configuration types

To recap, Android Studio Dolphin includes these new enhancements and features:

Development Tools

  • IntelliJ 2021.3 Platform Update
  • Gradle Managed Virtual Devices

Jetpack Compose Tools

  • Compose Animation Inspector
  • Compose Multipreview Annotations
  • Compose Recomposition Counts in Layout Inspector

Wear OS

  • Wear OS Emulator Pairing Assistant
  • Wear OS Emulator Toolbar
  • Wear OS Direct Surface Launch

Check out the Android Studio release notes, Android Gradle plugin release notes, and the Android Emulator release notes for more details.


Getting Started

Download

Download the latest version of Android Studio Dolphin from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version. If you want to maintain a stable version of Android Studio, you can run the stable release version and canary release versions of Android Studio at the same time. Learn more.

To use the Android Emulator features mentioned above, make sure you are running at least Android Emulator v31.3.0, downloaded via the Android Studio SDK Manager.

We appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, feel free to file an issue. Follow us -- the Android Studio development team -- on Twitter and on Medium.

Come to the Tag1 & Google Performance Workshop at DrupalCon Europe 2022, Prague

Posted by Andrey Lipattsev, EMEA CMS Partnerships Lead

TL;DR: If you’re attending @DrupalConEur, submit your URL @ https://bit.ly/CWV-DrupalCon-22 to get your UX & performance right on #Drupal at the Tag1 & Google interactive workshop.


Getting your User Experience right, which includes performance, is critical for success. It’s a key driver of many success metrics (https://web.dev/tags/web-vitals) and a factor taken into account by platforms, including search engines, that surface links to your site (https://developers.google.com/search/docs/advanced/experience/page-experience).

Quantifying User Experience is not always easy, so one way to measure, track, and improve it is by using Core Web Vitals (CWV, https://web.dev/vitals/). Building a site with great CWV on Drupal is, on average, easier than on many platforms (https://bit.ly/CWV-tech-report), and yet there are certain tips and pitfalls you should be aware of.

In this workshop, the team from Tag1 and Google (Michael Meyers, Andrey Lipattsev, and others) will use real-life examples of Drupal-based websites to illustrate some common pain points and the corresponding solutions. If you would like us to take a look at your website and provide actionable advice, please submit the URL via this link (https://bit.ly/CWV-DrupalCon-22). The workshop is interactive, so bring your laptop - we'll get you up and running and teach you hands-on how to code for the relevant improvements.

We cannot guarantee that all the submissions will be analysed, as this depends on the number of submissions and the time that we have. However, we will make sure that all the major themes cutting across the submitted sites are covered with relevant solutions.

See you in Prague!

Date & Time: Wednesday 21.09.2022, 16:15-18:00