Meet the class of 2024 for Google Play’s Indie Games Accelerator

Posted by Leticia Lago – Developer Marketing

Today, we’re excited to reveal Google Play’s Indie Games Accelerator class of 2024.

Selected game studios from around the world will take part in the 10-week accelerator designed to take their businesses to the next level on Google Play. The program includes:

    • A series of online masterclasses, talks and gaming workshops hosted by industry leaders
    • Mentorship sessions covering a broad variety of topics including technical development, gameplay and team leadership
    • Access to gaming experts from Google and leading studios

Due to the number of impressive applications, we’ve doubled this year’s class size from 30 to 60 studios. Without further ado, meet the class of 2024 and join us in congratulating them!

Moving image of the Indie Games Accelerator class of 2024

Americas

Logisk Studio
Attu
Sprocket Games
Blu Studios
Highpoint Games
D20 Studios
Supernova Games
Cafundó Creative Studio
Hyperthought Games Inc.
North Star Digital Studios
Theia Studios
Aurecas Games
Mana Burn
67 Bits
Retora Games
Ocarina Studios
WonderLegend Games
EiP Game Studio

Asia Pacific

CLOVER-FI Games
Crimzen Red Studios
QueseraGames Co., Ltd.
Gonggamore Contents Inc.
ONDOT INC
LiberalDust
Studio Boxcat
Whoyaho Corp.
Blackhammer
Algorocks
Own Games 
Kudos Games
Appspace Solutions
Lentera Nusantara
Dunali Games
Hexpion
Dreams Studio
Panthera Studio
Lunarite Studio
Npckc
ONDI Games
Playdew
Niku Games Studio
Avian Hearts Studios Pvt. Ltd
WASD Interactive

Europe, Middle East & Africa

First Pick Studios
Pank0
Big Loop Studios
BaldrickSoft
RPG games
Airapport
Post Physical
WALKME MOBILE SOLUTIONS
Iteration One
Veryo Studios
Monster League
TERAHYPE
3Hills
Gravity Code
Torpor Games
Nordic Stone Studio
TruePlayers
Pineapple on Pizza Studios

Congratulations again to all the founders selected; we can’t wait to see your games grow on our platform.

We’re committed to helping app and game businesses of all sizes reach their full potential. Discover more about Google Play’s programs, resources and tools for indie games developers.

Enhanced screen sharing capabilities in Android 14 (and Google Meet) improve meeting productivity

Posted by Francesco Romano – Developer Relations Engineer on Android

App screen sharing improves privacy and productivity

Android 14 QPR2 brings exciting advancements in user privacy and streamlined multitasking with app screen sharing. Users no longer have to broadcast their entire screen while screen sharing or casting: they can share exactly the window they intend to.

Leverage the new MediaProjection APIs to customize the screen sharing experience and deliver even greater utility to your users.

What is app screen sharing?

Prior to Android 14, users could only share or record their entire screen on Android devices, which could expose private information in other apps or notifications.

App screen sharing is a new platform feature that lets users restrict sharing and recording to a single app window, mitigating the risk of oversharing private messages or notifications. With app screen sharing, the status bar, navigation bar, notifications, and other system UI elements are excluded from the shared display. Only the content of the selected app is shared.

This not only enhances security for screen sharing, but also enables new use cases on large screens. Users can improve multitasking productivity – such as screen sharing while attending a meeting – by taking advantage of extra screen space on these larger devices.

How does it work?

There are three different entry points for users to start app screen sharing:

    1. Start casting from Quick Settings
    2. Start screen recording from Quick Settings
    3. Launch from an app with screen sharing or recording capabilities via the MediaProjection API

Let’s consider an example where a host user wants to share a single app to the participants of a video call.

The host user starts screen sharing as usual, but now in Android 14 they are presented with an updated dialog that allows them to choose whether to share a single app instead of their entire screen.

The host user decides to share a single app, and they select the app from the App Selector.

During screen sharing, the video call participants can see only the content from the selected app.

The host user can end the screen capture in a few ways: from the app where sharing started, in the notification shade, by closing the app being shared, or by ending the video call.

Visual journey of a host sharing a single app with the participants in a video call, across four panels

How to support app screen sharing?

Apps that use the MediaProjection APIs are capable of starting app screen sharing without any code changes. However, it’s important to test your app to ensure that the screen sharing experience works as intended, since the user flow changes with this new behavior. Previously, the user would stay in the host app after the permission dialog. With app screen sharing, the user is not returned to the host app; instead, the target app to be shared is launched. If the target app was already running in the foreground (for example, in multi-window mode), it simply becomes the top focused app.

Android 14 also introduces two callback methods to empower you to customize the sharing experience:

MediaProjection.Callback#onCapturedContentResize(width, height) is invoked immediately after capture begins or when the size of the captured region changes. The method arguments provide the accurate sizing for the streamed capture.

Note: The given width and height correspond to the same width and height that would be returned from android.view.WindowMetrics#getBounds() of the captured region.

If the recorded content has a different aspect ratio from either the VirtualDisplay or the output Surface, the captured stream has black bars around the recorded content. The application can avoid these by updating the size of both the VirtualDisplay and the output Surface:

override fun onCapturedContentResize(width: Int, height: Int) {
    // VirtualDisplay instance from MediaProjection#createVirtualDisplay(),
    // resized to the new dimensions of the captured region. `dpi` is the
    // density used when the VirtualDisplay was created.
    virtualDisplay.resize(width, height, dpi)

    // Create a new Surface with the updated size.
    val textureName = 0 // your OpenGL texture object name, e.g. from glGenTextures()
    val surfaceTexture = SurfaceTexture(textureName)
    surfaceTexture.setDefaultBufferSize(width, height)
    val surface = Surface(surfaceTexture)

    // Ensure the VirtualDisplay has the updated Surface to send the capture to.
    virtualDisplay.setSurface(surface)
}

The other API is MediaProjection.Callback#onCapturedContentVisibilityChanged(isVisible), which is invoked after capture begins or when the visibility of the captured region changes. The method argument indicates the current visibility of the captured region.

The callback is triggered when:

    • The captured region becomes invisible (isVisible == false). This may happen when the projected app is no longer topmost, such as when another app entirely covers it, or the user navigates away from the captured app.
    • The captured region becomes visible again (isVisible == true). This may happen if the user moves the covering app to reveal at least some portion of the captured app (for example, when the user has multiple apps visible in multi-window mode).

Applications can take advantage of this callback by showing or hiding the captured content from the output Surface based on whether the captured region is currently visible to the user. You should pause or resume the sharing accordingly in order to conserve resources.
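
To illustrate, here is a minimal sketch of such a callback; the CaptureRenderer interface is a hypothetical stand-in for your app’s encoding or streaming pipeline:

import android.media.projection.MediaProjection
import android.os.Handler
import android.os.Looper

// Hypothetical wrapper around the app's encoding/streaming pipeline.
interface CaptureRenderer {
    fun pause()
    fun resume()
}

fun registerVisibilityCallback(projection: MediaProjection, renderer: CaptureRenderer) {
    val callback = object : MediaProjection.Callback() {
        override fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
            // Pause the stream while the captured region is hidden, and
            // resume once it becomes visible again, to conserve resources.
            if (isVisible) renderer.resume() else renderer.pause()
        }
    }
    projection.registerCallback(callback, Handler(Looper.getMainLooper()))
}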

How Google Meet is improving meeting productivity

“App screen sharing enables users to share specific information in a Meet call without oversharing private information on the screen like messages and notifications. Users can choose specific apps to share, or they can share the whole screen as before. Additionally, users can leverage split-screen mode on large screen devices to share content while still seeing the faces of friends, families, coworkers, and other meeting participants.” - Product Manager at Google Meet

Let’s see app screen sharing in action during a video call, in this coming-soon version of Google Meet!

Moving image of app screen sharing in action during a video call on Google Meet

Window on the world

App screen sharing opens doors (and windows) for more focused and secure app experiences within the Android ecosystem.

This new feature enhances several use cases:

    • Collaboration apps can facilitate focused discussion on specific design elements, documents, or spreadsheets without including distracting background details.
    • Tech support agents can remotely view the user's problem app without seeing potentially sensitive content in other areas.
    • Video conferencing tools can share a presentation window selectively rather than the entire screen.
    • Educational apps can demonstrate functionality without compromising student privacy, and students can share projects without fear of showing sensitive information.

By thoughtfully implementing app screen sharing, you can establish your app as a champion of user privacy and convenience.

Better, faster, stronger time zone updates on Android

Posted by Almaz Mingaleev – Software Engineer and Masha Khokhlova – Technical Program Manager

It's that time of year again when many of us move our clocks! Oh wait, your Android devices did it automatically, didn’t they? For Android users living in many countries, this may not be surprising. For example, the US, EU and UK governments haven't changed their time legislation in a while*, so users wake up every morning to see the correct time.

But, what happens when time laws change? If you look globally, governments can and do change their time laws, sometimes every year, and Android devices have to keep up to support our global user base.

To implement a region’s time legislation, Android devices have to follow a set of encoded rules. What are these rules? Let’s start with why rules are needed in the first place. Clearly, 7am in Los Angeles and 7am in London are not the same time. Moreover, if you are in London and want to know the time in Los Angeles, you have to know how many hours to subtract, and this is not fixed throughout the year**. So to tell local time (the time your watches should show) it is convenient to have a reference clock that everybody on the planet agrees on. This clock is called UTC (Coordinated Universal Time).

Local time in London during winter matches UTC; during summer it is calculated by adding one hour to UTC, and is usually referred to as UTC+1. For Los Angeles, local time during summer is UTC-7 (7 hours behind, a UTC offset of -7 hours) and during winter it is UTC-8. When a region changes from one offset to another, we call that a “transition”. The combination of these offsets and the rules for when transitions happen (such as “last Sunday of March” or “first Sunday on or after 8th March”) defines a time zone. For some countries, the time zone rules can be very simple and are primarily determined by their chosen UTC offset: “no transitions, we don’t move our clocks forwards and backwards”.
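
To see these rules in action, here is a small Kotlin illustration using java.time, which reads the platform’s copy of the time zone rules (the offsets printed depend on the date):

import java.time.ZoneId
import java.time.ZoneOffset
import java.time.ZonedDateTime

fun main() {
    // The same instant rendered in different time zones, using the
    // TZDB rules shipped with the platform.
    val nowUtc = ZonedDateTime.now(ZoneOffset.UTC)
    println(nowUtc)                                                        // the UTC reference clock
    println(nowUtc.withZoneSameInstant(ZoneId.of("Europe/London")))        // UTC in winter, UTC+1 in summer
    println(nowUtc.withZoneSameInstant(ZoneId.of("America/Los_Angeles")))  // UTC-8 in winter, UTC-7 in summer
}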

Governments can decide to change the UTC offset for regions, introduce new time zone regions, or alter the day that daylight saving transitions occur. When governments do this, the time zone rules on every Android device need to be updated; otherwise the Android device will continue to follow the old rules, which can lead to an incorrect local time being shown to users in the affected areas.

Android is not alone in needing to keep track of this information. Fortunately, there is a database known as the TZDB (Time Zone Database), supported by IANA (the Internet Assigned Numbers Authority) and maintained by a small group of volunteers, which is used as the basis for local timekeeping on most modern operating systems. The TZDB contains most of the information that Android needs.

There is no fixed schedule, but typically the TZDB publishes a new release 4-5 times a year. The Android team wants to release updates that affect its devices as soon as possible.

How do these changes reach your devices?

    1.    A government signs a law / decree.

    2.    Someone lets IANA know about these changes.

    3.    Depending on how much lead time was given and on changes announced by other countries, IANA publishes a new TZDB release.

    4.    The Android team incorporates the TZDB release (along with a small amount of additional information we obtain from related projects and derive ourselves) into our codebase.

    5.    We roll out these updates to your devices. How the roll-out happens depends on the type and age of the Android device.

        a.    Many mobile Android devices are covered by Google’s Project Mainline, which means that Google sends updates to devices directly.

        b.    Some devices are handled by the device’s manufacturer, who takes the Android team’s source code updates and releases them to devices according to their own update schedule.

As you can see, there are quite a few steps. Applying, testing and releasing an update can take weeks. And it is not just Android and other operating systems that need to take action: telecoms, banks, airlines and software companies usually have to make adjustments to their own systems and timetables as well. Citizens of a country need to be made aware of changes so they know what to expect, especially if they are using older devices that might not receive the necessary updates. All of this takes time and can cause problems for countless people if it isn’t handled well. The amount of disruption caused by a change is usually determined by the clarity of the legislation and the notice period that governments provide. The TZDB volunteers are good at spotting changes, but it helps if governments notify IANA directly, especially when the exact regions or existing laws affected are unclear. Unfortunately, many of the recent time zone changes were announced with about a month or less of notice. Android has a set of recommendations for how much notice to provide. Other operating systems have similar recommendations.

Android is constantly evolving. One such improvement, Project Mainline, introduced in Android 10, has made a big difference in how we update important parts of the Android operating system. It allows us to deliver select AOSP components directly through Google Play, making updates faster than a full OTA update and reducing duplicated effort across OEMs.

From the beginning, time zone rules were a Mainline component, called Time Zone Data, or the tzdata module. This integration allowed us to react more quickly to government-mandated time zone changes than before. However, until 2023, tzdata updates were still bundled with other Mainline changes, sometimes leading to testing complexities and slower deployment.

In 2023, we made further investments in Mainline's infrastructure and decoupled the tzdata module from the other components. With this isolation, we gained the ability to respond rapidly to time zone legislation changes — often releasing updates to Android users outside of the established release cadence. Additionally, this change means time zone updates can reach a far greater number of Android devices, ensuring you as Android users always see the correct time.

So while your Android phone may not be able to restore that lost hour of sleep, you can rest assured that it will show the accurate time, thanks to volunteers and the Android team.

Curious about the ever-changing world of time zones? Explore the IANA Time Zone Database and learn more about how time and time zones are managed on Android.


*In 2018-2019 there were changes in Alaska. This is a blogpost, not technical documentation!

**Because the US and UK apply their daylight saving changes at different local times and on different days of the year.

Introducing the Fused Orientation Provider API: Consistent device orientation for all

Posted by Geoffrey Boullanger – Senior Software Engineer, Shandor Dektor – Sensors Algorithms Engineer, Martin Frassl and Benjamin Joseph – Technical Leads and Managers

Device orientation, or attitude, is used as an input signal for many use cases: virtual or augmented reality, gesture detection, or compass and navigation – any time the app needs the orientation of a device in relation to its surroundings. We’ve heard from developers that orientation is challenging to get right, with frequent user complaints when orientation is incorrect. A maps app should show the correct direction to walk towards when a user is navigating to an exciting restaurant in a foreign city!

The Fused Orientation Provider (FOP) is a new API in Google Play services that provides quality and consistent device orientation by fusing signals from accelerometer, gyroscope and magnetometer.

Although the Android Rotation Vector already provides device orientation (and will continue to do so), the new FOP provides more consistent behavior and higher performance across devices. We designed the FOP API to be similar to the Rotation Vector to make the transition as easy as possible for developers.

In particular, the Fused Orientation Provider:

    • Provides a unified implementation across devices: an API in Google Play services means that there is no implementation variance across different manufacturers. Algorithm updates can be rolled out quickly and independently of Android platform updates;
    • Directly incorporates local magnetic declination, if available;
    • Compensates for lower quality sensors and OEM implementations (e.g., gyro bias, sensor timing).

In certain cases, the FOP returns values piped through from the AOSP Rotation Vector, adapted to incorporate magnetic declination.

How to use the FOP API

Device orientation updates can be requested by creating and sending a DeviceOrientationRequest object, which defines some specifics of the request like the update period.

The FOP then outputs a stream of the device’s orientation estimates as quaternions. The orientation is referenced to geographic north. In cases where the local magnetic declination is not known (e.g., location is not available), the orientation will be relative to magnetic north.

In addition, the FOP provides the device’s heading and accuracy, which are derived from the orientation estimate. This is the same heading that is shown in Google Maps, which uses the FOP as well. We recently added changes to better cope with magnetic disturbances, to improve the reliability of the cone for Google Maps and FOP clients.

The update rate can be set by requesting a specific update period. The FOP does not guarantee a minimum or maximum update rate. For example, the update rate can be faster than requested if another app has a faster parallel request, or slower than requested if the device doesn’t support the higher rate.

For the full specification of the API, please consult the API documentation.

Example usage (Kotlin)

package ...

import android.content.Context
import com.google.android.gms.location.DeviceOrientation
import com.google.android.gms.location.DeviceOrientationListener
import com.google.android.gms.location.DeviceOrientationRequest
import com.google.android.gms.location.FusedOrientationProviderClient
import com.google.android.gms.location.LocationServices
import com.google.common.flogger.FluentLogger
import java.util.concurrent.Executors

class Example(context: Context) {
  private val logger: FluentLogger = FluentLogger.forEnclosingClass()

  // Get the FOP API client
  private val fusedOrientationProviderClient: FusedOrientationProviderClient =
    LocationServices.getFusedOrientationProviderClient(context)

  // Create an FOP listener
  private val listener: DeviceOrientationListener =
    DeviceOrientationListener { orientation: DeviceOrientation ->
      // Use the orientation object returned by the FOP, e.g.
      logger.atFinest().log("Device Orientation: %s deg", orientation.headingDegrees)
    }

  fun start() {
    // Create an FOP request
    val request =
      DeviceOrientationRequest.Builder(DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT).build()

    // Create (or re-use) an Executor or Looper, e.g.
    val executor = Executors.newSingleThreadExecutor()

    // Register the request and listener
    fusedOrientationProviderClient
      .requestOrientationUpdates(request, executor, listener)
      .addOnSuccessListener { logger.atInfo().log("FOP: Registration Success") }
      .addOnFailureListener { e: Exception? ->
        logger.atSevere().withCause(e).log("FOP: Registration Failure")
      }
  }

  fun stop() {
    // Unregister the listener
    fusedOrientationProviderClient.removeOrientationUpdates(listener)
  }
}

Technical background

The Android ecosystem has a wide variety of sensor implementations. To use the Fused Orientation Provider, devices must have an accelerometer, gyroscope, and magnetometer, and should meet the criteria in the Android Compatibility Definition Document (CDD). It is preferable that the device vendor implements the high fidelity sensor portion of the CDD.

Even though Android devices adhere to the Android CDD, recommended sensor specifications are not tight enough to fully prevent orientation inaccuracies. Examples of this include magnetometer interference from internal sources, and delayed, inaccurate or nonuniform sensor sampling. Furthermore, the environment around the device usually includes materials that distort the geomagnetic field, and user behavior can vary widely. To deal with this, the FOP performs a number of tasks in order to provide a robust and accurate orientation:

    • Synchronize sensors running on different clocks and delays;
    • Compensate for the hard iron offset (magnetometer bias);
    • Fuse accelerometer, gyroscope, and magnetometer measurements to determine the orientation of the device in the world;
    • Compensate for gyro drift (gyro bias) while moving;
    • Produce a realistic estimate of the compass heading accuracy.

We have validated our algorithms on comprehensive test data to provide a high quality result on a wide variety of devices.

Availability and limitations

The Fused Orientation Provider is available on all devices running Google Play services on Android 5 (Lollipop) and above. Developers need to add the dependency play-services-location:21.2.0 (or above) to access the new API.
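
For example, in a Gradle Kotlin DSL build script, the declaration would look roughly like this:

// build.gradle.kts (app module)
dependencies {
    implementation("com.google.android.gms:play-services-location:21.2.0")
}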

Permissions

No permissions are required to use the FOP API. The output rate is limited to 200Hz on devices running API level 31 (Android S) or higher, unless the android.permission.HIGH_SAMPLING_RATE_SENSORS permission is declared in your AndroidManifest.xml.

Power consideration

Always request the longest update period (lowest frequency) that is sufficient for your use case. While more frequent FOP updates may be required for high precision tasks (for example, augmented reality), they come with a power cost. If you do not know which update period to use, we recommend starting with DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT, as it fits most client needs.
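
As a sketch, a lower-frequency request for a power-sensitive use case could look like the following; the period unit (microseconds) is an assumption based on the builder’s long-valued parameter, so consult the API documentation before relying on it:

import com.google.android.gms.location.DeviceOrientationRequest
import java.util.concurrent.TimeUnit

// A 10 Hz request for power-sensitive use cases (period assumed to be
// in microseconds; OUTPUT_PERIOD_DEFAULT remains the recommended default).
val lowPowerRequest = DeviceOrientationRequest
    .Builder(TimeUnit.MILLISECONDS.toMicros(100))
    .build()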

Foreground behavior

FOP updates are only available to apps running in the foreground.


Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

#TheAndroidShow: the latest from MWC, Gemini Nano, Android 15 and more!

Posted by Anirudh Dewani, Director of Android Developer Relations


Last week, Android device makers released a slew of new devices, and today we’re unpacking what that means for developers, as well as the latest in Gemini Nano, Android 15, Jetpack Compose and more, in another episode of our quarterly show, #TheAndroidShow:

The latest wearables and foldables – get building!

Android device makers unveiled their latest wearables and foldables last week at Mobile World Congress, and we were on the ground in Barcelona taking a look at those new devices and how you can get started building on top of them. A few of our favorites:

    • Xiaomi Watch 2, the latest smartwatch from the Xiaomi team. This device is powered by Wear OS by Google and provides upgraded camera, fitness, and sleep experiences to allow users to get the most from their device.
    • PORSCHE DESIGN HONOR Magic V2 RSR, the world’s thinnest inward foldable smartphone. This is the latest foldable for Android and was designed with the user experience at the forefront, including human-centric eye comfort technology.

Compose is an amazing way to build apps for your users across form factors. Compose for Wear OS and the upcoming adaptive layouts for large screens help devs bring their apps to life with less code, powerful tools, and intuitive APIs. Check out the Wear OS and Large Screen galleries, where you can find UX inspiration and design guidance tailored to your type of app.




Behind the scenes, with Gemini Nano and AICore

With all of the excitement around generative AI, it can be daunting to know where to start. So in today’s show, we’re taking you behind the scenes with Gemini Nano, Google’s most efficient model built for on-device tasks, and AICore, Android’s system service for on-device foundation models. We’re also spotlighting how the team that builds the Recorder app used Gemini Nano to help summarize users’ voice memos on-device and with privacy in mind. And here’s the best part: the team built the feature in a short time with only a small number of engineers.




Now in Android

We celebrated the 100th episode of Now in Android, covering the latest developer news, including:




And that’s a wrap on another episode of our quarterly show, #TheAndroidShow. But the conversation continues on YouTube, X and LinkedIn: tell us your favorite part, or what you’d like us to dive into next time on our quarterly episode. And before we sign off, you can watch the full playlist, with the latest in Android developer news, here.

Designing your account deletion experience with users in mind

Posted by Tatiana van Maaren – Global T&S Partnerships Lead, Privacy & Security, May Smith - Product Manager, and Anita Issagholyan – Policy Lead

With millions of developers relying on our platform, Google Play is committed to keeping our ecosystem safe for everyone. That’s why, in addition to our ongoing investments in app privacy and security, we also continuously update our policies to respond to new challenges and user expectations.

For example, we recently introduced a new account deletion policy with required disclosures within the Data Safety section on the Play Store. Deleting an account should be as easy as creating one, so the new policy requires developers to provide information and web resources that help users to manage their data and understand an app's deletion practices.

To help you build trust and design a user-friendly experience that helps meet our policy requirements, consider these 5 best practices when implementing your account deletion solution.

1.     Make it seamless

Users prefer a simple and straightforward account deletion flow. Although users know that more steps may follow (such as authentication), navigating multiple screens before reaching the deletion page can be a significant barrier and create negative feelings for the user. Consider providing your account deletion option on an account settings page, or place a prominent button on the home screen. Design the flow with discoverability in mind by taking the user directly to the deletion process.

2.     Allow automatic deletion

Users feel that if they can create an account without talking to a customer service agent, they should be able to delete their account online, too. If automation is not on your roadmap just yet, consider a step-by-step deletion request form or a dedicated page to connect users with customer support.

3.     Offer guidance and explain potential implications

Users delete their accounts for various reasons, some of which may be better resolved another way. Early in your deletion flow, point your users toward a Help Center article that explains how your deletion process works in simple terms, including any potential consequences. For example, make it clear if your users will need to pause their payment method before deleting the account, or download any account data they want to keep. Helping your users understand the process in advance can prevent accidental deletions. For those who do change their minds, consider offering a way to recover their accounts within a reasonable timeframe.

Here’s an example of how Play Store developer Canva has designed its in-app deletion flow to explain the potential consequences of account deletion:

User journey on the Canva app
“User data privacy has always been important for us. We’ve always been intentional about our approach in optimizing the Canva app so our users can have more transparency and control over their data. We’re welcoming these new requirements from the Play store as we know this new flow will elevate users’ trust in our app and benefit our business in the long term.” - Will Currie, Software Engineer, Canva

4.     Confirm account deletion

Sometimes users misunderstand whether the account itself or just data collected by the app was deleted in the deletion process. Users often think that the data your app stored in the cloud will automatically be deleted at the same time as account deletion. Since it may take time to remove account data from internal company systems or comply with data retention requirements in different regions, transparency about the process can help you maintain trust in your brand and make it more likely for users to return in the future.

Here’s how SYBO Games designed their web-based account deletion flow:

User journey on the SYBO Games web resource
“We are always striving to ensure that our games provide a fun user experience, built on a solid data protection foundation. When we learned about the new account deletion update on Google Play, we thought this was a great step forward to ensure that the entire developer ecosystem optimizes for user safety. We encourage developers to carve out time to embrace these improvements and prioritize regular privacy check-ins.”  - Elizabeth Ann Schweitzer, Games Compliance Manager, SYBO Games

5.     Don’t forget user engagement

This is a great opportunity to connect with your users at a critical moment. Make sure users who have uninstalled your app can easily remove their accounts through a web resource without needing to reinstall the app. You can also invite them to complete a survey or provide feedback on their decision.

Protecting users' data is essential for building trust and loyalty. By updating the Data Safety section on Google Play and continuing to optimize user experience for account deletion, you can strengthen trust in your company while striving for the highest level of user data protection.


Thank you for your continued collaboration and feedback in developing this data transparency feature and in helping make Google Play safe for all.

Introducing a new Text-To-Speech engine on Wear OS

Posted by Ouiam Koubaa – Product Manager and Yingzhe Li – Software Engineer

Today, we’re excited to announce the release of a new Text-To-Speech (TTS) engine that is performant and reliable. Text-to-speech turns text into natural-sounding speech across more than 50 languages, powered by Google’s machine learning (ML) technology. The new text-to-speech engine on Wear OS uses smaller prosody ML models to bring faster synthesis to Wear OS devices.

Use cases for Wear OS’s text-to-speech include accessibility services, coaching cues for exercise apps, navigation cues, and reading incoming alerts aloud through the watch speaker or Bluetooth-connected headphones. The engine is meant for brief interactions; it shouldn’t be used for reading aloud a long article or a long summary of a podcast.

How to use Wear OS’s TTS

Text-to-speech has long been supported on Android. Wear OS’s new TTS has been tuned to be performant and reliable on low-memory devices. All the Android APIs are still the same, so developers use the same process to integrate it into a Wear OS app; for example, TextToSpeech#speak can be used to speak specific text, as shown below. This is available on devices that run Wear OS 4 or higher.
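
As a minimal sketch, assuming tts is an initialized TextToSpeech instance, speaking a short coaching cue looks like this:

// Interrupt any ongoing speech and speak a single short utterance.
tts.speak(
    /* text= */ "Halfway there!",
    /* queueMode= */ TextToSpeech.QUEUE_FLUSH,
    /* params= */ null,
    /* utteranceId= */ "coaching-cue"
)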

When the user interacts with the Wear OS TTS for the first time following a device boot, the synthesis engine is ready in about 10 seconds. For special cases where developers want the watch to speak immediately after opening an app or launching an experience, the following code can be used to pre-warm the TTS engine before any synthesis requests come in.

private fun initTtsEngine() {
    // Callback when the TextToSpeech connection is set up
    val callback = TextToSpeech.OnInitListener { status ->
        if (status == TextToSpeech.SUCCESS) {
            Log.i(TAG, "tts Client Initialized successfully")

            // Get the default TTS locale
            val defaultVoice = tts.voice
            if (defaultVoice == null) {
                Log.w(TAG, "defaultVoice == null")
                return@OnInitListener
            }

            // Set the TTS engine to use the default locale
            tts.language = defaultVoice.locale

            try {
                // Create a temporary file to synthesize sample text
                val tempFile =
                        File.createTempFile("tmpsynthesize", null, applicationContext.cacheDir)

                // Synthesize sample text to our file
                tts.synthesizeToFile(
                        /* text= */ "1 2 3", // Some sample text
                        /* params= */ null, // No params necessary for a sample request
                        /* file= */ tempFile,
                        /* utteranceId= */ "sampletext"
                )

                // And clean up the file
                tempFile.deleteOnExit()
            } catch (e: Exception) {
                Log.e(TAG, "Unhandled exception: ", e)
            }
        }
    }

    tts = TextToSpeech(applicationContext, callback)
}

When you are done using TTS, release the engine by calling tts.shutdown() in your activity’s onDestroy() method. This also frees the engine’s resources when the user closes the app.
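
For example, in an activity this can be as simple as:

override fun onDestroy() {
    // Release the TTS engine's resources once they are no longer needed.
    tts.shutdown()
    super.onDestroy()
}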

Languages and Locales

By default, Wear OS TTS includes 7 pre-loaded languages in the system image: English, Spanish, French, Italian, German, Japanese, and Mandarin Chinese. OEMs may choose to preload a different set of languages. You can check what languages are available by using TextToSpeech#getAvailableLanguages(). During watch setup, if the user selects a system language that is not a pre-loaded voice file, the watch automatically downloads the corresponding voice file the first time the user connects to Wi-Fi while charging their watch.

There are limited cases where the speech output may differ from the user’s system language. For example, in a scenario where a safety app uses TTS to call emergency responders, developers might want to synthesize speech in the language of the locale the user is in, not the language the watch is set to. To synthesize text in a different language from system settings, use TextToSpeech#setLanguage(java.util.Locale).
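
As a sketch, assuming currentLocale comes from your own locale-detection logic, switching the synthesis language before speaking might look like this:

import java.util.Locale

fun speakInLocale(tts: TextToSpeech, currentLocale: Locale, text: String) {
    // setLanguage returns a negative code (e.g. LANG_NOT_SUPPORTED) on failure.
    if (tts.setLanguage(currentLocale) >= TextToSpeech.LANG_AVAILABLE) {
        tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "localized-utterance")
    }
}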

Conclusion

Your Wear OS apps now have the power to talk, either directly from the watch’s speakers or through Bluetooth connected headphones. Learn more about using TTS.

We look forward to seeing how you use the Text-To-Speech engine to create more helpful and engaging experiences for your users on Wear OS!


Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

Embracing Android 14: Meta’s Early Adoption Empowered Enhanced User Experience

Posted by Terence Zhang – Developer Relations Engineer, Google; in partnership with Tina Ho - Partner Engineering, TPM and Kun Wang – Partner Engineering, Partner Engineer

With the first Developer Preview of Android 15 now released, another Android release that brings new features and under-the-hood improvements to billions of users worldwide is coming shortly. As Android developers, you are key players in this evolution; by staying on top of the targetSDK upgrade cycle, you are making sure that your users have the best possible experience.

The way Meta, the parent company of Instagram, Facebook, WhatsApp, and Messenger, approached Android 14 provides a blueprint for both developer success and user satisfaction. Meta improved its targetSDK adoption velocity by 4x, so to understand how the company did it, we spoke to the team at Meta, with an eye toward insights that all developers can build into their testing programs.

Meta’s journey on A14: A blueprint for faster adoption

When Android 11 launched, some of Meta’s apps experienced challenges with existing features, such as Chat Heads, and with new requirements, like scoped storage integration. Fixing these issues was complicated by slow developer tooling adoption and a decentralized app strategy. This experience motivated Meta to create an internal Android OS Readiness Program which focuses on prioritizing early and thorough testing throughout the Android release window and accelerating their apps’ targetSDK adoption.

The program officially launched last year. By compiling apps against each Android 14 beta and conducting thorough automated and smoke tests to proactively identify potential issues, Meta was able to seamlessly adopt new Android 14 features, like foreground service types, and send timely feedback and bug reports to the Android team, contributing to improvements in the OS.

Meta also accelerated their targetSDK adoption for Android 14—updating Messenger, Facebook, and Instagram within one to two months of the AOSP release, compared to seven to nine months for Android 12 (a velocity increase of more than 4x!). Meta’s newly created readiness program unlocked this achievement by working across each app to adopt the latest Android changes while still maintaining compatibility. For example, by automating and simplifying their SDK release process, Meta was able to cut rollout time from three weeks to under three hours, enhancing cooperation between individual app teams by providing immediate access to the latest SDKs and allowing for rapid testing of new OS features. The centralized approach also meant Threads adopted Android 14 support quickly, despite the fast-growing new app being supported by a minimal team.

Reaping the rewards: The impact on users

Meta's early targetSDK adoption strategy delivers significant benefits for users as well. Here's how:

    • Improved reliability and compatibility: Early adoption of Android previews and betas prevented surprises near the OS launch, guaranteeing a smooth day-one experience for users upgrading to the latest Android version. For example, with partial media permissions, Meta's extensive experimentation with permission flows ensured “users felt informed about the change and in control over their privacy settings,” while maximizing the app's media-sharing functionality.

    • Robust experimentation with new release features: Early Android release adoption gave Meta ample time to collaborate across privacy, design, and content strategy teams, enabling them to thoughtfully integrate the new Android features that come with every release. Meta’s rollout of the Ultra HDR image experience on Instagram within three months of the platform release, in an “Android first” manner, is a great example of this, delighting users with brighter and richer colors and a higher dynamic range in their Instagram posts and stories.
Meta's adoption of Ultra HDR in Android 14 brings brighter colors and dynamic range to Instagram posts and stories.

Embrace the latest Android versions

Meta's journey highlights the compelling reasons for Android developers to adopt a similar forward-thinking mindset in working with the Android betas:

    • Test your apps early: Anticipate Android OS changes and ensure your apps are prepared for the latest targetSDK as soon as it becomes available, creating a seamless transition for users who update to the newest Android version.

    • Utilize latest tools to optimize user experience: Test your apps thoroughly against each beta to identify and address any potential issues. Check the Android Studio Upgrade Assistant to highlight major breaking changes in each targetSDKVersion, and integrate the compatibility framework tool into your testing process to help uncover potential app issues in the new OS version.

    • Collaborate with Google: Provide your valuable feedback and bug reports using the Google issue tracker to contribute directly to the improvement of the Android ecosystem.

We encourage you to take full advantage of the Android Developer Previews & Betas program, starting with the newly-released Android 15 Developer Preview 1.

The team behind the success

A big thank you to the entire Meta team for their collaboration in Android 14 and in writing this blog! We’d especially like to recognize the following folks from Meta for their outstanding contributions in establishing a culture of early adoption:

    • Tushar Varshney - Partner Engineering, Partner Engineer
    • Allen Bae - Partner Engineering, EM
    • Abel Del Pino - Facebook, SWE
    • Matias Hanco - Facebook, SWE
    • Summer Kitahara - Instagram, SWE
    • Tom Rozanski - Messenger, SWE
    • Ashish Gupta - WhatsApp, SWE
    • Daniel Hill - Mobile Infra, SWE
    • Jason Tang - Facebook, SWE
    • Jane Li - Meta Quest, SWE

Android Studio Iguana is stable

Posted by Neville Sicard-Gregory – Senior Product Manager, Android Studio

Today we are launching Android Studio Iguana 🦎 in the stable release channel to make it easier for you to create high quality apps. With features ranging from Version Control System support in App Quality Insights to new built-in support for creating Baseline Profiles for Jetpack Compose apps, this version enhances your development workflow as you optimize your app. Download the latest version today!

Check out the list of new features in Android Studio Iguana below, organized by key developer flows.

Debugging

Version control system integration in App Quality Insights

When your release build is several commits behind your local source code, line numbers in Firebase Crashlytics crash reports can easily go stale, making it more difficult to accurately navigate from crash to code when using App Quality Insights. If you’re using git for your version control, there’s now a solution to this problem.

When you build your app using Android Gradle Plugin 8.3 or later and the latest version of the Crashlytics SDK, AGP includes git commit information as part of the build artifact that is published to the Play Store. When a crash occurs, Crashlytics attaches the git information to the report, and Android Studio Iguana uses this information to compare your local checkout with the exact code that caused the crash from your git history.

After you build your app using Android Gradle Plugin 8.3 or higher with the latest Crashlytics SDK, and publish it, new crash reports in the App Quality Insights window let you either navigate to the line of code in your current git checkout or view a diff report between the current checkout and the version of your app codebase that generated the crash report. Learn more.

App Quality Insights with Version Control System Integration

View Crashlytics crash variants in App Quality Insights

Crash variants in App Quality Insights

Today, when you select a Crashlytics issue in App Quality Insights, you see aggregated data from events that share identical points of failure in your code, but may have different root causes. To aid in your analysis of the root causes of a crash, Crashlytics now groups events that share very similar stack traces as issue variants. You can now view events in each variant of a crash report in App Quality Insights by selecting a variant from the dropdown. Alternatively, you can view aggregate information for all variants by selecting All.

Design

Jetpack Compose UI Check

To help developers build adaptive and accessible UI in Jetpack Compose, Iguana introduces a new UI Check mode in Compose Preview. This feature works similarly to visual linting and accessibility checks integrations for views. Activate Compose UI check mode to automatically audit your Compose UI and check for adaptive and accessibility issues across different screen sizes, such as text that's stretched on large screens or low color contrast. The mode highlights issues found in different preview configurations and lists them in the problems panel.

Try it out by clicking the UI Check icon in Compose Preview.

UI Check entry point in Compose Preview

UI Check results of Reply App in Compose Preview

Progressive rendering for Compose Preview

Compose Previews in Android Studio Iguana now implement progressive rendering, allowing you to iterate on your designs with less loading time. This feature automatically lowers the detail of out-of-view previews to boost performance, meaning you can scroll through even the most complex layouts without lag.

Progressive Rendering in Compose

Develop

IntelliJ Platform Update

Android Studio Iguana includes the IntelliJ 2023.2 platform release, which has many new features, such as support for GitLab, text search in Search Everywhere, color customization updates to the new UI, and a host of other improvements. Learn more.

Testing

Baseline Profiles module wizard

Many times when you run an Android app for the first time on a device, the app can appear to have a slow start time because the operating system has to run just-in-time compilation. To improve this situation, you can create Baseline Profiles that help Android improve aspects like app start-up time, scrolling, and navigation speed in your apps. We are simplifying the process of setting up a Baseline Profile by offering a new Baseline Profile Generator template in the new module wizard (File > New > New Module). This template configures your project to support Baseline Profiles and employs the latest Baseline Profiles Gradle plugin, which simplifies setup by automating required tasks with a single Gradle command.
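
As a rough sketch, the wiring the wizard generates looks something like this in the new module’s build script (plugin and property names from the Baseline Profiles Gradle plugin; your generated file may differ):

// build.gradle.kts of the generated Baseline Profile module (sketch)
plugins {
    id("com.android.test")
    id("androidx.baselineprofile")
}

android {
    // The app module the profile is generated for.
    targetProjectPath = ":app"
}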

Baseline Profile Generator in the new module wizard

Furthermore, the template creates a run configuration that enables you to generate a Baseline Profile with a single click from the "Select Run/Debug Configuration" dropdown list.

Generate Baseline Profile drop-down menu

Test against configuration changes with the Espresso Device API

Synchronous testing of window size changes using Espresso Device API

Catch layout problems early and ensure your app delivers a seamless user experience across devices and orientations. The Espresso Device API simulates how your app reacts to configuration changes—such as screen rotation, device folding/unfolding, or window size changes—in a synchronous way on virtual devices. These APIs help you rigorously test and preemptively fix issues that frustrate users so you build more reliable Android apps with confidence. These APIs are built on top of new gRPC endpoints introduced in Android Emulator 34.2, which enables secure bidirectional data streaming and precise sensor simulation.

Pixel 8 and Pixel 8 Pro devices in Android Emulator (34.2)

Test your app on the latest Google Pixel device configurations with the updated Android Virtual Device definitions in Android Studio. With Android Studio Iguana and the latest Android Emulator (34.2+), access the Pixel Fold, Pixel Tablet, Pixel 7a, Pixel 8, and Pixel 8 Pro. Validating your app on these virtual devices is a convenient way to ensure that your app reacts correctly to a variety of screen sizes and device types.

New Pixel Android Virtual Devices in the Android Emulator.

Build

Support for Gradle Version Catalogs

Android Studio Iguana streamlines dependency management with its enhanced support for TOML-based Gradle Version Catalogs. You'll benefit from:

    • Centralized dependency management: Keep all your project's dependencies organized in a single file for easier editing and updating.
    • Time-saving features: Enjoy seamless code completion, smart navigation within your code, and the ability to quickly edit project dependencies through the convenient Project Structure dialog.
    • Increased efficiency: Say goodbye to scattered dependencies and manual version updates. Version catalogs give you a more manageable, efficient development workflow.

New projects will automatically use version catalogs for dependency management. If you have an existing project, consider making the switch to benefit from these workflow improvements. To learn how to update to Gradle version catalogs, see Migrate your build to version catalogs.
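
As a brief sketch, a catalog entry and its type-safe accessor look roughly like this (library and version names are illustrative):

// gradle/libs.versions.toml (TOML, reproduced here as a comment):
//
// [versions]
// coroutines = "1.8.0"
//
// [libraries]
// kotlinx-coroutines = { module = "org.jetbrains.kotlinx:kotlinx-coroutines-core", version.ref = "coroutines" }

// build.gradle.kts: the catalog entry becomes a type-safe accessor
dependencies {
    implementation(libs.kotlinx.coroutines)
}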

Additional SDK insights: policy issues

Android Studio Iguana now proactively alerts you to potential Google Play policy violations through integration with the Google Play SDK Index. Easily see Play policy issues right in your build files and Project Structure Dialog. This streamlines compliance, helping you avoid unexpected publishing delays or rejections on the Google Play Store.

A warning from the Google Play SDK Index in Android Studio’s Project Structure dialog

Android Studio compileSdk version support

Using Android Studio to develop a project that has an unsupported compileSdk version can lead to unexpected errors because older versions of Android Studio may not handle the new Android SDK correctly. To avoid these issues, Android Studio Iguana now explicitly warns you if your project’s intended compileSdk is for a newer version that it does not officially support. If available, it also suggests moving to a version of Android Studio that supports the compileSdk used by your project. Keep in mind that upgrading Android Studio might also require that you upgrade AGP.

Summary

To recap, Android Studio Iguana 🦎 includes the following enhancements and features:

Debugging

    • Version control system integration in App Quality Insights
    • Crashlytics crash variants in App Quality Insights

Design

    • Jetpack Compose UI Check
    • Progressive rendering for Compose Preview

Develop

    • IntelliJ platform update

Testing

    • Baseline Profiles module wizard
    • Espresso Device API for testing against configuration changes
    • Pixel 8 and Pixel 8 Pro devices in Android Emulator (34.2)

Build

    • Support for Gradle Version Catalogs
    • Additional SDK insights: policy issues
    • compileSdk version support

Download Android Studio Today

Download Android Studio Iguana 🦎 today and take advantage of the latest features to streamline your workflow and help you make better apps. Your feedback is essential – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X (formerly known as Twitter). Let's build the future of Android apps together!

New ways to annotate Google Docs

What’s changing

We’re excited to announce a new feature, markups in Google Docs, which gives you more flexibility when providing feedback in a document. The new markups experience lets you add handwritten annotations to documents with a stylus or your finger when using an Android device. 

Markups can be useful in numerous scenarios, such as: 
  • Colleagues giving each other handwritten feedback on diagrams, charts, reports or proposals. 
  • Educators giving students feedback on their essays, reports, short stories and more. 
  • A homeowner providing ideas or updates on construction plans from their contractor. 
In order to add annotations, you must use an Android device. From there, you can: 
  • Enter the markups mode and annotate using pen or highlighter tools 
  • Hide/show markups 
  • Erase markups 
  • Insert suggested markups 
Markups on Android


On iOS devices, you can: 
  • View a document with markups 
  • Delete markups 
  • Hide/show markups 
Markups on iOS

On desktop, you can: 
  • View a document with markups 
  • Delete markups 
  • Hide/show markups 
Markups on desktop

Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: To turn on markups, open a document > select the markups tool from the contextual toolbar > draw with your finger or stylus. 

Rollout pace 


Availability 

  • Available to all Google Workspace customers, Google Workspace Individual subscribers, and users with personal Google accounts