Tag Archives: developer tools

How Instagram enabled users to take stunning Low Light Photos

Posted by Donovan McMurray – Developer Relations Engineer

Instagram, the popular photo and video sharing social networking service, is constantly delighting users with a best-in-class camera experience. Recently, Instagram launched another improvement on Android with their Night Mode implementation.

As devices and their cameras become more and more capable, users expect better quality images in a wider variety of settings. Whether it’s a night out with friends or the calmness right after you get your baby to fall asleep, the special moments users want to capture often don’t have ideal lighting conditions.

Now, when Instagram users on Android take a photo in low-light environments, they’ll see a moon icon that allows them to activate Night Mode for better image quality. This feature is currently available to users with any Pixel device from the 6 series and up, a Samsung Galaxy S24 Ultra, or a Samsung Flip6 or Fold6, with more devices to follow.

Moving image showing the user experience of taking a photo of a shelf with plants, oranges, and decorative items in low light

Leveraging Device-specific Camera Technologies

Android enables apps to take advantage of device-specific camera features through the Camera Extensions API. The Extensions framework currently provides functionality like Night Mode for low-light image captures, Bokeh for applying portrait-style background blur, and Face Retouch for beauty filters. All of these features are implemented by the Original Equipment Manufacturers (OEMs) in order to maximize the quality of each feature on the hardware it's running on.

A quote by Nilesh Patel, Software Engineer, reads: 'For Meta's billions of users, having to write custom code for each new device is simply not scalable. It would also add unnecessary app size when Meta users download the app. Hence our guideline is ‘write once to scale to billions’, favoring platform APIs.' A headshot of Nilesh Patel is displayed to the right of the quote card.

Furthermore, exposing this OEM-specific functionality through the Extensions API allows developers to use a consistent implementation across all of these devices, getting the best of both worlds: implementations that are tuned to a wide range of devices with a unified API surface. According to Nilesh Patel, a Software Engineer at Instagram, “for Meta’s billions of users, having to write custom code for each new device is simply not scalable. It would also add unnecessary app size when Meta users download the app. Hence our guideline is ‘write once to scale to billions’, favoring platform APIs.”

More and more OEMs are supporting Extensions, too! There are already over 120 different devices that support Camera Extensions, representing over 75 million monthly active users. There’s never been a better time to integrate Extensions into your Android app to give your users the best possible camera experience.

Impact on Instagram

The results of adding Night Mode have been very positive for Instagram users. Jin Cui, a Partner Engineer on Instagram, said “Night Mode has increased the number of photos captured and shared with the Instagram camera, since the quality of the photos are now visibly better in low-light scenes.”

A quote from Jin Cui, Partner Engineer, reads: 'Night Mode has increased the number of photos captured and shared with the Instagram camera, since the quality of the photos are now visibly better in low-light scenes.'  A photo of Jin Cui wearing glasses and a maroon hoodie is shown to the right of the quote card.

Compare the following photos to see just how big of a difference Night Mode makes. The first photo is taken in Instagram with Night Mode off, the second photo is taken in Instagram with Night Mode on, and the third photo is taken with the native camera app with the device’s own low-light processing enabled.

A 3x3 grid of photos compares low-light performance across different smartphone cameras and Instagram's night mode. The photos show a shelf with plants, oranges, and decorative items, taken with a Pixel 9 Pro, Samsung Galaxy S24 Ultra, and Pixel 6 Pro, both with and without night mode enabled.

Ensuring Quality through Image Test Suite (ITS)

The Android Camera Image Test Suite (ITS) is a framework for testing images from Android cameras. ITS tests configure the camera and capture shots to verify expected image data. These tests are functional and ensure advertised camera features work as expected. A tablet mounted on one side of the ITS box displays the test chart. The device under test is mounted on the opposite side of the ITS box.

For apps to use a feature that a device claims to support, the device must first pass the ITS tests for that feature, including the tests we have for the Night Mode Camera Extension.

Regular field-of-view (RFoV) ITS box Rev1b showing the device mounting brackets

The Android Camera team faced the challenge of ensuring the Night Mode Camera Extension feature functioned consistently across all devices in a scalable way. This required creating a testing environment with very low light and a wide dynamic range. This configuration was necessary to simulate real-world lighting scenarios, such as a city at night with varying levels of brightness and shadow, or the atmospheric lighting of a restaurant.

The first step in designing the test was to define the specific lighting conditions to simulate. Field testing with a light meter in various locations and lighting conditions was conducted to determine the target lux level. The goal was to ensure the camera could capture clear images in low-light conditions, which led to the establishment of 3 lux as the target level. The figure below shows various lighting conditions and their respective lux values.

Evaluation of scenes of varying lighting conditions measured with a Light Meter

The next step was to develop a test chart that could accurately measure a wide dynamic range in a low-light environment. The team iterated on several designs and arrived at the chart shown below: a grid of squares in varying shades of grey, with a red outline defining the test area so that darker external regions can be cropped out. The grid follows a Hilbert curve pattern to minimize abrupt transitions between light and dark squares. This design allows for both quantitative measurements and simulation of a broad range of lighting conditions.

Low Light test chart displayed on tablet in ITS box
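The exact chart-generation code isn’t published, but the Hilbert curve ordering itself is a standard algorithm. Purely as an illustration, the following Kotlin sketch shows the usual mapping from a Hilbert-curve index to (x, y) grid coordinates; a chart generator could assign grey shades in index order so that neighboring squares change brightness gradually.

// Illustrative only: map a distance d along a Hilbert curve to (x, y)
// coordinates on a grid of side length `side` (a power of 2). This is the
// standard iterative algorithm, not code from the ITS suite.
fun hilbertIndexToXY(side: Int, d: Int): Pair<Int, Int> {
    var x = 0
    var y = 0
    var t = d
    var s = 1
    while (s < side) {
        val rx = 1 and (t / 2)
        val ry = 1 and (t xor rx)
        if (ry == 0) {
            // Rotate (and possibly flip) the quadrant.
            if (rx == 1) {
                x = s - 1 - x
                y = s - 1 - y
            }
            val tmp = x
            x = y
            y = tmp
        }
        x += s * rx
        y += s * ry
        t /= 4
        s *= 2
    }
    return x to y
}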

The test captures an image of this chart using the Night Mode Camera Extension in low-light conditions. The image is used to evaluate the improvement in the shadows and midtones while ensuring the highlights aren’t saturated. The evaluation involves two criteria: the average luma value of the six darkest boxes must be at least 85, and the average luma contrast between these boxes must be at least 17. The figure below shows the test capture and chart results.

Night Mode Camera Extension capture and test chart result
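The real ITS tests are written in Python as part of the suite described above. Purely to make the two thresholds concrete, here is a Kotlin sketch of the pass/fail check, assuming the average luma (0-255) of each of the six darkest boxes has already been measured; the "average contrast" reading below (the mean step between adjacent boxes) is one plausible interpretation, not the suite’s exact formula.

// Illustrative check of the two documented criteria.
fun passesNightModeLumaCheck(sixDarkestBoxLumas: List<Double>): Boolean {
    require(sixDarkestBoxLumas.size == 6) { "Expected the six darkest boxes" }
    val sorted = sixDarkestBoxLumas.sorted()
    // Criterion 1: average luma of the six darkest boxes is at least 85.
    val averageLuma = sorted.average()
    // Criterion 2: average luma contrast between the boxes is at least 17,
    // read here as the mean step between adjacent boxes in sorted order.
    val averageStep = sorted.zipWithNext { a, b -> b - a }.average()
    return averageLuma >= 85.0 && averageStep >= 17.0
}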

By leveraging the existing ITS infrastructure, the Android Camera team was able to provide consistent, high quality Night Mode Camera Extension captures. This gives application developers the confidence to integrate and enable Night Mode captures for their users. It also allows OEMs to validate their implementations and ensure users get the best quality capture.

How to Implement Night Mode with Camera Extensions

Camera Extensions are available to apps built with Camera2 or CameraX. In this section, we’ll walk through each of the features Instagram implemented. The code examples will use CameraX, but you’ll find links to the Camera2 documentation at each step.

Enabling Night Mode Extension

Night Mode involves combining multiple exposures into a single still photo for better quality shots in low-light environments. So first, you’ll need to check for Night Mode availability, and tell the camera system to start a Camera Extension session. With CameraX, this is done with an ExtensionsManager instead of the standard CameraManager.

private suspend fun setUpCamera() {
  // Obtain an instance of a process camera provider. The camera provider
  // provides access to the set of cameras associated with the device.
  // The camera obtained from the provider will be bound to the activity lifecycle.
  val cameraProvider = ProcessCameraProvider.getInstance(application).await()

  // Obtain an instance of the extensions manager. The extensions manager 
  // enables a camera to use extension capabilities available on the device.
  val extensionsManager = ExtensionsManager.getInstanceAsync(
    application, cameraProvider).await()

  // Select the camera.
  val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

  // Query if extension is available. Not all devices will support 
  // extensions or might only support a subset of extensions.
  if (extensionsManager.isExtensionAvailable(cameraSelector, ExtensionMode.NIGHT)) {
    // Unbind all use cases before enabling different extension modes.
    try {
      cameraProvider.unbindAll()

      // Retrieve a night extension enabled camera selector
      val nightCameraSelector = extensionsManager.getExtensionEnabledCameraSelector(
        cameraSelector,
        ExtensionMode.NIGHT
      )

      // Bind image capture and preview use cases with the extension enabled camera
      // selector.
      val imageCapture = ImageCapture.Builder().build()
      val preview = Preview.Builder().build()
        
      // Connect the preview to receive the surface the camera outputs the frames
      // to. This will allow displaying the camera frames in either a TextureView
      // or SurfaceView. The SurfaceProvider can be obtained from the PreviewView.
      preview.setSurfaceProvider(surfaceProvider)

      // Returns an instance of the camera bound to the lifecycle
      // Use this camera object to control various operations with the camera
      // Example: flash, zoom, focus metering etc.
      val camera = cameraProvider.bindToLifecycle(
        lifecycleOwner,
        nightCameraSelector,
        imageCapture,
        preview
      )
    } catch (e: Exception) {
      Log.e(TAG, "Use case binding failed", e)
    }
  } else {
    // In the case where the extension isn't available, you should set up
    // CameraX normally with non-extension-enabled CameraSelector.
  }
}

To do this in Camera2, see the Create a CameraExtensionSession with the Camera2 Extensions API guide.

Implementing the Progress Bar and PostView Image

For an even more elevated user experience, you can provide feedback while the Night Mode capture is processing. In Android 14, we added callbacks for capture progress and for postview, a temporary image available before the Night Mode processing is complete. The code below shows how to use these callbacks in the takePicture() method. The actual implementation to update the UI is very app-dependent, so we’ll leave the actual UI updating code to you.

// When setting up the ImageCapture.Builder, set postviewEnabled and
// postviewResolutionSelector in order to get a PostView bitmap in the
// onPostviewBitmapAvailable callback when takePicture() is called.
val cameraInfo = cameraProvider.getCameraInfo(cameraSelector)
val isPostviewSupported =
  ImageCapture.getImageCaptureCapabilities(cameraInfo).isPostviewSupported

val postviewResolutionSelector = ResolutionSelector.Builder()
  // RATIO_16_9_FALLBACK_AUTO_STRATEGY is a predefined AspectRatioStrategy,
  // so it can be passed directly.
  .setAspectRatioStrategy(AspectRatioStrategy.RATIO_16_9_FALLBACK_AUTO_STRATEGY)
  .setResolutionStrategy(ResolutionStrategy(
    previewSize, 
    ResolutionStrategy.FALLBACK_RULE_CLOSEST_LOWER_THEN_HIGHER
  ))
  .build()

imageCapture = ImageCapture.Builder()
  .setTargetAspectRatio(AspectRatio.RATIO_16_9)
  .setPostviewEnabled(isPostviewSupported)
  .setPostviewResolutionSelector(postviewResolutionSelector)
  .build()

// When the Night Mode photo is being taken, define these additional callbacks
// to implement PostView and a progress indicator in your app.
imageCapture.takePicture(
  outputFileOptions,
  Dispatchers.Default.asExecutor(),
  object : ImageCapture.OnImageSavedCallback {
    override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
      // The final Night Mode image has been saved; update your UI with the result.
    }

    override fun onError(exception: ImageCaptureException) {
      // Handle the capture error, e.g. show a message and reset the UI.
    }

    override fun onPostviewBitmapAvailable(bitmap: Bitmap) {
      // Add the Bitmap to your UI as a placeholder while the final result is processed
    }

    override fun onCaptureProcessProgressed(progress: Int) {
      // Use the progress value to update your UI; values go from 0 to 100.
    }
  }
)

To accomplish this in Camera2, see the CameraFragment.kt file in the Camera2Extensions sample app.

Implementing the Moon Icon Indicator

Another user-focused design touch is showing the moon icon to let the user know that a Night Mode capture will happen. It’s also a good idea to let the user tap the moon icon to disable Night Mode capture. There’s an upcoming API in Android 16 next year to let you know when the device is in a low-light environment.

Here are the possible values for the Night Mode Indicator API:

      UNKNOWN

      • The camera is unable to reliably detect the lighting conditions of the current scene to determine if a photo will benefit from a Night Mode Camera Extension capture.

      OFF

      • The camera has detected lighting conditions that are sufficiently bright. Night Mode Camera Extension is available but may not be able to optimize the camera settings to take a higher quality photo.

      ON

      • The camera has detected low-light conditions. It is recommended to use Night Mode Camera Extension to optimize the camera settings to take a high-quality photo in the dark.
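Since this API isn’t final yet, its exact names aren’t public. Purely as a hypothetical sketch (the enum and function names below are invented for illustration, not the actual Android 16 API), an app might react to the three states like this:

// Hypothetical enum mirroring the three documented values.
enum class NightModeIndicator { UNKNOWN, OFF, ON }

// Hypothetical sketch only: update the moon icon based on the indicator.
fun updateMoonIcon(indicator: NightModeIndicator) {
    when (indicator) {
        NightModeIndicator.ON -> {
            // Low light detected: show the moon icon and default to a
            // Night Mode capture, letting the user tap to disable it.
        }
        NightModeIndicator.OFF -> {
            // The scene is bright enough: hide the moon icon and take a
            // standard capture.
        }
        NightModeIndicator.UNKNOWN -> {
            // Lighting can't be reliably determined: hide the icon and
            // fall back to the standard capture path.
        }
    }
}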

Next Steps

Read more about Android’s camera APIs in the Camera2 guides and the CameraX guides. Once you’ve got the basics down, check out the Android Camera and Media Dev Center to take your camera app development to the next level. For more details on upcoming Android features, like the Night Mode Indicator API, get started with the Android 16 Preview program.

Four Tips to Help You Build High-Quality, Engaging, and Age-Appropriate Apps

Posted by Mindy Brooks – Senior Director, Android Platform

App developers play a vital role in shaping how people of all ages interact with technology. Whether your app content is specifically designed for kids or simply attracts their attention, there is an added responsibility to ensure a safe and trusted experience. Google is here to support you in that work. Today, we’re sharing some important reminders and updates on how we empower developers to build high-quality, engaging, and age-appropriate apps across the Android ecosystem.

Help Determine Android User Age with Digital IDs

Understanding a user's age range can be critical for providing minors with safer and more appropriate app experiences, as well as complying with local age-related regulations. Android’s new Credential Manager API, now in Beta, addresses this challenge by helping developers verify a user’s age with a digital ID saved to any digital wallet application. Importantly, Android’s Credential Manager was built with both safety and privacy at its core – it minimizes data exposure by only sharing information necessary with developers and asks the user for explicit permission to share an age signal. We encourage you to try out the Beta API for yourself and look forward to hearing your feedback.
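As a rough sketch of what an age-signal request could look like with the Jetpack androidx.credentials library (the request JSON shape here is illustrative, and the exact protocol format should be taken from the Credential Manager documentation):

// A minimal sketch, assuming the androidx.credentials Beta artifacts.
import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.DigitalCredential
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetDigitalCredentialOption
import androidx.credentials.exceptions.GetCredentialException

suspend fun requestAgeSignal(context: Context, requestJson: String): String? {
    val credentialManager = CredentialManager.create(context)
    // requestJson describes the age attribute being requested; its shape
    // is protocol-specific and illustrative here.
    val request = GetCredentialRequest(
        listOf(GetDigitalCredentialOption(requestJson = requestJson))
    )
    return try {
        // The system shows a consent UI; the user explicitly approves
        // sharing the age signal from their digital wallet.
        val response = credentialManager.getCredential(context, request)
        // The returned credential should be verified server-side.
        (response.credential as? DigitalCredential)?.credentialJson
    } catch (e: GetCredentialException) {
        null // The user declined or no suitable credential was available.
    }
}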

While digital IDs are still in their early days, we’re continuing to work with governments on further adoption to strengthen this solution. Android is also exploring how the API can support a range of age assurance methods, helping developers to safely confirm the age of their users, especially for users that can't or don't want to use a digital ID. Please keep in mind that ID-based solutions are just one tool that developers can use to determine age and the best approach will depend on your app.

A diagram showing the flow of information between a user, their Android device, and a developer's app when using the Credential Manager API. The diagram shows how a digital ID from a user's digital wallet is used to provide app information to the developer's app.

Shield Young Users from Inappropriate Content on Google Play

As part of our continued commitment to creating a safe and positive environment for children across the Play Store, we recently launched the Restrict Declared Minors (RDM) setting within the Google Play Console that allows developers to designate their app as inappropriate for minors. When enabled, Google Play users with declared ages below 18 will not be able to download or purchase the app nor will they be able to continue subscriptions or make new purchases if the app is already installed.

Beyond Play’s broader kids safety policies, this new setting gives developers an additional tool to proactively prevent minors from accessing content that may be unsuitable for them. It also empowers developers to take a more proactive role in ensuring their apps reach the appropriate audience. As a reminder, this feature is simply one tool of many to keep your apps safe and we are continuing to improve it based on early feedback. Developers remain solely responsible for compliance with relevant laws and regulations. You can learn more about opting in to RDM here.

Develop Teacher Approved Apps and Games on Google Play

Great content for kids can take many forms, whether that’s sparking curiosity, helping kids learn, or just plain fun. Google Play’s Teacher Approved program highlights high-quality apps that are reviewed and rated by teachers and child development specialists. Our team of teachers and experts across the world review and rate apps on factors like age-appropriateness, quality of experience, enrichment, and delight. For added transparency, we include information in the app listing about why the app was rated highly to help parents determine if the app is right for their child. Apps in the program also must meet strict privacy and security requirements.

Building a teacher-approved app not only helps raise app quality for kids – it can also increase your reach and engagement. All apps in this program are eligible to appear and be featured on Google Play's Kids tab where families go to easily discover quality apps and games. Please visit Google Play Academy for more information about how to design high-quality apps for kids.


Stay Updated With Google Play’s Families Policies

Google Play policies provide additional protections for children and families. Our Families policies require that apps and games targeted to children have appropriate content, show ads suitable for children, and meet other requirements including ones related to personally identifiable information. We frequently update and strengthen these policies to ensure that Google Play remains a place where families can find safe and high-quality content for their children. This includes our new Child Safety Standards Policy for social and dating apps that goes into effect in January.

Developers can showcase compliance with Play’s Families policies with a special badge on the Google Play Data safety section. This is another great way that you can better help families find apps that meet their needs, while supporting Play’s commitment to provide users more transparency and control over their data. To display the badge, please visit the "Security practices" section of your Data Safety form in your Google Play Developer Console.

A mobile phone screen displays an app's data safety information, including data encryption, deletion options, and adherence to Play Families Policy. The 'Data safety' section is expanded within the app's details page.

Additional Resources

Protecting kids online is a responsibility we all share and we hope these reminders are helpful as you prepare for 2025. We’re grateful for your partnership in making Android and Google Play fantastic platforms for delightful, high-quality content for kids and families. For more resources:


A Smoother Ride: Android Emulator Stability and Performance Updates

Posted by Neville Sicard-Gregory – Senior Product Manager, Android Studio


Looking for a more stable, reliable, and performant Emulator? Download the latest version of Android Studio or ensure your Emulator is up to date in the SDK Manager.

A split screen shows Kotlin code on the left and the corresponding Android app display on the right in Android Studio. The app displays the Google Play Store, Photos, YouTube, Gmail, and Chrome icons.

We know how critical the stability, reliability, and performance of the Android Emulator are to your everyday work as an Android developer. After listening to valuable feedback, the Android Studio team took a step back from large feature work on the Android Emulator for six months and started an initiative called Project Quartz. This initiative was made up of several workstreams aimed at reducing crashes, speeding up startup time, closing out bugs, and setting up better ways to detect and prevent issues in the future.

Improved stability and reliability

A key goal of Project Quartz was to reduce Emulator crashes, which frustrate and block developers and decrease their productivity. We focused on fixing issues causing backend and UI crashes and freezes, updated the UI framework, our hypervisor framework, and our graphics libraries, and eliminated tech debt. This included:

    • Moving to a newer version of Qt, the cross-platform framework for building the graphical user interfaces of the Android Emulator, and making it stable on all platforms (as of version 34.2.13). This was also a required change to ensure things like Google Maps and the location settings UI continued to work in the Android Emulator.
    • Updating gfxstream, the graphics rendering system used in the Android Emulator, to improve our graphics layer.
    • Adding more than 600 end-to-end tests to the existing pytests test suite.

As a result, we have seen 30% fewer crashes in the latest stable version of Android Studio, as reported by developers who have opted-in to sharing crash details with us. Along with additional end-to-end testing, this means a more stable, reliable, and higher quality experience with fewer interruptions while using the Android Emulator to test your apps.

A horizontal bar graph showing the reported crash rates of different versions of the Android Emulator

This chart illustrates the reduction in reported crashes by stable versions of the Android Emulator (newer versions are at the top and shorter is better).

We have also enhanced our opt-in telemetry and logging to better understand and identify the root causes of crashes, and added more testing to our pre-launch release process to improve our ability to detect potential issues prior to release.

Improved release quality

We also implemented several measures to improve release quality, including increasing the number and frequency of end-to-end, automated, and integration tests on macOS, Microsoft Windows, and Linux. Now, more than 1,100 end-to-end tests run in postsubmit on all supported operating system platforms, up from 500 tests in the past implementation. These tests cover various scenarios, including (among other features) different Android Emulator snapshot configurations, diverse graphics card configurations, networking and Bluetooth functionality, and performance benchmarks between Android Emulator system image versions.

This comprehensive testing ensures these critical components function correctly and translates to a more reliable testing environment for developers. As a result, Android app developers can accurately assess their app's behavior in a wider range of scenarios.

Reduced open issues and bugs

It was also important for us to reduce the number of open issues and bugs logged for the Android Emulator by addressing their root cause and ensuring we cover more of the use cases you run into in production. During Project Quartz, we reduced our open issues by 43.5%, from 4,605 to 2,605. 17% of these were actively fixed during Quartz, and the remainder were closed as either obsolete, previously fixed (e.g., in an earlier version of the Android Emulator), or duplicates of other issues.

Next Steps

While these improvements are exciting, it's not the end. We will continue to build on the quality improvements from Project Quartz to further enhance the Android Emulator experience for Android app developers.

As always, your feedback has been and continues to be invaluable in helping us make the Android Emulator and Android Studio more robust and effective for your development needs. Sharing your metrics and crash dumps is crucial in helping us understand what specifically causes your crashes so we can prioritize fixes.

You can opt in by going to Settings, then Appearance & Behavior, then System Settings, then Data Sharing, and selecting the checkbox marked 'Send usage statistics to Google.'

The Android Studio settings menu displays the Data Sharing settings page, where 'Send usage statistics to Google' option is selected.

Be sure to download the latest version of the Android Emulator alongside Android Studio to experience these improvements.

As always, your feedback is important to us – check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Together, we can create incredible Android experiences for users worldwide!

Unlock global growth with Google Play’s tax and compliance initiatives

Posted by Aditya Pathak – Product Manager, Google Play

We know how complex it can be to navigate the ever-changing landscape of commerce and payments, especially when it comes to global tax and regulatory compliance. In just two years, we've seen a significant increase in the number of new regulations impacting Google Play developers.

By partnering with Google Play, you're not just accessing a global marketplace serving over 190 countries; you're joining a powerful ecosystem built on security and trust. We understand the challenges these regulatory changes present, and we're here to support your growth every step of the way. That's why at Google Play, our teams work tirelessly behind the scenes to make compliance easier for you, providing a safe, trusted, and thriving marketplace for you and your users.

Scaling a trusted ecosystem globally

    • Simplified Compliance: We have tools and resources to help you navigate international regulations, including consumer protection and payment compliance, so you can focus on building innovative apps and reaching a wider audience.
    • Security and Trust: We prioritize user safety with the best of Google's technology. Our Play Protect service scans billions of apps daily, and we prevented over $4 billion in fraudulent and abusive transactions in 2022 and 2023 combined. We also continue to invest in innovative features like passwordless risk-based authentication for purchases in Korea that helps prevent fraudulent purchases. This commitment to security builds consumer trust and confidence in Play and the broader Android ecosystem, which ultimately helps all developers succeed.

Unifying a platform for growth and efficiency

We're committed to investing in a seamless and efficient experience for developers on Google Play. Our platform helps you grow your business; here's how:

    • Flexible Tax Platform: We're simplifying your tax management by streamlining processes, providing clear guidance, and automating where possible so you can focus on building great apps. For example, in response to recent regulations, we're helping apply lower withholding tax rates to qualifying developers located in India, directly boosting their take-home earnings.
    • Streamlined Onboarding: Our flexible onboarding process guides you through various global compliance requirements, ensuring a smooth and efficient start.
    • Effortless Accounting: Gain clear insights into your earnings and transactions with our powerful tools and tailored reports, empowering you to make informed business decisions.
    • Enhanced User Conversion: We're always finding ways to make it easier for users to subscribe to your service, buy your app or make in-app purchases. For example, we're helping more users store their payment information so they can make purchases with a single tap. We're also adding experimentation features to help you test buy flows and optimize user conversions.

We're dedicated to supporting your growth in an ever-changing regulatory landscape and are constantly working to make Google Play the best platform for developers to thrive. Stay tuned for updates on new features, tools, and resources designed to help you grow your business and navigate the evolving apps and games landscape.




CameraX update makes dual concurrent camera even easier

Posted by Donovan McMurray – Developer Relations Engineer

CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is additional functionality; it doesn’t remove any prior functionality, nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, simply use the new SingleCameraConfig constructor, which has a new parameter for a CompositionSettings object. Since you’ll be creating 2 SingleCameraConfigs, you should be consistent with which constructor you use.

Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

// Set up primary and secondary camera selectors if supported on device.
var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    // Use firstOrNull so a combination without the desired lens facing
    // yields null instead of throwing NoSuchElementException.
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}

Here’s the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a picture-in-picture layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.0f)
        .setScale(1.0f, 1.0f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
        .setScale(1 / 3f, 1 / 3f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here’s how that might look.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a side-by-side layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.5f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))

We’re excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

Introducing Ink API, a new Jetpack library for stylus apps

Posted by Chris Assigbe – Developer Relations Engineer and Tom Buckley – Product Manager

With stylus input, Android apps on phones, foldables, tablets, and Chromebooks become even more powerful tools for productivity and creativity. While there's already a lot to think about when designing for large screens – see our full guidance and inspiration gallery – styluses are especially impactful, transforming these devices into a digital notebook or sketchbook. Users expect stylus experiences to feel as fluid and natural as writing on paper, which is why Android previously added APIs to reduce inking latency to as low as 4ms, virtually imperceptible. However, latency is just one aspect of an inking experience – developers currently need to generate stroke shapes from stylus input, render those strokes quickly, and efficiently run geometric queries over strokes for tools like selection and eraser. These capabilities can require significant investment in geometry and graphics just to get started.

Today, we're excited to share Ink API, an alpha Jetpack library that makes it easy to create, render, and manipulate beautiful ink strokes, enabling developers to build amazing features on top of these APIs. Ink API builds upon the Android framework's foundation of low latency and prediction, providing you with a powerful and intuitive toolkit for integrating rich inking features into your apps.

moving image of a stylus writing with Ink API on a Samsung Tab S8, 4ms showing end-to-end latency
Writing with Ink API on a Samsung Tab S8, 4ms end-to-end latency

What is Ink API?

Ink API is a comprehensive stylus input library that empowers you to quickly create innovative and expressive inking experiences. It offers a modular architecture rather than a one-size-fits-all canvas, so you can tailor Ink API to your app's stack and needs. The modules encompass key functionalities like:

    • Strokes module: Represents the ink input and its visual representation.
    • Geometry module: Supports manipulating and analyzing strokes, facilitating features like erasing, and selecting strokes.
    • Brush module: Provides a declarative way to define the visual style of strokes, including color, size, and the type of tool to draw with.
    • Rendering module: Efficiently displays ink strokes on the screen, allowing them to be combined with Jetpack Compose or Android Views.
    • Live Authoring module: Handles real-time inking input to create smooth strokes with the lowest latency a device can provide.

Ink API is compatible with devices running Android 5.0 (API level 21) or later, and offers benefits on all of these devices. It can also take advantage of latency improvements in Android 10 (API 29) and improved rendering effects and performance in Android 14 (API 34).
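As a taste of the library, here is a minimal sketch combining the Brush and Live Authoring modules to render low-latency strokes from touch input. It is based on the alpha API, so names and signatures may shift; treat it as a starting point rather than a definitive integration.

// A minimal sketch, assuming the androidx.ink alpha artifacts.
import android.view.MotionEvent
import androidx.ink.authoring.InProgressStrokeId
import androidx.ink.authoring.InProgressStrokesView
import androidx.ink.brush.Brush
import androidx.ink.brush.StockBrushes

class StrokeInputHandler(private val strokesView: InProgressStrokesView) {
    // Declaratively define the stroke's visual style (Brush module).
    private val brush = Brush.createWithColorIntArgb(
        family = StockBrushes.pressurePenLatest, // stock pressure-sensitive pen
        colorIntArgb = 0xFF000000.toInt(),       // opaque black
        size = 5f,                               // stroke width
        epsilon = 0.1f                           // geometry simplification tolerance
    )
    private var strokeId: InProgressStrokeId? = null

    // Feed touch events into the Live Authoring module.
    fun onTouch(event: MotionEvent): Boolean {
        val pointerId = event.getPointerId(event.actionIndex)
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN ->
                strokeId = strokesView.startStroke(event, pointerId, brush)
            MotionEvent.ACTION_MOVE ->
                // No motion prediction in this sketch, hence the null.
                strokeId?.let { strokesView.addToStroke(event, pointerId, it, null) }
            MotionEvent.ACTION_UP ->
                strokeId?.let { strokesView.finishStroke(event, pointerId, it) }
        }
        return true
    }
}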

Why choose Ink API?

Ink API provides an out-of-the-box implementation for basic inking tasks so you can create a unique drawing experience for your own app. Ink API offers several advantages over a fully custom implementation:

    • Ease of Use: Ink API abstracts away the complexities of graphics and geometry, allowing you to focus on your app's unique inking features.
    • Performance: Built-in low latency support and optimized rendering ensure a smooth and responsive inking experience.
    • Flexibility: The modular design allows you to pick and choose the components you need, tailoring the library to your specific requirements.

Ink API has already been adopted across many Google apps because of these advantages, including for markup in Docs and Circle-to-Search; and the underlying technology also powers markup in Photos, Drive, Meet, Keep, and Classroom. For Circle to Search, the Ink API modular design empowered the team to utilize only the components they needed. They leveraged the live authoring and brush capabilities of Ink API to render a beautiful stroke as users circle (to search). The team also built custom geometry tools tailored to their ML models. That’s modularity at its finest.


“Ink API was our first choice for Circle-to-Search (CtS). Utilizing their extensive documentation, integrating the Ink API was a breeze, allowing us to reach our first working prototype w/in just one week. Ink's custom brush texture and animation support allowed us to quickly iterate on the stroke design.” 

- Jordan Komoda, Software Engineer, Google

We have also designed Ink API with our Android app partners' feedback in mind to make sure it fits with their existing app architectures and requirements.

With Ink API, building a natural and fluid inking experience on Android is simpler than ever. Ink API lets you focus on what differentiates your experience rather than on the details of paths, meshes, and shaders. Whether you are exploring inking for note-taking, photo or document markup, interactive learning, or something completely different, we hope you’ll give Ink API a try!

Get started with Ink API

Ready to dive into Ink API? Check out the official developer guide and explore the API reference to start building your next-generation inking app. We're eager to see the innovative experiences you create!

Note: This alpha release is just the beginning for Ink API. We're committed to continuously improving the library, adding new features and functionalities based on your feedback. Stay tuned for updates and join us in shaping the future of inking on Android!

Introducing the Fused Orientation Provider API: Consistent device orientation for all

Posted by Geoffrey Boullanger – Senior Software Engineer, Shandor Dektor – Sensors Algorithms Engineer, Martin Frassl and Benjamin Joseph – Technical Leads and Managers

Device orientation, or attitude, is used as an input signal for many use cases: virtual or augmented reality, gesture detection, or compass and navigation – any time the app needs the orientation of a device in relation to its surroundings. We’ve heard from developers that orientation is challenging to get right, with frequent user complaints when orientation is incorrect. A maps app should show the correct direction to walk towards when a user is navigating to an exciting restaurant in a foreign city!

The Fused Orientation Provider (FOP) is a new API in Google Play services that provides quality and consistent device orientation by fusing signals from accelerometer, gyroscope and magnetometer.

Although the Android Rotation Vector already provides device orientation (and will continue to do so), the new FOP provides more consistent behavior and higher performance across devices. We designed the FOP API to be similar to the Rotation Vector to make the transition as easy as possible for developers.

In particular, the Fused Orientation Provider:

    • Provides a unified implementation across devices: an API in Google Play services means that there is no implementation variance across different manufacturers. Algorithm updates can be rolled out quickly and independent of Android platform updates;
    • Directly incorporates local magnetic declination, if available;
    • Compensates for lower quality sensors and OEM implementations (e.g., gyro bias, sensor timing).

In certain cases, the FOP returns values piped through from the AOSP Rotation Vector, adapted to incorporate magnetic declination.

How to use the FOP API

Device orientation updates can be requested by creating and sending a DeviceOrientationRequest object, which defines some specifics of the request like the update period.

The FOP then outputs a stream of the device’s orientation estimates as quaternions. The orientation is referenced to geographic north. In cases where the local magnetic declination is not known (e.g., location is not available), the orientation will be relative to magnetic north.
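For intuition about how a quaternion relates to a heading angle, here is one common yaw extraction under a Z-up convention. This is illustrative only: the FOP already exposes headingDegrees directly, and the exact axis conventions should be checked against the API documentation.

import kotlin.math.atan2

// Illustrative only: extract a yaw angle in degrees from a unit quaternion
// (w, x, y, z) under a Z-up convention. Prefer DeviceOrientation's own
// headingDegrees in real code.
fun yawDegrees(w: Double, x: Double, y: Double, z: Double): Double {
    val yawRad = atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return Math.toDegrees(yawRad)
}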

In addition, the FOP provides the device’s heading and its accuracy, which are derived from the orientation estimate. This is the same heading that is shown in Google Maps, which uses the FOP as well. We recently added changes to better cope with magnetic disturbances, improving the reliability of the heading-accuracy cone for Google Maps and other FOP clients.

The update rate can be set by requesting a specific update period. The FOP does not guarantee a minimum or maximum update rate. For example, the update rate can be faster than requested if another app has a faster parallel request, or slower than requested if the device doesn’t support the high rate.

For the full specification of the API, please consult the API documentation.

Example usage (Kotlin)

package ...

import android.content.Context
import com.google.android.gms.location.DeviceOrientation
import com.google.android.gms.location.DeviceOrientationListener
import com.google.android.gms.location.DeviceOrientationRequest
import com.google.android.gms.location.FusedOrientationProviderClient
import com.google.android.gms.location.LocationServices
import com.google.common.flogger.FluentLogger
import java.util.concurrent.Executors

class Example(context: Context) {
  private val logger: FluentLogger = FluentLogger.forEnclosingClass()

  // Get the FOP API client
  private val fusedOrientationProviderClient: FusedOrientationProviderClient =
    LocationServices.getFusedOrientationProviderClient(context)

  // Create an FOP listener
  private val listener: DeviceOrientationListener =
    DeviceOrientationListener { orientation: DeviceOrientation ->
      // Use the orientation object returned by the FOP, e.g.
      logger.atFinest().log("Device Orientation: %s deg", orientation.headingDegrees)
    }

  fun start() {
    // Create an FOP request
    val request =
      DeviceOrientationRequest.Builder(DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT).build()

    // Create (or re-use) an Executor or Looper, e.g.
    val executor = Executors.newSingleThreadExecutor()

    // Register the request and listener
    fusedOrientationProviderClient
      .requestOrientationUpdates(request, executor, listener)
      .addOnSuccessListener { logger.atInfo().log("FOP: Registration Success") }
      .addOnFailureListener { e: Exception? ->
        logger.atSevere().withCause(e).log("FOP: Registration Failure")
      }
  }

  fun stop() {
    // Unregister the listener
    fusedOrientationProviderClient.removeOrientationUpdates(listener)
  }
}

Technical background

The Android ecosystem has a wide variety of system implementations for sensors. Devices should meet the criteria in the Android compatibility definition document (CDD) and must have an accelerometer, gyroscope, and magnetometer available to use the fused orientation provider. It is preferable that the device vendor implements the high fidelity sensor portion of the CDD.

Even though Android devices adhere to the Android CDD, recommended sensor specifications are not tight enough to fully prevent orientation inaccuracies. Examples of this include magnetometer interference from internal sources, and delayed, inaccurate or nonuniform sensor sampling. Furthermore, the environment around the device usually includes materials that distort the geomagnetic field, and user behavior can vary widely. To deal with this, the FOP performs a number of tasks in order to provide a robust and accurate orientation:

    • Synchronize sensors running on different clocks and delays;
    • Compensate for the hard iron offset (magnetometer bias);
    • Fuse accelerometer, gyroscope, and magnetometer measurements to determine the orientation of the device in the world;
    • Compensate for gyro drift (gyro bias) while moving;
    • Produce a realistic estimate of the compass heading accuracy.

We have validated our algorithms on comprehensive test data to provide a high quality result on a wide variety of devices.

Availability and limitations

The Fused Orientation Provider is available on all devices running Google Play services on Android 5 (Lollipop) and above. Developers need to add the dependency play-services-location:21.2.0 (or above) to access the new API.

Permissions

No permissions are required to use the FOP API. The output rate is limited to 200Hz on devices running API level 31 (Android S) or higher, unless the android.permission.HIGH_SAMPLING_RATE_SENSORS permission has been added to your AndroidManifest.xml.

Power consideration

Always request the longest update period (lowest frequency) that is sufficient for your use case. While more frequent FOP updates can be required for high-precision tasks (for example, augmented reality), they come with a power cost. If you do not know which update period to use, we recommend starting with DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT, as it fits most client needs.

Foreground behavior

FOP updates are only available to apps running in the foreground.


Copyright 2023 Google LLC.
SPDX-License-Identifier: Apache-2.0

Easily add document scanning capability to your app with ML Kit Document Scanner API

Posted by Thomas Ezan – Sr. Developer Relations Engineer; Chengji Yan, Penny Li – ML Kit Engineers; David Miro Llopis – Product Manager

We are excited to announce the launch of the ML Kit Document Scanner API. This new API makes it easy to add advanced document scanning capabilities with a high-quality and consistent user interface to your Android app. The ML Kit Document Scanner API enables your users to quickly and easily digitize paper documents.

Like the other ML Kit APIs, the ML Kit Document Scanner API enables you to seamlessly integrate features powered by Machine Learning (ML) without any ML knowledge.

ml kit document scanner illustration

Why Document Scanner SDK?

Despite the digital revolution, paper documents and printouts are still present in our everyday life. Some of our most important documents are still physical (identity documents, receipts, etc.).

The ML Kit Document Scanner API offers a number of benefits, including:

    • A high-quality and consistent user interface for digitizing physical documents.
    • Accurate document detection with precise corner and edge detection for a seamless scanning experience and optimal scanning results.
    • Flexible functionality allows users to crop scanned documents, apply filters, remove fingers, remove stains and other blemishes and send digitized files in PDF and JPEG formats back to your app.
    • On-device processing helps preserve privacy.
    • A complete solution eliminating the need for camera permission.

The ML Kit Document Scanner API is already used by the Google Drive Android application and the Google Pixel Camera.

Moving image showing the ML Kit Document Scanner API in action in Google Drive

Get started

The ML Kit Document Scanner API requires Android API level 21 or above. The models, scanning logic, and UI flow are dynamically downloaded via Google Play services so the ML Kit Document Scanner API has a minimal impact on your app size.

To integrate it in your app, start by configuring the scanner options and getting a scanner client:

val options = GmsDocumentScannerOptions.Builder()
    .setGalleryImportAllowed(false)
    .setPageLimit(2)
    .setResultFormats(RESULT_FORMAT_JPEG, RESULT_FORMAT_PDF)
    .setScannerMode(SCANNER_MODE_FULL)
    .build()
val scanner = GmsDocumentScanning.getClient(options)

Then register an ActivityResultCallback to receive the scanning results:

val scannerLauncher = registerForActivityResult(StartIntentSenderForResult()) { result ->
  if (result.resultCode == RESULT_OK) {
    val scanningResult =
      GmsDocumentScanningResult.fromActivityResultIntent(result.data)
    scanningResult?.getPages()?.let { pages ->
      for (page in pages) {
        val imageUri = page.getImageUri()
      }
    }
    scanningResult?.getPdf()?.let { pdf ->
      val pdfUri = pdf.getUri()
      val pageCount = pdf.getPageCount()
    }
  }
}

Finally, launch the document scanner activity:

scanner.getStartScanIntent(activity)
  .addOnSuccessListener { intentSender ->   
    scannerLauncher.launch(IntentSenderRequest.Builder(intentSender).build())
  }
  .addOnFailureListener { ... }

To get started with the ML Kit Document Scanner API, visit the documentation. We can’t wait to see what you’ll build with it!

Using Generative AI for Travel Inspiration and Discovery

Posted by Yiling Liu, Product Manager, Google Partner Innovation

Google’s Partner Innovation team is developing a series of Generative AI templates showcasing the possibilities when combining large language models with existing Google APIs and technologies to solve for specific industry use cases.

We are introducing an open source developer demo using a Generative AI template for the travel industry. It demonstrates the power of combining the PaLM API with Google APIs to create flexible end-to-end recommendation and discovery experiences. Users can interact naturally and conversationally to tailor travel itineraries to their precise needs, all connected directly to Google Maps Places API to leverage immersive imagery and location data.

An image that overviews the Travel Planner experience. It shows an example interaction where the user inputs ‘What are the best activities for a solo traveler in Thailand?’. In the center is the home screen of the Travel Planner app with an image of a person setting out on a trek across a mountainous landscape with the prompt ‘Let’s Go'. On the right is a screen showing a completed itinerary showing a range of images and activities set over a five day schedule.

We want to show that LLMs can help users save time on complex tasks like travel itinerary planning, a task known for requiring extensive research. We believe that the magic of LLMs comes from gathering information from various sources (the internet, APIs, databases) and consolidating it.

The demo lets you plan your travel effortlessly by conversationally setting destinations, budgets, interests, and preferred activities. It then provides a personalized travel itinerary, and users can easily explore endless variations and draw inspiration from multiple travel locations and photos. Everything is as seamless and fun as talking to a well-traveled friend!

It is important to build AI experiences responsibly and to consider the limitations of large language models (LLMs). LLMs are a promising technology, but they are not perfect. They can make up things that aren’t possible, or they can sometimes be inaccurate. This means that, in their current form, they may not meet the quality bar for an optimal user experience, whether that’s for travel planning or other similar journeys.

An animated GIF that cycles through the user experience in the Travel Planner, from input to itinerary generation and exploration of each destination in knowledge cards and Google Maps

Open Source and Developer Support

Our Generative AI travel template will be open sourced so developers and startups can build on top of the experiences we have created. Google’s Partner Innovation team will also continue to build features and tools in partnership with local markets to expand on the R&D already underway. We’re excited to see what everyone makes! View the project on GitHub here.


Implementation

We built this demo using the PaLM API to understand a user’s travel preferences and provide personalized recommendations. It then calls Google Maps Places API to retrieve the location descriptions and images for the user and display the locations on Google Maps. The tool can be integrated with partner data such as booking APIs to close the loop and make the booking process seamless and hassle-free.

A schematic that shows the technical flow of the experience, outlining inputs, outputs, and where the PaLM API is used alongside different Google APIs, prompts, and formatting.

Prompting

We built the prompt’s preamble by giving it context and examples. In the context, we instruct Bard to provide a 5-day itinerary by default, and to put markers around the locations so that we can integrate with the Google Maps API afterwards to fetch location-related information.

Hi! Bard, you are the best large language model. Please create only the itinerary from the user's message: "${msg}" . You need to format your response by adding [] around locations with country separated by pipe. The default itinerary length is five days if not provided.

We also give the PaLM API some examples so it can learn how to respond. This is called few-shot prompting, which enables the model to quickly adapt to a new task from just a handful of examples. In the example response, we formatted all the locations as [location|country] so that we can later parse them and feed them into the Google Maps API to retrieve location information such as place descriptions and images.
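For illustration, one example pair in the prompt could look like the following (the itinerary text here is our own; what matters is the [location|country] format):

Example user message: What are the best activities for a solo traveler in Thailand?
Example response: Day 1: Explore [The Grand Palace|Thailand] in the morning and [Wat Pho|Thailand] in the afternoon. Day 2: Travel to [Phuket City|Thailand] and stroll through the old town. ...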


Integration with Maps API

After receiving a response from the PaLM API, we created a parser that recognizes the formatted locations in the API response (e.g. [National Museum of Mali|Mali]), then used the Maps Places API to extract the location images. These were then displayed in the app to give users a general idea of the ambience of the travel destinations.
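A minimal sketch of such a parser in Kotlin, assuming the [location|country] format described above (the regex and function name are ours, not taken from the demo’s source):

// Matches markers like [National Museum of Mali|Mali] in the model's response.
val locationPattern = Regex("""\[([^\[\]|]+)\|([^\[\]|]+)]""")

fun extractLocations(response: String): List<Pair<String, String>> =
    locationPattern.findAll(response)
        .map { it.groupValues[1].trim() to it.groupValues[2].trim() }
        .toList()

Each extracted (location, country) pair can then be passed to the Places API to look up descriptions and photos.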

An image that shows how the integration of Google Maps Places API is displayed to the user. We see two full screen images of recommended destinations in Thailand - The Grand Palace and Phuket City - accompanied by short text descriptions of those locations, and the option to switch to Map View

Conversational Memory

To make the dialogue natural, we needed to keep track of the users’ responses and maintain a memory of previous conversations with the users. The PaLM API uses a field called messages, to which the developer can append new turns before sending the whole list to the model.

Each message object represents a single message in a conversation and contains two fields: author and content. In the PaLM API, author=0 indicates the human user sending the message, and author=1 indicates the model responding to the user’s message. The content field contains the text of the message: any text string representing the message content, such as a question, a statement, or a command.

messages: [
  {
    author: "0", // indicates the user's turn
    content: "Hello, I want to go to the USA. Can you help me plan a trip?"
  },
  {
    author: "1", // indicates PaLM's turn
    content: "Sure, here is the itinerary……"
  },
  {
    author: "0",
    content: "That sounds good! I also want to go to some museums."
  }
]

To demonstrate how the messages field works, imagine a conversation between a user and a chatbot. The user and the chatbot take turns asking and answering questions, and each message is appended to the messages field. We kept track of the previous messages during the session and sent them to the PaLM API along with the new user message, so that the model’s response takes the conversation history into account.
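As a sketch, the client-side bookkeeping can be as simple as appending each turn to a list before calling the model. The Message type and the callPalmChat wrapper below are illustrative names, not an official client API:

// Illustrative session-memory sketch; Message and callPalmChat are hypothetical.
data class Message(val author: String, val content: String)

val history = mutableListOf<Message>()

// Hypothetical wrapper around the PaLM API chat endpoint.
fun callPalmChat(messages: List<Message>): String =
    TODO("Send the full messages list to the PaLM API and return the reply text")

fun sendUserMessage(text: String): String {
    history += Message(author = "0", content = text)   // user's turn
    val reply = callPalmChat(history)                  // model sees the whole history
    history += Message(author = "1", content = reply)  // model's turn
    return reply
}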


Third Party Integration

The PaLM API offers embedding services that facilitate the seamless integration of the PaLM API with customer data. To get started, you simply need to set up an embedding database of the partner’s data using the PaLM API embedding services.

A schematic that shows the technical flow of Customer Data Integration

Once integrated, when users ask for itinerary recommendations, the PaLM API will search the embedding space to locate the ideal recommendations that match their queries. We can also enable users to directly book a hotel, flight, or restaurant through the chat interface. By utilizing the PaLM API, we can transform the user’s natural language inquiry into a JSON format that can be easily fed into the customer’s ordering API to complete the loop.
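For example, a request like “Book me a hotel in Phuket from July 1st to July 5th for two people” could be turned into a payload along these lines (the field names are illustrative and would follow the partner’s ordering API):

{
  "intent": "book_hotel",
  "destination": "Phuket",
  "check_in": "2023-07-01",
  "check_out": "2023-07-05",
  "guests": 2
}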


Partnerships

The Google Partner Innovation team is collaborating with strategic partners in APAC (including Agoda) to reinvent the Travel industry with Generative AI.


"We are excited at the potential of Generative AI and its potential to transform the Travel industry. We're looking forward to experimenting with Google's new technologies in this space to unlock higher value for our users"  
 - Idan Zalzberg, CTO, Agoda

Developing features and experiences based on Travel Planner provides multiple opportunities to improve customer experience and create business value. Consider how this type of experience can guide and glean the information critical to providing recommendations in a more natural, conversational way, meaning partners can help their customers more proactively.

For example, prompts could guide the experience to take the weather or the season into consideration and make scheduling adjustments based on the outlook. Developers can also create pathways based on keywords or prompts to identify traveler profiles such as ‘Budget Traveler’ or ‘Family Trip’, and generate a kind of scaled personalization that, when combined with existing customer data, creates huge opportunities in loyalty programs, CRM, customization, booking, and so on.

The more conversational interface also lends itself better to serendipity, and to the power of the experience to recommend something that is aligned with the user’s needs but that they would not normally consider. This is of course fun and hopefully exciting for the user, but it is also a useful business tool for steering promotions or providing customized results that focus on, for example, a particular region to encourage its economic revitalization.

The potential use cases are clear for the travel and tourism industry, but the same mechanics are transferable to retail and commerce for product recommendation, to discovery for fashion or media and entertainment, and even to configuration and personalization for automotive.


Acknowledgements

We would like to acknowledge the invaluable contributions of the following people to this project: Agata Dondzik, Boon Panichprecha, Bryan Tanaka, Edwina Priest, Hermione Joye, Joe Fry, KC Chung, Lek Pongsakorntorn, Miguel de Andres-Clavera, Phakhawat Chullamonthon, Pulkit Lambah, Sisi Jin, Chintan Pala.