Tag Archives: Best Practices

Achieving privacy compliance with your CI/CD: A guide for compliance teams

Posted by Fergus Hurley – Co-Founder & GM, Checks, and Evan Otero – Product Manager, Checks

In the fast-paced world of software development, Continuous Integration and Continuous Deployment (CI/CD) have become cornerstones, enabling teams to deliver high-quality software faster than ever. However, rapid innovation, the increasing use of third-party libraries, and AI-generated code have introduced new vulnerabilities and risks. Addressing these issues early in the development lifecycle is therefore essential so that teams can launch their products quickly and confidently.

The introduction of Checks’ privacy compliance CI/CD tooling represents a significant stride toward addressing these concerns by reducing manual intervention and automating compliance and privacy checks as part of the release cycle.

In this post, we explain what CI/CD means for compliance team members unfamiliar with the technology, and how Checks can weave privacy and compliance protections into that pipeline.


What is CI/CD?

Continuous Integration (CI) and Continuous Deployment (CD) are foundational practices in modern software development. They enable development teams to increase efficiency, improve quality, and accelerate delivery.

Continuous Integration (CI) automatically integrates code changes from multiple contributors into a software project. This practice enables teams to detect problems early by running automated tests on each change before it is merged into the main branch.

Graphic showing CI/CD continuous cycle

Continuous Deployment (CD) takes automation further by automatically deploying all code changes to a testing or production environment after the build stage. This means that, in addition to automated testing, automated release processes ensure that new changes are accessible to users as quickly as possible.


Shifting issue-spotting left with CI/CD pipelines

The automated sequence of CI/CD steps is typically called a “pipeline.” CI/CD pipelines automate the steps software changes go through, from development to deployment. These steps include compiling code, running tests (unit tests, integration tests, etc.), performing security scans, and more. If all automated tests pass, the changes are deployed to a specific environment, such as testing or production, without human intervention.

These pipelines are designed to catch issues as early as possible, embodying the practice known as “shifting left.” The benefits of “shifting left”, particularly when applied through CI/CD pipelines, include:

  • Improved quality and security: Automated testing in CI/CD pipelines ensures that code is rigorously tested for functional and compliance issues before it reaches production. This early detection enables teams to address vulnerabilities and errors when they are generally easier and less costly to fix.
  • Faster release cycles: By catching and addressing issues early, teams avoid the bottlenecks associated with late-stage discovery of problems. This efficiency reduces the time from development to deployment, enabling faster release cycles and more responsive delivery of features and fixes.
  • Reduced costs: Detecting issues later in the development process can be significantly more expensive to resolve, especially if they're found after deployment. Early detection through CI/CD pipelines minimizes these costs by preventing complex rollbacks and the need for emergency fixes in production environments.
  • Increased reliability and trust: Software that undergoes thorough testing before release is generally more reliable and secure. This reliability builds trust among users and stakeholders, crucial for maintaining a positive reputation and ensuring user satisfaction.

Checks brings privacy and compliance tests to your CI/CD

Checks CI/CD tooling seamlessly integrates app compliance scanning into CI/CD pipelines via plugins for GitHub, Jenkins, and FastLane. You can also use Checks in any other CI/CD system that supports custom scripts, such as GitLab, TeamCity, Bitbucket, and more.

image showing logos of CI/CD systems that support custom scripts - FastLane, Jenkins, GitHub, Atlassian BitBucket, GitLab, Azure DevOps, and Team City

When Checks scans an app, the binary undergoes dynamic and static analysis to understand your data collection and sharing practices, including app dependencies such as SDKs, permissions, and endpoints. This data is then tested against global regulatory requirements, store policies, your custom Checks policies, and your privacy policy to find potential issues and opportunities for improvement.


Top 5 benefits of integrating Checks into your CI/CD

image showing checks report highlighting potential issues

By adding Checks as a step in your CI/CD pipeline, you can automate app and code compliance scanning as part of the development lifecycle.

The top 5 benefits of integrating Checks in your CI/CD are:

  1. Real-time, intelligent alerting: You can stay informed of new compliance issues or changes in data behavior across your product portfolio with instant notifications via email or Slack. 
  2. Understand data sharing & SDKs: Checks can help ensure secure third-party data sharing by giving you visibility into SDK integrations, permissions, and data flows. By using Checks, you can be confident in your third-party dependencies before your public release. 
  3. Ensure new builds follow your company policies: Checks enables you to automate data governance with custom policies that let you set up safeguards against specific endpoints, SDKs, data types, and permissions, tailoring privacy to your specific needs. These policies help ensure all new releases comply with your company’s data policies. 
  4. Keep your Google Play Data safety section up-to-date: Checks can recommend Google Play Data safety section disclosures and alert you if you need to make an update before releasing publicly, ensuring your declarations are always accurate. 
  5. Deploy quickly and with confidence: When Checks finds issues in the CI/CD, these vulnerabilities are caught and remedied early, significantly reducing the risk of compliance violations once you deploy the app. Checks helps you maintain high compliance standards without slowing down the release cycle, enabling teams to deploy with confidence and ensuring that user data is protected from the outset.

Next steps

Getting started is simple: first sign up for Checks, then add Checks to your CI/CD pipelines with these simple configuration steps. Once configured, Checks is ready to perform a variety of privacy and compliance verifications.

This proactive approach to privacy and compliance safeguards against potential risks and aligns with regulatory compliance requirements, making it an invaluable asset for any compliance and development team.

Battling Impersonation Scams: Monzo’s Innovative Approach

Posted by Todd Burner – Developer Relations Engineer

Cybercriminals continue to invest in advanced financial fraud scams, costing consumers more than $1 trillion in losses. According to the 2023 Global State of Scams Report by the Global Anti-Scam Alliance, 78 percent of mobile users surveyed experienced at least one scam in the last year, and 45 percent said they had experienced more scams in the last 12 months.


The report also found that phone calls are the top method used to initiate a scam. Scammers frequently employ social engineering tactics to deceive mobile users.

The key places these scammers want individuals to take action are the tools that give access to their money, which makes financial services a frequent target. As cybercriminals push forward with more scams, and their reach extends globally, it’s important to innovate in the response.

One such innovator is Monzo, who have been able to tackle scam calls through a unique impersonation detection feature in their app.

Monzo’s Innovative Approach

Founded in 2015, Monzo is the largest digital bank in the UK, with a presence in the US as well. Their mission is to make money work for everyone, with an ambition to become the one app customers turn to for managing their entire financial lives.

Monzo logo

Impersonation fraud is an issue that the entire industry is grappling with, and Monzo decided to take action and introduce an industry-first tool. An impersonation scam is a very common social engineering tactic in which a criminal pretends to be someone else so they can encourage you to send them money. These scams often rely on urgent pretenses involving a risk to a user’s finances or an opportunity for quick wealth. Under this pressure, fraudsters convince users to disable security safeguards and ignore proactive warnings for potential malware, scams, and phishing.

Call Status Feature

Android offers multiple layers of spam and phishing protection for users, including caller ID and spam protection in the Phone by Google app. Monzo’s team wanted to enhance that protection by leveraging their in-house telephone systems. By integrating those systems with their mobile application infrastructure, they could help their customers confirm in real time, and in a privacy-preserving way, when they’re actually talking to a member of Monzo’s customer support team.

If someone calls a Monzo customer stating they are from the bank, their users can go into the app to verify this. In the Monzo app’s Privacy & Security section, users can see the ‘Monzo Call Status’, letting them know if there is an active call ongoing with an actual Monzo team member.

“We’ve built this industry-first feature using our world-class tech to provide an additional layer of comfort and security. Our hope is that this could stop instances of impersonation scams for Monzo customers from happening in the first place and impacting customers.” 

- Priyesh Patel, Senior Staff Engineer, Monzo’s Security team

Keeping Customers Informed

If a user is not talking to a member of Monzo’s customer support team, they will see that, along with some helpful information. If the ‘Monzo Call Status’ shows that the user is not speaking to Monzo, the feature tells them to hang up right away and report it to Monzo’s team. Customers can start a scam report directly from the call status feature in the app.

screen grab of Monzo call status alerting the customer that the call the customer is receiving is not coming from Monzo. The customer is being advised to end the call

If a genuine call is ongoing, the customer will see confirmation of that as well.

screen grab of Monzo call status confirming to the customer that the call the customer is receiving is coming from Monzo.

How does it work?

Monzo integrated a few systems to help inform their customers, putting together a cross-functional team to build the solution.

Monzo’s in-house technology stack meant that the systems that power their app and customer service phone calls can easily communicate with one another. This allowed them to link the two and share details of customer service calls with their app, accurately and in real-time.

The team then worked to identify edge cases, like when the user is offline. In this situation, Monzo recommends that customers don’t speak to anyone claiming to be from Monzo until they’re connected to the internet again and can check the call status within the app.

screen grab of Monzo call status displaying warning while the customer is offline letting the customer know the app is unable to verify whether or not the call is coming from Monzo, so it is safer not to answer.

Results and Next Steps

The feature has proven highly effective in safeguarding customers and has received widespread praise from industry experts and consumer champions.

“Since we launched Call Status, we receive an average of around 700 reports of suspected fraud from our customers through the feature per month. Now that it’s live and helping protect customers, we’re always looking for ways to improve Call Status - like making it more visible and easier to find if you’re on a call and you want to quickly check that who you’re speaking to is who they say they are.” 

- Priyesh Patel, Senior Staff Engineer, Monzo’s Security team

Final Advice

Monzo continues to invest and innovate in fraud prevention. The call status feature brings together both technological innovation and customer education to achieve its success, and gives their customers a way to catch scammers in action.

A layered security approach is a great way to protect users. Android and Google Play provide layers like app sandboxing, Google Play Protect, and privacy-preserving permissions, and Monzo has built an additional, privacy-preserving layer on top.

To learn more about Android and Play’s protections, and to further protect your app, check out these resources:

Enhanced screen sharing capabilities in Android 14 (and Google Meet) improve meeting productivity

Posted by Francesco Romano – Developer Relations Engineer on Android

App screen sharing improves privacy and productivity

Android 14 QPR2 brings exciting advancements in user privacy and streamlined multitasking with app screen sharing. Users no longer have to broadcast their entire screen while screen sharing or casting, so they can share exactly what they want to share.

Leverage the new MediaProjection APIs to customize the screen sharing experience and deliver even greater utility to your users.

What is app screen sharing?

Prior to Android 14, users could only share or record their entire screen on Android devices, which could expose private information in other apps or notifications.

App screen sharing is a new platform feature that lets users restrict sharing and recording to a single app window, mitigating the risk of oversharing private messages or notifications. With app screen sharing, the status bar, navigation bar, notifications, and other system UI elements are excluded from the shared display. Only the content of the selected app is shared.

This not only enhances security for screen sharing, but also enables new use cases on large screens. Users can improve multitasking productivity – such as screen sharing while attending a meeting – by taking advantage of extra screen space on these larger devices.

How does it work?

There are three different entry points for users to start app screen sharing:

    1. Start casting from Quick Settings
    2. Start screen recording from Quick Settings
    3. Launch from an app with screen sharing or recording capabilities via the MediaProjection API

Let’s consider an example where a host user wants to share a single app to the participants of a video call.

The host user starts screen sharing as usual, but now in Android 14 they are presented with an updated dialog that allows them to choose whether to share a single app instead of their entire screen.

The host user decides to share a single app, and they select the app from the App Selector.

During screen sharing, the video call participants can see only the content from the selected app.

The host user can end the screen capture in a few ways: from the app where sharing started, in the notification shade, by closing the app being shared, or by ending the video call.

visual journey of host sharing a single app to the participants in a video call across four panels

How to support app screen sharing?

Apps that use the MediaProjection APIs are capable of starting app screen sharing without any code changes. However, it’s important to test your app to ensure that the screen sharing experience works as intended, since the user flow changes with this new behavior. Previously, the user would stay in the host app after the permission dialog. With app screen sharing, the user is not returned to the host app; instead, the target app to be shared is launched. If the target app was already running in the foreground (for example, in multi-window mode), it simply becomes the top focused app.

Android 14 also introduces two callback methods to empower you to customize the sharing experience:

MediaProjection.Callback#onCapturedContentResize(width, height) is invoked immediately after capture begins or when the size of the captured region changes. The method arguments provide the accurate sizing for the streamed capture.

Note: The given width and height correspond to the same width and height that would be returned from android.view.WindowMetrics#getBounds() of the captured region.

If the recorded content has a different aspect ratio from either the VirtualDisplay or output Surface, the captured stream has black bars around the recorded content. The application can avoid these black bars by updating the size of both the VirtualDisplay and output Surface:

override fun onCapturedContentResize(width: Int, height: Int) {
    // VirtualDisplay instance from MediaProjection#createVirtualDisplay().
    virtualDisplay.resize(width, height, dpi)

    // Create a new Surface with the updated size. textureName is the OpenGL
    // texture object name created elsewhere during your rendering setup.
    val surfaceTexture = SurfaceTexture(textureName)
    surfaceTexture.setDefaultBufferSize(width, height)
    val surface = Surface(surfaceTexture)

    // Ensure the VirtualDisplay has the updated Surface to send the capture to.
    virtualDisplay.setSurface(surface)
}

The other API is MediaProjection.Callback#onCapturedContentVisibilityChanged(isVisible), which is invoked after capture begins or when the visibility of the captured region changes. The method argument indicates the current visibility of the captured region.

The callback is triggered when:

    • The captured region becomes invisible (isVisible == false). This may happen when the projected app is no longer topmost, such as when another app entirely covers it or the user navigates away from the captured app.
    • The captured region becomes visible again (isVisible == true). This may happen if the user moves the covering app to reveal at least some portion of the captured app (for example, when multiple apps are visible in multi-window mode).

Applications can take advantage of this callback by showing or hiding the captured content from the output Surface based on whether the captured region is currently visible to the user. You should pause or resume the sharing accordingly in order to conserve resources.
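
For example, an application might pause and resume its encoder based on this callback. The sketch below assumes a hypothetical videoEncoder object that wraps your actual capture or streaming pipeline:

override fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
    if (isVisible) {
        // The captured app is at least partially visible again: resume sharing.
        videoEncoder.resume()
    } else {
        // The captured app is covered or backgrounded: pause encoding to
        // conserve resources and avoid streaming stale frames.
        videoEncoder.pause()
    }
}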

How Google Meet is improving meeting productivity

“App screen sharing enables users to share specific information in a Meet call without oversharing private information on the screen like messages and notifications. Users can choose specific apps to share, or they can share the whole screen as before. Additionally, users can leverage split-screen mode on large screen devices to share content while still seeing the faces of friends, families, coworkers, and other meeting participants.” - Product Manager at Google Meet

Let’s see app screen sharing in action during a video call, in this coming-soon version of Google Meet!

moving image of app screen sharing in action during a video call on Google Meet

Window on the world

App screen sharing opens doors (and windows) for more focused and secure app experiences within the Android ecosystem.

This new feature enhances several use cases:

    • Collaboration apps can facilitate focused discussion on specific design elements, documents, or spreadsheets without including distracting background details.
    • Tech support agents can remotely view the user's problem app without seeing potentially sensitive content in other areas.
    • Video conferencing tools can share a presentation window selectively rather than the entire screen.
    • Educational apps can demonstrate functionality without compromising student privacy, and students can share projects without fear of showing sensitive information.

By thoughtfully implementing app screen sharing, you can establish your app as a champion of user privacy and convenience.

Introducing the Fused Orientation Provider API: Consistent device orientation for all

Posted by Geoffrey Boullanger – Senior Software Engineer, Shandor Dektor – Sensors Algorithms Engineer, Martin Frassl and Benjamin Joseph – Technical Leads and Managers

Device orientation, or attitude, is used as an input signal for many use cases: virtual or augmented reality, gesture detection, or compass and navigation – any time the app needs the orientation of a device in relation to its surroundings. We’ve heard from developers that orientation is challenging to get right, with frequent user complaints when orientation is incorrect. A maps app should show the correct direction to walk towards when a user is navigating to an exciting restaurant in a foreign city!

The Fused Orientation Provider (FOP) is a new API in Google Play services that provides consistent, high-quality device orientation by fusing signals from the accelerometer, gyroscope, and magnetometer.

Although the Android Rotation Vector already provides device orientation (and will continue to do so), the new FOP offers more consistent behavior and high performance across devices. We designed the FOP API to be similar to the Rotation Vector to make the transition as easy as possible for developers.

In particular, the Fused Orientation Provider:

    • Provides a unified implementation across devices: an API in Google Play services means that there is no implementation variance across different manufacturers. Algorithm updates can be rolled out quickly and independently of Android platform updates;
    • Directly incorporates local magnetic declination, if available;
    • Compensates for lower quality sensors and OEM implementations (e.g., gyro bias, sensor timing).

In certain cases, the FOP returns values piped through from the AOSP Rotation Vector, adapted to incorporate magnetic declination.

How to use the FOP API

Device orientation updates can be requested by creating and sending a DeviceOrientationRequest object, which defines some specifics of the request like the update period.

The FOP then outputs a stream of the device’s orientation estimates as quaternions. The orientation is referenced to geographic north. In cases where the local magnetic declination is not known (e.g., location is not available), the orientation will be relative to magnetic north.

In addition, the FOP provides the device’s heading and accuracy, which are derived from the orientation estimate. This is the same heading that is shown in Google Maps, which uses the FOP as well. We recently added changes to better cope with magnetic disturbances, improving the reliability of the heading accuracy cone for Google Maps and FOP clients.
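
For example, a minimal sketch of a listener reading both values from the DeviceOrientation object (accessor names as exposed by the Play services location library):

DeviceOrientationListener { orientation: DeviceOrientation ->
  // Heading relative to north, in degrees.
  val headingDegrees = orientation.headingDegrees
  // Estimated heading accuracy (error), in degrees.
  val headingErrorDegrees = orientation.headingErrorDegrees
}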

The update rate can be set by requesting a specific update period. The FOP does not guarantee a minimum or maximum update rate. For example, the update rate can be faster than requested if another app has a faster parallel request, or slower than requested if the device doesn’t support the high rate.

For the full specification of the API, please consult the API documentation.

Example usage (Kotlin)

package ...

import android.content.Context
import com.google.android.gms.location.DeviceOrientation
import com.google.android.gms.location.DeviceOrientationListener
import com.google.android.gms.location.DeviceOrientationRequest
import com.google.android.gms.location.FusedOrientationProviderClient
import com.google.android.gms.location.LocationServices
import com.google.common.flogger.FluentLogger
import java.util.concurrent.Executors

class Example(context: Context) {
  private val logger: FluentLogger = FluentLogger.forEnclosingClass()

  // Get the FOP API client
  private val fusedOrientationProviderClient: FusedOrientationProviderClient =
    LocationServices.getFusedOrientationProviderClient(context)

  // Create an FOP listener
  private val listener: DeviceOrientationListener =
    DeviceOrientationListener { orientation: DeviceOrientation ->
      // Use the orientation object returned by the FOP, e.g.
      logger.atFinest().log("Device Orientation: %s deg", orientation.headingDegrees)
    }

  fun start() {
    // Create an FOP request
    val request =
      DeviceOrientationRequest.Builder(DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT).build()

    // Create (or re-use) an Executor or Looper, e.g.
    val executor = Executors.newSingleThreadExecutor()

    // Register the request and listener
    fusedOrientationProviderClient
      .requestOrientationUpdates(request, executor, listener)
      .addOnSuccessListener { logger.atInfo().log("FOP: Registration Success") }
      .addOnFailureListener { e: Exception? ->
        logger.atSevere().withCause(e).log("FOP: Registration Failure")
      }
  }

  fun stop() {
    // Unregister the listener
    fusedOrientationProviderClient.removeOrientationUpdates(listener)
  }
}

Technical background

The Android ecosystem has a wide variety of system implementations for sensors. Devices should meet the criteria in the Android Compatibility Definition Document (CDD) and must have an accelerometer, gyroscope, and magnetometer available to use the Fused Orientation Provider. It is preferable that the device vendor implements the high-fidelity sensor portion of the CDD.

Even though Android devices adhere to the Android CDD, recommended sensor specifications are not tight enough to fully prevent orientation inaccuracies. Examples of this include magnetometer interference from internal sources, and delayed, inaccurate or nonuniform sensor sampling. Furthermore, the environment around the device usually includes materials that distort the geomagnetic field, and user behavior can vary widely. To deal with this, the FOP performs a number of tasks in order to provide a robust and accurate orientation:

    • Synchronize sensors running on different clocks and delays;
    • Compensate for the hard iron offset (magnetometer bias);
    • Fuse accelerometer, gyroscope, and magnetometer measurements to determine the orientation of the device in the world;
    • Compensate for gyro drift (gyro bias) while moving;
    • Produce a realistic estimate of the compass heading accuracy.

We have validated our algorithms on comprehensive test data to provide a high quality result on a wide variety of devices.

Availability and limitations

The Fused Orientation Provider is available on all devices running Google Play services on Android 5 (Lollipop) and above. Developers need to add the dependency play-services-location:21.2.0 (or above) to access the new API.

Permissions

No permissions are required to use the FOP API. The output rate is limited to 200 Hz on devices running API level 31 (Android S) or higher, unless the android.permission.HIGH_SAMPLING_RATE_SENSORS permission is declared in your AndroidManifest.xml.
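
If your app needs the higher output rate, the declaration is a single line in the manifest:

<uses-permission android:name="android.permission.HIGH_SAMPLING_RATE_SENSORS" />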

Power consideration

Always request the longest update period (lowest frequency) that is sufficient for your use case. While high-precision tasks (for example, augmented reality) can require more frequent FOP updates, higher rates come with a power cost. If you do not know which update period to use, we recommend starting with DeviceOrientationRequest.OUTPUT_PERIOD_DEFAULT, as it fits most client needs.

Foreground behavior

FOP updates are only available to apps running in the foreground.



Designing your account deletion experience with users in mind

Posted by Tatiana van Maaren – Global T&S Partnerships Lead, Privacy & Security, May Smith - Product Manager, and Anita Issagholyan – Policy Lead

With millions of developers relying on our platform, Google Play is committed to keeping our ecosystem safe for everyone. That’s why, in addition to our ongoing investments in app privacy and security, we also continuously update our policies to respond to new challenges and user expectations.

For example, we recently introduced a new account deletion policy with required disclosures within the Data Safety section on the Play Store. Deleting an account should be as easy as creating one, so the new policy requires developers to provide information and web resources that help users to manage their data and understand an app's deletion practices.

To help you build trust and design a user-friendly experience that helps meet our policy requirements, consider these 5 best practices when implementing your account deletion solution.

1.     Make it seamless

Users prefer a simple and straightforward account deletion flow. Although users know that more steps may follow (such as authentication), navigating multiple screens before reaching the deletion page can be a significant barrier and create negative feelings for the user. Consider providing your account deletion option on an account settings page, or place a prominent button on the home screen. Design the flow with discoverability in mind by taking the user directly to the deletion process.

2.     Allow automatic deletion

Users feel that if they can create an account without talking to a customer service agent, they should be able to delete their account online, too. If automation is not on your roadmap just yet, consider a step-by-step deletion request form or a dedicated page to connect users with customer support.

3.     Offer guidance and explain potential implications

Users delete their accounts for various reasons, some of which may be better resolved another way. Early in your deletion flow, point your users toward a Help Center article that explains how your deletion process works in simple terms, including any potential consequences. For example, make it clear if your users will need to pause their payment method before deleting the account, or download any account data they want to keep. Helping your users understand the process in advance can prevent accidental deletions. For those who do change their minds, consider offering a way to recover their accounts within a reasonable timeframe.

Here’s an example of how Play Store developer Canva has designed its in-app deletion flow to explain the potential consequences of account deletion:

User journey on the Canva app
“User data privacy has always been important for us. We’ve always been intentional about our approach in optimizing the Canva app so our users can have more transparency and control over their data. We’re welcoming these new requirements from the Play store as we know this new flow will elevate users’ trust in our app and benefit our business in the long term.” - Will Currie, Software Engineer, Canva

4.     Confirm account deletion

Sometimes users misunderstand whether the deletion process removed the account itself or just the data collected by the app. Users often think that the data your app stored in the cloud will automatically be deleted at the same time as account deletion. Since it may take time to remove account data from internal company systems or comply with data retention requirements in different regions, transparency about the process can help you maintain trust in your brand and make it more likely for users to return in the future.

Here’s how SYBO Games has designed its web-based deletion flow:

User journey on the SYBO Games web resource
“We are always striving to ensure that our games provide a fun user experience, built on a solid data protection foundation. When we learned about the new account deletion update on Google Play, we thought this was a great step forward to ensure that the entire developer ecosystem optimizes for user safety. We encourage developers to carve out time to embrace these improvements and prioritize regular privacy check-ins.”  - Elizabeth Ann Schweitzer, Games Compliance Manager, SYBO Games

5.     Don’t forget user engagement

This is a great opportunity to connect with your users at a critical moment. Make sure users who have uninstalled your app can easily remove their accounts through a web resource without needing to reinstall the app. You can also invite them to complete a survey or provide feedback on their decision.

Protecting users' data is essential for building trust and loyalty. By updating the Data Safety section on Google Play and continuing to optimize user experience for account deletion, you can strengthen trust in your company while striving for the highest level of user data protection.


Thank you for your continued collaboration and feedback in developing this data transparency feature and in helping make Google Play safe for all.

Building Open Models Responsibly in the Gemini Era

Google has long believed that open technology is not only good for our company, but good for the industry, consumers, and the world. We’ve released open-source projects like Android and Chromium that transformed access to mobile and web technologies, and have done the same in AI with Transformers, TensorFlow, and AlphaFold. The release of our Gemma family of open models is a next step in how we’re deepening our commitment to open technology alongside an industry-leading safe, responsible approach. At the same time, the rapidly evolving nature of AI raises important considerations for how to enable safety-aligned open models: an approach that supports broad innovation while promoting safe uses.

A benefit of open source is that once it is released, its license gives users full creative autonomy. This is a powerful guarantee of technology access for developers and end users. Another benefit is that open-source technology can be modified to fit the unique use case of the end user, without restriction.

In the hands of a malicious actor, however, the lack of restrictions can raise risks. Computing has been through similar cycles before, addressing issues such as protecting users of the open internet, handling cryptography, and addressing open-source software security. We now face this challenge with AI. Below we share the approach we took to openly releasing Gemma models, and the advancements in open model safety we hope to accelerate.


Providing access to Gemma open models

Today, Gemma models are being released as what the industry collectively has begun to refer to as “open models.” Open models feature free access to the model weights, but terms of use, redistribution, and variant ownership vary according to each model’s specific terms, which may not be based on an open-source license. The Gemma models’ terms of use make them freely available for individual developers, researchers, and commercial users for access and redistribution. Users are also free to create and publish model variants. In using Gemma models, developers agree to avoid harmful uses, reflecting our commitment to developing AI responsibly while increasing access to this technology.

We’re precise about the language we’re using to describe Gemma models because we’re proud to enable responsible AI access and innovation, and we’re equally proud supporters of open source. The definition of "Open Source" has been invaluable to computing and innovation because of requirements for redistribution and derived works, and against discrimination. These requirements enable cross-industry collaboration, individual innovation and entrepreneurship, and shared research to happen with exponential effects.

However, existing open-source concepts can’t always be directly applied to AI systems, which raises questions on how to use open-source licenses with AI. It’s important that we carry forward open principles that have made the sea-change we’re experiencing with AI possible while clarifying the concept of open-source AI and addressing concepts like derived work and author attribution.


Taking a comprehensive approach to releasing Gemma safely and responsibly

Licensing and terms of use are only one part of the evaluations, technical tools, and considered decision-making that went into aligning this release with our responsible AI Principles. Our approach involved:

  • Systematic internal review in accordance with our AI Principles: Consistent with our AI Principles, we release models only when we have determined the benefits are significant, and the risks of misuse are low or can be mitigated. We take that same approach to open models, incorporating a balance of the benefits of wider access to a particular model as well as the risks of misuse and how we can mitigate them. With Gemma, we considered the increased AI research and innovation by us and many others in the community, the access to AI technology the models could bring, and what access was needed to support these use cases.
  • A high evaluation bar: Gemma models underwent thorough evaluations, and were held to a higher bar for evaluating risk of abuse or harm than our proprietary models, given the more limited mitigations currently available for open models. These evaluations cover a broad range of responsible AI areas, including safety, fairness, privacy, societal risk, as well as capabilities such as chemical, biological, radiological, nuclear (CBRN) risks, cybersecurity, and autonomous replication. As described in our technical report, the Gemma models exhibit state-of-the-art safety performance in human side-by-side evaluations.
  • Responsibility tools for developers: As we release the Gemma models, we are also releasing a Responsible Generative AI Toolkit for developers, providing guidance and tools to help them create safer AI applications.

We continue to evolve our approach. As we build these frameworks further, we will proceed thoughtfully and incorporate what we learn into future model assessments. We will continue to explore the full range of access mechanisms, with benefits and risk mitigation in mind, including API-based access and staged releases.


Advancing open model safety together

Many of today’s AI safety tools are designed for systems whose design assumes restricted access and redistribution, along with auxiliary controls like query filters. Similarly, much of the AI safety research for improving mitigations takes on the design assumptions of those systems. Just as we have created unique threat models and solutions for other open technology, we are developing safety and security tools appropriate for the differences of openly available AI.

As models become more and more capable, we are conducting research and investing in rigorous safety evaluation, testing, and mitigations for open models. We are also actively participating in conversations with policymakers and open-source community leaders on how the industry should approach this technology. This challenge is multifaceted, just like AI systems themselves. Model-sharing platforms like Hugging Face and Kaggle, where developers inspire each other with novel model iterations, play a critical role in efforts to develop open models safely; there is also a role for the cybersecurity community to contribute learnings and best practices.

Building those solutions requires access to open models and the sharing of innovations and improvements. We believe sharing the Gemma models will not just help increase access to AI technology, but also help the industry develop new approaches to safety and responsibility.

As developers adopt Gemma models and other safety-aligned open models, we look forward to working with the open-source community to develop more solutions for responsible approaches to AI in the open ecosystem. A global diversity of experiences, perspectives, and opportunities will help build safe and responsible AI that works for everyone.

By Anne Bertucio – Sr Program Manager, Open Source Programs Office; Helen King – Sr Director of Responsibility, Google DeepMind

YouTube releases scripts to help partners and creators optimize their work

At YouTube Technology Services, we believe that open source software is essential for driving innovation and collaboration in the YouTube ecosystem. We want to make automation on YouTube more accessible by providing publicly available scripts that automate common use cases, decreasing the cost for partners and creators of handling the most common scenarios when managing their content on YouTube.

In order to do so, we are announcing a new GitHub organization, YouTubeLabs, where you will find open source code examples in the code-samples repository. We are providing open source scripts for a variety of use cases.

Most code samples rely on public YouTube APIs or Google APIs and are well-documented and well-commented, in order to be easily modified by partners and creators.

We are delivering code that aims to be as accessible as possible to our partners and creators, with minimal configuration and minimal installation required. That’s why we rely on Colaboratory notebooks (Colab) and Apps Script as the main pillars of our open source offering. Colab is a free, cloud-based Jupyter notebook environment that makes it easy to run Python code in the browser, and it is integrated with Google Drive. Apps Script is a serverless platform that allows you to write scripts that run on Google’s servers.

We believe that open source software is key to the future of the YouTube ecosystem. By making our code available to the public, we are helping to empower partners and creators to do more with YouTube.

Want to get started? Check out some of the code examples already available in YouTubeLabs’ code-samples repository.

We look forward to continuing to build out our open source examples in the coming months, so don’t forget to “like and subscribe” to our repository to stay tuned for more!

By Federico Villa and Haley Schafer – Partner Technology Managers on behalf of YouTube Technology Services

Prompt users to update to your latest app version

Posted by Lidia Gaymond – Product Manager, Google Play

For years, Google Play has helped users enjoy the latest versions of your app through auto-updates or in-app updates. While most users update their apps this way, some may still be stuck on outdated, unsupported or broken versions of your app.

Today, we are introducing a new tool that will prompt these users to update, bringing them closer to the app experience you intended to deliver.

Play recovery tools allow you to prompt users running specific versions of your app to update every time they restart the app.

Image of side by side mobile device screens showing how the prompt to update may look to users
Note: Images are examples and subject to change

To use this new feature, log into Google Play Console and head to your Releases or to the App Bundle Explorer page, where you can select the app versions where you want to deliver the prompts. Alternatively, the feature is also available via the Play Developer API, and will soon be extended to allow you to target multiple app versions at once. Please note that the version you want to deploy the prompt to needs to be built as an app bundle.

You can then narrow your targeting criteria by country or Android version (if required), with no prior integration necessary.

Currently, over 50% of users are responding to the prompts, enabling more users to get the best experience of your apps.

After prompting users to update, you can use Play Console's recovery tools to edit your update configuration, view its progress, or cancel the recovery action altogether. Learn more about the feature here and start using it today!

Navigating AI Safety & Compliance: A guide for CTOs

Posted by Fergus Hurley – Co-Founder & GM, Checks, and Pedro Rodriguez – Head of Engineering, Checks

The rapid advances in generative artificial intelligence (GenAI) have brought about transformative opportunities across many industries. However, these advances have raised concerns about risks, such as privacy, misuse, bias, and unfairness. Responsible development and deployment is, therefore, a must.

AI applications are becoming more sophisticated, and developers are integrating them into critical systems. Therefore, the onus is on technology leaders, particularly CTOs and Heads of Engineering and AI – those responsible for leading the adoption of AI across their products and stacks – to ensure they use AI safely, ethically, and in compliance with relevant policies, regulations, and laws.

While comprehensive AI safety regulations are nascent, CTOs cannot wait for regulatory mandates before they act. Instead, they must adopt a forward-thinking approach to AI governance, incorporating safety and compliance considerations into the entire product development cycle.

This article is the first in a series to explore these challenges. To start, this article presents four key proposals for integrating AI safety and compliance practices into the product development lifecycle:


1.     Establish a robust AI governance framework

Formulate a comprehensive AI governance framework that clearly defines the organization’s principles, policies, and procedures for developing, deploying, and operating AI systems. This framework should establish clear roles, responsibilities, accountability mechanisms, and risk assessment protocols.

Examples of emerging frameworks include the US National Institute of Standards and Technology’s AI Risk Management Framework, the OSTP Blueprint for an AI Bill of Rights, the EU AI Act, as well as Google’s Secure AI Framework (SAIF).

As your organization adopts an AI governance framework, it is crucial to consider the implications of relying on third-party foundation models. These considerations include the data from your app that the foundation model uses and your obligations based on the foundation model provider's terms of service.


2.     Embed AI safety principles into the design phase

Incorporate AI safety principles, such as Google’s responsible AI principles, into the design process from the outset.

AI safety principles involve identifying and mitigating potential risks and challenges early in the development cycle: for example, mitigating bias in training or model inference and ensuring the explainability of model behavior. Use techniques such as adversarial testing – red-teaming LLMs with prompts that look for unsafe outputs – to help ensure that AI models operate in a fair, unbiased, and robust manner.


3.     Implement continuous monitoring and auditing

Track the performance and behavior of AI systems in real time with continuous monitoring and auditing. The goal is to identify and address potential safety issues or anomalies before they escalate into larger problems.

Look for key metrics like model accuracy, fairness, and explainability, and establish a baseline for your app and its monitoring. Beyond traditional metrics, look for unexpected changes in user behavior and AI model drift using a tool such as Vertex AI Model Monitoring. Do this using data logging, anomaly detection, and human-in-the-loop mechanisms to ensure ongoing oversight.


4.     Foster a culture of transparency and explainability

Drive AI decision-making through a culture of transparency and explainability. Encourage this culture by defining clear documentation guidelines, metrics, and roles so that all the team members developing AI systems participate in the design, training, deployment, and operations.

Also, provide clear and accessible explanations to cross-functional stakeholders about how AI systems operate, their limitations, and, where available, the rationale behind their decisions. This information fosters trust among users, regulators, and stakeholders.


Final word

As AI's role in core and critical systems grows, proper governance is essential for its success and that of the systems and organizations using AI. The four proposals in this article should be a good start in that direction.

However, this is a broad and complex domain, which is what this series of articles is about. So, look out for deeper dives into the tools, techniques, and processes you need to safely integrate AI into your development and the apps you create.

Increase your app’s availability across device types

Posted by Alex Vanyo – Developer Relations Engineer

TL;DR: Remove unnecessary feature requirements that prevent users from downloading your app on devices that don’t support the features. Automate tracking feature requirements and maximize app availability with badging!

Required features reduce app availability

<uses-feature> is an app manifest element that specifies whether your app depends on a hardware or software feature. By default, <uses-feature> specifies that a feature is required. To indicate that the feature is optional, you must add the android:required="false" attribute.

Google Play filters which apps are available to download based on required features. If the user’s device doesn’t support some hardware or software feature, then an app that requires that feature won’t be available for the user to download.

<uses-permission>, another app manifest element, complicates things by implicitly requiring features for permissions such as CAMERA or BLUETOOTH (see Permissions that imply feature requirements). The initial declared orientations for your activities can also implicitly require hardware features.

The system determines implicitly required features after merging all modules and dependencies, so it may not be clear to you which features your app ultimately requires. You might not even be aware when the list of required features has changed. For example, integrating a new dependency into your app might introduce a new required feature. Or the integration might request additional permissions, and the permissions could introduce new, implicitly required features.

This behavior has been around for a while, but Android has changed a lot over the years. Android apps now run on phones, foldables, tablets, laptops, cars, TVs and watches, and these devices are more varied than ever. Some devices don’t have telephony services, some don’t have touchscreens, some don’t have cameras.

Expectations based on permissions have also changed. With runtime permissions, a <uses-permission> declaration in the manifest no longer guarantees that your app will be granted that permission. Users can choose to deny access to hardware in favor of other ways to interact with the app. For example, instead of giving an app permission to access the device’s location, a user may prefer to always search for a particular location instead.

Banking apps shouldn’t require the device to have an autofocusing camera for check scanning. They shouldn’t specify that the camera must be a front or rear camera or that the device has a camera at all! It should be enough to allow the user to upload a picture of a check from another source.

Apps should support keyboard navigation and mouse input for accessibility and usability reasons, so strictly requiring a hardware touchscreen should not be necessary.

Apps should support both landscape and portrait orientations, so they shouldn’t require that the screen be landscape-oriented or portrait-oriented. For example, screens built in to cars may be in a fixed landscape orientation. Even if the app supports both landscape and portrait, it could be unnecessarily requiring that the device support portrait use, which would exclude those cars.

Determine your app’s required features

You can use aapt2 to output information about your APK, including the explicitly and implicitly required features. The logic matches how the Play Store filters app availability.

aapt2 dump badging <path_to_.apk>
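
For example, an app that declares the CAMERA permission but no explicit camera <uses-feature> entry might produce output along these lines (abridged and illustrative):

package: name='com.example.app' versionCode='1' versionName='1.0'
uses-permission: name='android.permission.CAMERA'
uses-feature: name='android.hardware.camera'
uses-implied-feature: name='android.hardware.camera' reason='requested android.permission.CAMERA permission'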

In the Play Console, you can also check which devices are being excluded from accessing your app.

Increase app availability by making features optional

Most apps should not strictly require hardware and software features. There are few guarantees that the user will allow using that feature in the first place, and users expect to be able to use all parts of your app in the way they see fit. To increase your app’s availability across form factors:

    • Provide alternatives in case the feature is not available, ensuring your app doesn’t need the feature to function.
    • Add android:required="false" to existing <uses-feature> tags to mark the feature as not required (or remove the tag entirely if the app no longer uses a feature).
    • Add the <uses-feature> tag with android:required="false" for features that are implicitly required because of declared permissions that imply feature requirements (see the example below).
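
For example, a manifest that uses the camera when present without excluding camera-less devices might look like this (a sketch; adjust the feature and permission names to the ones your app actually uses):

<!-- Camera is used if available, but not required to install the app. -->
<uses-feature android:name="android.hardware.camera" android:required="false" />
<!-- Without the uses-feature entry above, this permission would imply a required camera. -->
<uses-permission android:name="android.permission.CAMERA" />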

Prevent regressions with CI and badging

To guard against regressions caused by inadvertently adding a new feature requirement that reduces device availability, automate the task of determining your app’s features as part of your build system. By storing the badging output of the aapt2 tool in a text file and checking the file into version control, you can track all declared permissions and explicitly and implicitly required features from your final universal apk. This includes all features and permissions included by transitive dependencies, in addition to your own.

You can automate badging as part of your continuous integration setup by setting up three Gradle tasks for each variant of your app you want to validate. Using release as an example, create these three tasks:

    • generateReleaseBadging – Generates the badging file from the universal APK using the aapt2 executable. The output of this task (the badging information) is used for the following two tasks.
    • updateReleaseBadging – Copies the generated badging file into the main project directory. The file is checked into source control as a golden badging file.
    • checkReleaseBadging – Validates the generated badging file against the golden badging file.
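
Below is a simplified build.gradle.kts sketch of these three tasks. The wiring is illustrative: the universal APK path, the location of the aapt2 executable, and the golden file location all differ per project (the Now in Android app mentioned below contains a complete implementation):

// Illustrative paths; in a real build, wire these to your bundle/APK tasks.
val universalApk = layout.buildDirectory.file(
    "outputs/apk_from_bundle/release/app-release-universal.apk")
val generatedBadging = layout.buildDirectory.file("outputs/badging/release-badging.txt")
val goldenBadging = layout.projectDirectory.file("release-badging.txt")

tasks.register<Exec>("generateReleaseBadging") {
    // Assumes aapt2 is on the PATH; in practice, locate it in the Android SDK.
    commandLine("aapt2", "dump", "badging", universalApk.get().asFile.absolutePath)
    doFirst {
        val out = generatedBadging.get().asFile
        out.parentFile.mkdirs()
        standardOutput = out.outputStream()
    }
}

tasks.register<Copy>("updateReleaseBadging") {
    dependsOn("generateReleaseBadging")
    // Copy the generated badging over the checked-in golden file.
    from(generatedBadging)
    into(layout.projectDirectory)
}

tasks.register("checkReleaseBadging") {
    dependsOn("generateReleaseBadging")
    doLast {
        // Fail the build (and CI) if the badging no longer matches the golden file.
        check(goldenBadging.asFile.readText() == generatedBadging.get().asFile.readText()) {
            "Badging changed; run updateReleaseBadging and commit the golden file if intentional."
        }
    }
}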

CI should run checkReleaseBadging to verify that the checked-in golden badging file still matches the generated badging file for the current code. If code changes or dependency updates have caused the badging file to change in any way, CI fails.

Failing CI due to adding a new permission and required feature without updating the badging file.

When changes are intentional, run updateReleaseBadging to update the golden badging file and recheck it into source control. Then, this change will surface in code review to be validated by reviewers that the badging changes are expected.

Updated golden badging file for review with additional permission and implied required feature.

CI-automated badging guards against changes inadvertently causing a new feature to be required, which would reduce availability of the app.

For a complete working example of a CI system verifying the badging file, check out the setup in the Now in Android app.

Keep features optional

Android devices are continually becoming more varied, with users expecting a great experience from your Android app regardless of the type of device they’re using. While some software or hardware features might be essential to your app’s function, in many cases they should not be strictly required, needlessly preventing some users from downloading your app.

Use the badging output from aapt2 to check which features your app requires, and use the Play Console to verify which devices the requirements are preventing from downloading your app. You can automatically check your app’s badging in CI and catch regressions.

Bottom line: If you don’t absolutely need a feature for your entire app to function, make the feature optional to ensure your app’s availability to the greatest number of devices and users.

Learn more by checking out our developer guide.