Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

Increasing transparency in AI security


New AI innovations and applications are reaching consumers and businesses on an almost-daily basis. Building AI securely is a paramount concern, and we believe that Google’s Secure AI Framework (SAIF) can help chart a path for creating AI applications that users can trust. Today, we’re highlighting two new ways to make information about AI supply chain security universally discoverable and verifiable, so that AI can be created and used responsibly. 




The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations. In particular, the software supply chains for components specific to AI development, such as machine learning models, need to be secured against threats including model tampering, data poisoning, and the production of harmful content.




Even as machine learning and artificial intelligence continue to evolve rapidly, some solutions are now within reach of ML creators. We’re building on our prior work with the Open Source Security Foundation to show how ML model creators can and should protect against ML supply chain attacks by using SLSA and Sigstore.



Supply chain security for ML


For supply chain security of conventional software (software that does not use ML), we usually consider questions like:


  • Who published the software? Are they trustworthy? Did they use safe practices?
  • For open source software, what was the source code?
  • What dependencies went into building that software?
  • Could the software have been replaced by a tampered version following publication? Could this have occurred during build time?




All of these questions also apply to the hundreds of free ML models that are available for use on the internet. Using an ML model means trusting every part of it, just as you would any other piece of software. This includes concerns such as:



  • Who published the model? Are they trustworthy? Did they use safe practices?
  • For open source models, what was the training code?
  • What datasets went into training that model?
  • Could the model have been replaced by a tampered version following publication? Could this have occurred during training time?




We should treat tampering of ML models with the same severity as we treat injection of malware into conventional software. In fact, since models are programs, many allow the same types of arbitrary code execution exploits that are leveraged for attacks on conventional software. Furthermore, a tampered model could leak or steal data, cause harm from biases, or spread dangerous misinformation. 




Inspection of an ML model is insufficient to determine whether bad behaviors were injected. This is similar to trying to reverse engineer an executable to identify malware. To protect supply chains at scale, we need to know how the model or software was created to answer the questions above.



Solutions for ML supply chain security


In recent years, we’ve seen how providing public and verifiable information about what happens during different stages of software development is an effective method of protecting conventional software against supply chain attacks. This supply chain transparency offers protection and insights with:



  • Digital signatures, such as those from Sigstore, which allow users to verify that the software wasn’t tampered with or replaced
  • Metadata such as SLSA provenance that tell us what’s in software and how it was built, allowing consumers to ensure license compatibility, identify known vulnerabilities, and detect more advanced threats





Together, these solutions help combat the enormous uptick in supply chain attacks that have turned every step in the software development lifecycle into a potential target for malicious activity.




We believe transparency throughout the development lifecycle will also help secure ML models, since ML model development follows a lifecycle similar to that of regular software artifacts:




Similarities between software development and ML model development





An ML training process can be thought of as a “build”: it transforms some input data into some output data. Similarly, training data can be thought of as a “dependency”: it is data that is used during the build process. Because of the similarity in the development lifecycles, the same software supply chain attack vectors that threaten software development also apply to model development: 





Attack vectors on ML through the lens of the ML supply chain
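To make the “build” analogy concrete, here is a minimal, illustrative Python sketch (not part of Google’s tooling; file names are placeholders) that hashes the inputs of a training run, its “dependencies,” together with the resulting model, the “artifact,” so they can later be referenced in provenance:

    import hashlib
    import json
    from pathlib import Path

    def sha256_digest(path: Path) -> str:
        """Hash a file in chunks so large datasets and model files are handled safely."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_training_build(training_code: Path, dataset: Path, model_out: Path) -> dict:
        """Treat training like a build: the inputs are dependencies, the model is the artifact."""
        return {
            "artifact": {"name": model_out.name, "sha256": sha256_digest(model_out)},
            "dependencies": [
                {"name": training_code.name, "sha256": sha256_digest(training_code)},
                {"name": dataset.name, "sha256": sha256_digest(dataset)},
            ],
        }

    if __name__ == "__main__":
        # Placeholder paths; substitute your own training script, dataset, and model file.
        record = record_training_build(Path("train.py"), Path("data.csv"), Path("model.bin"))
        print(json.dumps(record, indent=2))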




Based on the similarities in development lifecycle and threat vectors, we propose applying the same supply chain solutions from SLSA and Sigstore to ML models to similarly protect them against supply chain attacks.



Sigstore for ML models



Code signing is a critical step in supply chain security. It identifies the producer of a piece of software and prevents tampering after publication. But code signing is normally difficult to set up: producers need to manage and rotate keys, set up verification infrastructure, and instruct consumers on how to verify. Secrets are also often leaked, because security is hard to get right during this process.




We suggest bypassing these challenges by using Sigstore, a collection of tools and services that make code signing secure and easy. Sigstore allows any software producer to sign their software by simply using an OpenID Connect token bound to either a workload or developer identity—all without the need to manage or rotate long-lived secrets.
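As a rough illustration of that workflow (a sketch only, assuming the open source cosign CLI is installed; the file names and identities below are placeholders, and exact flags can vary between cosign releases), a trained model file could be signed and verified like this:

    import subprocess

    MODEL = "model.bin"        # placeholder: the trained model artifact
    BUNDLE = "model.sigstore"  # placeholder: signature bundle published alongside the model

    # Keyless signing: cosign requests a short-lived certificate for the OpenID Connect
    # identity of the developer or workload running this step, so there are no
    # long-lived secrets to manage or rotate.
    subprocess.run(["cosign", "sign-blob", "--bundle", BUNDLE, MODEL], check=True)

    # Verification pins the expected signer identity and OIDC issuer (placeholders here);
    # a tampered or swapped model fails this check.
    subprocess.run(
        [
            "cosign", "verify-blob",
            "--bundle", BUNDLE,
            "--certificate-identity", "trainer@example.com",
            "--certificate-oidc-issuer", "https://accounts.google.com",
            MODEL,
        ],
        check=True,
    )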




So how would signing ML models benefit users? By signing models after training, we can assure users that they have the exact model that the builder (aka “trainer”) uploaded. Signing models discourages model hub owners from swapping models, addresses the issue of a model hub compromise, and can help prevent users from being tricked into using a bad model. 




Model signatures make attacks similar to PoisonGPT detectable. The tampered models will either fail signature verification or can be directly traced back to the malicious actor. Our current work to encourage this industry standard includes:




  • Having ML frameworks integrate signing and verification in the model save/load APIs (see the sketch after this list)
  • Having ML model hubs add a badge to all signed models, thus guiding users towards signed models and incentivizing signatures from model developers
  • Scaling model signing for LLMs 
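As a sketch of what the first item could look like (a hypothetical API, not an existing framework integration; the verification backend here simply shells out to the cosign check shown earlier):

    import subprocess
    from pathlib import Path

    class SignatureError(Exception):
        """Raised when a model artifact fails signature verification."""

    def _is_verified(model: Path, bundle: Path, identity: str, issuer: str) -> bool:
        # One possible backend: delegate to the cosign verification from the earlier example.
        result = subprocess.run(
            ["cosign", "verify-blob", "--bundle", str(bundle),
             "--certificate-identity", identity,
             "--certificate-oidc-issuer", issuer, str(model)],
            capture_output=True,
        )
        return result.returncode == 0

    def load_model(model: Path, bundle: Path, identity: str, issuer: str) -> bytes:
        """Hypothetical load API: refuse to deserialize a model that fails verification."""
        if not _is_verified(model, bundle, identity, issuer):
            raise SignatureError(f"{model} failed signature verification")
        # Only after verification should the framework deserialize the file, since many
        # model formats can execute code when loaded.
        return model.read_bytes()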




SLSA for ML Supply Chain Integrity



Signing with Sigstore provides users with confidence in the models that they are using, but it cannot answer every question they have about the model. SLSA goes a step further to provide more meaning behind those signatures. 




SLSA (Supply-chain Levels for Software Artifacts) is a specification for describing how a software artifact was built. SLSA-enabled build platforms implement controls to prevent tampering and output signed provenance describing how the software artifact was produced, including all build inputs. This way, SLSA provides trustworthy metadata about what went into a software artifact.




Applying SLSA to ML could provide similar information about an ML model’s supply chain and address attack vectors not covered by model signing, such as compromised source control, a compromised training process, and vulnerability injection. Our vision is to include specific ML information in SLSA provenance, which would help users spot an undertrained model or one trained on bad data. When a vulnerability is detected in an ML framework, users could quickly identify which models need to be retrained, reducing costs.
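For illustration, an abridged SLSA v1.0-style provenance statement for a training run might look like the sketch below (names, digests, and the builder ID are placeholders; real provenance is produced and signed by the build platform, not written by hand):

    # Abridged, illustrative provenance for a training run, expressed as a Python dict.
    training_provenance = {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": "model.bin", "digest": {"sha256": "<model digest>"}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                "buildType": "<URI describing the training pipeline>",
                "externalParameters": {"trainingConfig": "config.yaml"},
                "resolvedDependencies": [
                    {"name": "training-code", "digest": {"sha256": "<source digest>"}},
                    {"name": "dataset", "digest": {"sha256": "<dataset digest>"}},
                    {"name": "pretrained-base-model", "digest": {"sha256": "<base model digest>"}},
                ],
            },
            "runDetails": {"builder": {"id": "<URI of the SLSA-enabled build platform>"}},
        },
    }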




We don’t need special ML extensions for SLSA. Since an ML training process is a build (shown in the earlier diagram), we can apply the existing SLSA guidelines to ML training. The ML training process should be hardened against tampering and output provenance just like a conventional build process. More work on SLSA is needed to make it fully useful and applicable to ML, particularly around describing dependencies such as datasets and pretrained models.  Most of these efforts will also benefit conventional software.




For ML training pipelines that do not require GPUs/TPUs, using an existing SLSA-enabled build platform is a simple solution. For example, Google Cloud Build, GitHub Actions, and GitLab CI are all generally available SLSA-enabled build platforms. Running an ML training step on one of these platforms gives the resulting model the same built-in supply chain security features that conventional software already benefits from.



How to do model signing and SLSA for ML today



By incorporating supply chain security into the ML development lifecycle now, while the problem space is still unfolding, we can jumpstart work with the open source community to establish industry standards to solve pressing problems. This effort is already underway and available for testing.  




Our repository of tooling for model signing and experimental SLSA provenance support for smaller ML models is available now. Our future ML framework and model hub integrations will be released in this repository as well. 




We welcome collaboration with the ML community and are looking forward to reaching consensus on how to best integrate supply chain protection standards into existing tooling (such as Model Cards). If you have feedback or ideas, please feel free to open an issue and let us know. 

Google’s reward criteria for reporting bugs in AI products


In September, we shared how we are implementing the voluntary AI commitments that we and others in industry made at the White House in July. One of the most important developments involves expanding our existing Bug Hunter Program to foster third-party discovery and reporting of issues and vulnerabilities specific to our AI systems. Today, we’re publishing more details on these new reward program elements for the first time. Last year we issued over $12 million in rewards to security researchers who tested our products for vulnerabilities, and we expect today’s announcement to fuel even greater collaboration for years to come. 




What’s in scope for rewards 

In our recent AI Red Team report, we identified common tactics, techniques, and procedures (TTPs) that we consider most relevant and realistic for real-world adversaries to use against AI systems. The following table incorporates shared learnings from Google’s AI Red Team exercises to help the research community better understand what’s in scope for our reward program. We're detailing our criteria for AI bug reports to assist our bug hunting community in effectively testing the safety and security of AI products. Our scope aims to facilitate testing for traditional security vulnerabilities as well as risks specific to AI systems. It is important to note that reward amounts are dependent on severity of the attack scenario and the type of target affected (go here for more information on our reward table). 





Prompt Attacks: Crafting adversarial prompts that allow an adversary to influence the behavior of the model, and hence the output, in ways that were not intended by the application.

  • In scope: Prompt injections that are invisible to victims and change the state of the victim's account or any of their assets.
  • In scope: Prompt injections into any tools in which the response is used to make decisions that directly affect victim users.
  • In scope: Prompt or preamble extraction in which a user is able to extract the initial prompt used to prime the model, but only when sensitive information is present in the extracted preamble.
  • Out of scope: Using a product to generate violative, misleading, or factually incorrect content in your own session, e.g. 'jailbreaks'. This includes 'hallucinations' and factually inaccurate responses. Google's generative AI products already have a dedicated reporting channel for these types of content issues.

Training Data Extraction: Attacks that are able to successfully reconstruct verbatim training examples that contain sensitive information. Also called membership inference.

  • In scope: Training data extraction that reconstructs items used in the training data set that leak sensitive, non-public information.
  • Out of scope: Extraction that reconstructs nonsensitive or public information.

Manipulating Models: An attacker able to covertly change the behavior of a model such that they can trigger pre-defined adversarial behaviors.

  • In scope: Adversarial output or behavior that an attacker can reliably trigger via specific input in a model owned and operated by Google ("backdoors"). Only in scope when the model's output is used to change the state of a victim's account or data.
  • In scope: Attacks in which an attacker manipulates the training data of the model to influence the model's output in a victim's session according to the attacker's preference. Only in scope when the model's output is used to change the state of a victim's account or data.

Adversarial Perturbation: Inputs that are provided to a model that result in a deterministic, but highly unexpected, output from the model.

  • In scope: Contexts in which an adversary can reliably trigger a misclassification in a security control that can be abused for malicious use or adversarial gain.
  • Out of scope: Contexts in which a model's incorrect output or classification does not pose a compelling attack scenario or feasible path to Google or user harm.

Model Theft / Exfiltration: AI models often include sensitive intellectual property, so we place a high priority on protecting these assets. Exfiltration attacks allow attackers to steal details about a model such as its architecture or weights.

  • In scope: Attacks in which the exact architecture or weights of a confidential or proprietary model are extracted.
  • Out of scope: Attacks in which the architecture and weights are not extracted precisely, or when they're extracted from a non-confidential model.

Other AI-powered tools: If you find a flaw in an AI-powered tool other than what is listed above, you can still submit, provided that it meets the qualifications listed on our program page.

  • In scope: A bug or behavior that clearly meets our qualifications for a valid security or abuse issue.
  • Out of scope: Using an AI product to do something potentially harmful that is already possible with other tools. For example, finding a vulnerability in open source software (already possible using publicly available static analysis tools) or producing the answer to a harmful question when the answer is already available online.
  • Out of scope: As consistent with our program, issues that we already know about are not eligible for reward.
  • Out of scope: Potential copyright issues: findings in which products return content appearing to be copyright-protected. Google's generative AI products already have a dedicated reporting channel for these types of content issues.





Conclusion 

We look forward to continuing our work with the research community to discover and fix security and abuse issues in our AI-powered features. If you find a qualifying issue, please go to our Bug Hunter website to send us your bug report and, if the issue is found to be valid, be rewarded for helping us keep our users safe.

Joint Industry statement of support for Consumer IoT Security Principles



Last week at Singapore International Cyber Week and the ETSI Security Conferences, the international community gathered to discuss the cybersecurity hot topics of the day. Amidst a number of important cybersecurity discussions, we want to highlight progress on connected device security demonstrated by joint industry principles for IoT security transparency. The future of connected devices offers tremendous potential for innovation and quality of life improvements. Putting a spotlight on consumer IoT security is a key aspect of achieving these benefits. Marketplace competition can be an important driver of security improvements, with consumers empowered and motivated to make informed purchasing decisions based on device security.

As with other IoT security transparency initiatives globally, it’s great to see this topic being covered at both conferences this week. The below IoT security labeling principles are aimed at helping to improve consumer awareness and to foster marketplace competition based on security.

To help consumers make an informed purchase decision they should receive clear, consistent, and actionable information about the security of the device (e.g. security support period, authentication support, cryptographic assurance) before purchase - a communication and transparency mechanism commonly referred to as “a label” or “labeling,” although the communication is not merely a printed sticker on physical product packaging. While an IoT label will not solve the problem of IoT security on its own, transparency can both help educate consumers and also facilitate the coordination of security responsibilities between all of the components in a connected device ecosystem.

Our goal is to strengthen the security of IoT devices and ecosystems to protect individuals and organizations, and to unleash the full future benefit of IoT. Security labeling programs can support consumer purchase decisions that drive security improvements, but only if the label is credible, actionable, and easily understood. We are hopeful that the public sector and industry can work together to drive harmonized policies that achieve this goal. 

Signed,

Google

ARM

HackerOne

Keysight

NXP

OpenPolicy

Rapid7

Schlage

Silicon Labs

Assa Abloy


Enhanced Google Play Protect real-time scanning for app installs

Mobile devices have supercharged our modern lives, helping us do everything from purchasing goods in store and paying bills online to storing financial data, health records, passwords and pictures. According to Data.ai, the pandemic accelerated existing mobile habits – with app categories like finance growing 25% year-over-year and users spending over 100 billion hours in shopping apps. It's now even more important that data is protected so that bad actors can't access the information.

Powering up Google Play Protect

Google Play Protect is built-in, proactive protection against malware and unwanted software and is enabled on all Android devices with Google Play Services. Google Play Protect scans 125 billion apps daily to help protect you from malware and unwanted software. If it finds a potentially harmful app, Google Play Protect can take certain actions such as sending you a warning, preventing an app install, or disabling the app automatically.

To try to avoid detection by services like Play Protect, cybercriminals are using novel malicious apps available outside of Google Play to infect more devices with polymorphic malware, which can change its identifiable features. They’re turning to social engineering to trick users into doing something dangerous, such as revealing confidential information or downloading a malicious app from ephemeral sources – most commonly via links to download malicious apps or downloads directly through messaging apps.

For this reason, Google Play Protect has always also offered users protection outside of Google Play. It checks your device for potentially harmful apps regardless of the install source, whether you’re online or offline. Previously, when you installed an app, Play Protect conducted a real-time check and warned you when it identified an app known to be malicious from existing scanning intelligence, or one flagged as suspicious by our on-device machine learning, similarity comparisons, and other techniques that we are always evolving.


Today, we are making Google Play Protect’s security capabilities even more powerful with real-time scanning at the code level to combat novel malicious apps. Google Play Protect will now recommend a real-time app scan when installing apps that have never been scanned before to help detect emerging threats.

Scanning will extract important signals from the app and send them to the Play Protect backend infrastructure for a code-level evaluation. Once the real-time analysis is complete, users will get a result letting them know if the app looks safe to install or if the scan determined the app is potentially harmful. This enhancement will help better protect users against malicious polymorphic apps that use various techniques, such as AI, to alter themselves to avoid detection.

Our security protections and machine learning algorithms learn from each app submitted to Google for review, and we look at thousands of signals and compare app behavior. Google Play Protect is constantly improving with each identified app, allowing us to strengthen our protections for the entire Android ecosystem.

This enhancement to Google Play Protect has started to roll out to all Android devices with Google Play services in select countries, starting with India, and will expand to all regions in the coming months.

Our Multi-Layered User Protections on Android


Android takes a multi-layered defense approach to help keep you safe from mobile malware and unwanted software. Android’s built-in proactive and advanced user protections like Google Play Protect, ongoing security updates, app permission controls, Safe Browsing, and more – alongside spam and phishing protection in Messages by Google and Gmail – work together to help protect your data security and privacy. We are constantly improving this multi-layered approach to find new ways to protect our billions of users.

Keeping Android users safe is a top priority. We are committed to working with our ecosystem partners and app developer community to improve the security of apps and combat malware and unwanted software to make Android even more secure.


Scaling BeyondCorp with AI-Assisted Access Control Policies



In July 2023, four Googlers from the Enterprise Security and Access Security organizations developed SpeakACL, a tool aimed at revolutionizing the way Googlers interact with Access Control Lists. This tool, awarded the Gold Prize during Google’s internal Security & AI Hackathon, allows developers to create or modify security policies using simple English instructions rather than having to learn system-specific syntax or complex security principles. This can save security and product teams hours of time and effort, while helping to protect their users’ information by encouraging reductions in permitted access in line with the principle of least privilege.


Access Control Policies in BeyondCorp

Google requires developers and owners of enterprise applications to define their own access control policies, as described in BeyondCorp: The Access Proxy. We have invested in reducing the difficulty of self-service ACL and ACL test creation to encourage these service owners to define least-privilege access control policies. However, it is still challenging to concisely transform their intent into the language accepted by the access control engine. Additional complexity comes from the variety of engines, and corresponding policy definition languages, that target different access control domains (e.g. websites, networks, RPC servers).


To adequately implement an access control policy, service developers are expected to learn various policy definition languages and their associated syntax, in addition to sufficiently understanding security concepts. This takes time away from core development work and is not an efficient use of developer time. A solution was required to remove these challenges so developers can focus on building innovative tools and products.



Making it Work

We built a prototype interface for interactively defining and modifying access control policies for the BeyondCorp access control engine using the PaLM 2 Large Language Model (LLM). We used Google Colab to provide the model with a diverse, highly variable dataset using in-context learning and fine-tuning. In-context learning allows the model to learn from a dataset of examples that are relevant to the task at hand, which we provided via few-shot learning. Fine-tuning allows the model to be adapted to a specific task by adjusting its parameters. Tuning the model with a diverse labeled dataset that we curated for this task allowed us to improve its ability to generate ACLs that are both syntactically accurate and adhere to the principle of least privilege. 
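As a minimal sketch of the few-shot idea (the generate() callable stands in for whatever LLM client is available, and the policy syntax is invented for illustration; it is not SpeakACL's actual prompt or Google's policy language):

    # Few-shot prompt construction for English-to-ACL translation (illustrative only).
    FEW_SHOT_EXAMPLES = [
        ("Allow the data-pipeline service account read access to the reports app.",
         'allow(principal="svc:data-pipeline", resource="app:reports", access="read")'),
        ("Remove write access to the billing app for contractors.",
         'deny(principal="group:contractors", resource="app:billing", access="write")'),
    ]

    def build_prompt(instruction: str) -> str:
        """Embed a plain-English request in a few-shot prompt for policy generation."""
        parts = ["Translate each request into an access control policy rule.\n"]
        for request, rule in FEW_SHOT_EXAMPLES:
            parts.append(f"Request: {request}\nRule: {rule}\n")
        parts.append(f"Request: {instruction}\nRule:")
        return "\n".join(parts)

    def propose_rule(instruction: str, generate) -> str:
        """generate is a hypothetical callable(str) -> str backed by an LLM; its output
        still goes through the review, risk assessment, linting, and test safeguards below."""
        return generate(build_prompt(instruction)).strip()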




With SpeakACL, as with other tools that apply AI to security, we recommend taking a conservative approach to the autonomy you give an AI agent. To ensure our model's outputs are correct and safe to use, we combined our tool with the safeguards that already exist at Google for all access policy modifications:



  • Request LGTM from a teammate to ensure that the intent of the proposed change is correct. 

  • Automated Risk Assessment is performed on all proposed security policy changes at Google. 

  • Manual Review by Security Engineers is performed on changes not assessed as low risk to ensure compliance with security policies and guidelines.

  • Linting, unit tests, and integration tests ensure that the access control language syntax is correct, and that the change does not break any expected access or permit unexpected access.



Looking to the future

While progress in AI is impressive, it is crucial that we as an industry continue to prioritize safety while navigating the landscape. Beyond adding checks to syntactically and semantically verify the access policies produced by our model, we also designed safeguards against sensitive information disclosure, data leaking, prompt injection, and supply chain vulnerabilities to make sure our model performs at the highest level of security.


SpeakACL is an ACL Generation tool that has the potential to revolutionize the way access policies are created and managed. The efficiency, security, and ease of use achieved by this AI-powered ACL Generation Engine reflects Google’s ongoing commitment to leveraging AI across domains to develop cutting-edge products and infrastructure. 


Bare-metal Rust in Android

Last year we wrote about how moving native code in Android from C++ to Rust has resulted in fewer security vulnerabilities. Most of the components we mentioned then were system services in userspace (running under Linux), but these are not the only components typically written in memory-unsafe languages. Many security-critical components of an Android system run in a “bare-metal” environment, outside of the Linux kernel, and these are historically written in C. As part of our efforts to harden firmware on Android devices, we are increasingly using Rust in these bare-metal environments too.

To that end, we have rewritten the Android Virtualization Framework’s protected VM (pVM) firmware in Rust to provide a memory safe foundation for the pVM root of trust. This firmware performs a similar function to a bootloader, and was initially built on top of U-Boot, a widely used open source bootloader. However, U-Boot was not designed with security in a hostile environment in mind, and there have been numerous security vulnerabilities found in it due to out of bounds memory access, integer underflow and memory corruption. Its VirtIO drivers in particular had a number of missing or problematic bounds checks. We fixed the specific issues we found in U-Boot, but by leveraging Rust we can avoid these sorts of memory-safety vulnerabilities in future. The new Rust pVM firmware was released in Android 14.

As part of this effort, we contributed back to the Rust community by using and contributing to existing crates where possible, and publishing a number of new crates as well. For example, for VirtIO in pVM firmware we’ve spent time fixing bugs and soundness issues in the existing virtio-drivers crate, as well as adding new functionality, and are now helping maintain this crate. We’ve published crates for making PSCI and other Arm SMCCC calls, and for managing page tables. These are just a start; we plan to release more Rust crates to support bare-metal programming on a range of platforms. These crates are also being used outside of Android, such as in Project Oak and the bare-metal section of our Comprehensive Rust course.

Training engineers

Many engineers have been positively surprised by how productive and pleasant Rust is to work with, providing nice high-level features even in low-level environments. The engineers working on these projects come from a range of backgrounds. Our comprehensive Rust course has helped experienced and novice programmers quickly come up to speed. Anecdotally the Rust type system (including the borrow checker and lifetimes) helps avoid making mistakes that are easily made in C or C++, such as leaking pointers to stack-allocated values out of scope.

One of our bare-metal Rust course attendees had this to say:

"types can be built that bring in all of Rust's niceties and safeties and 
yet still compile down to extremely efficient code like writes
of constants to memory-mapped IO."

97% of attendees who completed a survey agreed the course was worth their time.

Advantages and challenges

Device drivers are often written in an object-oriented fashion for flexibility, even in C. Rust traits, which can be seen as a form of compile-time polymorphism, provide a useful high-level abstraction for this. In many cases this can be resolved entirely at compile time, with no runtime overhead of dynamic dispatch via vtables or structs of function pointers.

There have been some challenges. Safe Rust’s type system is designed with an implicit assumption that the only memory the program needs to care about is allocated by the program (be it on the stack, the heap, or statically), and only used by the program. Bare-metal programs often have to deal with MMIO and shared memory, which break this assumption. This tends to require a lot of unsafe code and raw pointers, with limited tools for encapsulation. There is some disagreement in the Rust community about the soundness of references to MMIO space, and the facilities for working with raw pointers in stable Rust are currently somewhat limited. The stabilization of offset_of, slice_ptr_get, slice_ptr_len, and other nightly features will improve this, but it is still challenging to encapsulate cleanly. Better syntax for accessing struct fields and array indices via raw pointers without creating references would also be helpful.

The concurrency introduced by interrupt and exception handlers can also be awkward, as they often need to access shared mutable state but can’t rely on being able to take locks. Better abstractions for critical sections will help somewhat, but there are some exceptions that can’t practically be disabled, such as page faults used to implement copy-on-write or other on-demand page mapping strategies.

Another issue we’ve had is that some unsafe operations, such as manipulating the page table, can’t be encapsulated cleanly as they have safety implications for the whole program. Usually in Rust we are able to encapsulate unsafe operations (operations which may cause undefined behaviour in some circumstances, because they have contracts which the compiler can’t check) in safe wrappers where we ensure the necessary preconditions so that it is not possible for any caller to cause undefined behaviour. However, mapping or unmapping pages in one part of the program can make other parts of the program invalid, so we haven’t found a way to provide a fully general safe interface to this. It should be noted that the same concerns apply to a program written in C, where the programmer always has to reason about the safety of the whole program.

Some people adopting Rust for bare-metal use cases have raised concerns about binary size. We have seen this in some cases; for example our Rust pVM firmware binary is around 460 kB compared to 220 kB for the earlier C version. However, this is not a fair comparison as we also added more functionality which allowed us to remove other components from the boot chain, so the overall size of all VM boot chain components was comparable. We also weren’t particularly optimizing for binary size in this case; speed and correctness were more important. In cases where binary size is critical, compiling with size optimization, being careful about dependencies, and avoiding Rust’s string formatting machinery in release builds usually allows comparable results to C.

Architectural support is another concern. Rust is generally well supported on the Arm and RISC-V cores that we see most often, but support for more esoteric architectures (for example, the Qualcomm Hexagon DSP included in many Qualcomm SoCs used in Android phones) can be lacking compared to C.

The future of bare-metal Rust

Overall, despite these challenges and limitations, we’ve still found Rust to be a significant improvement over C (or C++), both in terms of safety and productivity, in all the bare-metal use cases where we’ve tried it so far. We plan to use it wherever practical.

As well as the work in the Android Virtualization Framework, the team working on Trusty (the open-source Trusted Execution Environment used on Pixel phones, among others) have been hard at work adding support for Trusted Applications written in Rust. For example, the reference KeyMint Trusted Application implementation is now in Rust. And there’s more to come in future Android devices, as we continue to use Rust to improve security of the devices you trust.

Expanding our exploit reward program to Chrome and Cloud


In 2020, we launched a novel format for our vulnerability reward program (VRP) with the kCTF VRP and its continuation, kernelCTF. For the first time, security researchers could get bounties for n-day exploits even if they didn’t find the vulnerability themselves. This format proved valuable in improving our understanding of the most widely exploited parts of the Linux kernel. Its success motivated us to expand it to new areas, and we're now excited to announce that we're extending it to two new targets: v8CTF and kvmCTF.




Today, we're launching v8CTF, a CTF focused on V8, the JavaScript engine that powers Chrome. kvmCTF is an upcoming CTF focused on Kernel-based Virtual Machine (KVM) that will be released later in the year.




As with kernelCTF, we will be paying bounties for successful exploits against these platforms, n-days included. This is on top of any existing rewards for the vulnerabilities themselves. For example, if you find a vulnerability in V8 and then write an exploit for it, it can be eligible under both the Chrome VRP and the v8CTF.




We're always looking for ways to improve the security posture of our products, and we want to learn from the security community to understand how they will approach this challenge. If you're successful, you'll not only earn a reward, but you'll also help us make our products more secure for everyone. This is also a good opportunity to learn about technologies and gain hands-on experience exploiting them.




Besides learning about exploitation techniques, we’ll also leverage this program to experiment with new mitigation ideas and see how they perform against real-world exploits. For mitigations, it’s crucial to assess their effectiveness early on in the process, and you can help us battle test them.



How do I participate?

  • First, make sure to check out the rules for v8CTF or kvmCTF. This page contains up-to-date information about the types of exploits that are eligible for rewards, as well as the limits and restrictions that apply.

  • Once you have identified a vulnerability present in our deployed version, exploit it, and grab the flag. It doesn’t even have to be a 0-day!

  • Send us the flag by filling out the form linked in the rules and we’ll take it from there.




We're looking forward to seeing what you can find!
