Stable Channel Update for ChromeOS / ChromeOS Flex

The Stable channel is being updated to 126.0.6478.132 (Platform version: 15886.44.0) for most ChromeOS devices and will be rolled out over the next few days.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Chrome Browser release notes can be found here.

Alon Bajayo
Google ChromeOS

Additional admin space management capabilities in Google Chat API are now available

What’s changing 

We recently announced several new features for the Google Chat API that enable admins to manage spaces at scale. These features include the ability to audit spaces, delete inactive spaces in bulk, and more. 

Today, we’re pleased to announce more space management capabilities, which include the ability to: 
  • Look up details about a specific space.
  • Update space details, including the name of a space, space description, and space guidelines. 
  • Verify a user’s membership status in a specific space. 
  • Upgrade a user’s role from space member to space manager. 



These features are available now through our Developer Preview Program — see here for more information on how to enroll.
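For example, the first capability boils down to a single REST call. The following is a minimal sketch that uses libcurl to call the Chat API spaces.get method with admin access; the space ID, the bearer token, and the useAdminAccess query parameter are placeholders and assumptions for illustration, not a confirmed recipe.

/* Hypothetical sketch: look up a Chat space via the REST API (spaces.get).
 * The space ID, the bearer token, and the useAdminAccess parameter are
 * placeholders/assumptions for illustration only.
 * Build: cc chat_space_get.c -lcurl
 */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Admin-scoped lookup of a specific space (ID is a placeholder). */
    const char *url =
        "https://chat.googleapis.com/v1/spaces/AAAAAAAAAAA?useAdminAccess=true";

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers,
        "Authorization: Bearer ya29.PLACEHOLDER_OAUTH_TOKEN");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    /* The response (a JSON Space resource) is printed to stdout by default. */
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}

The other capabilities map onto the corresponding spaces.patch and spaces.members methods in the same request style.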


Getting started 

  • Admins and developers: 
    • If you are part of the Google Workspace Developer Preview, you will get these features by default. Otherwise, you must apply for access using this form. 
    • Use our Developer Documentation to learn how to authenticate and authorize using administrator privilege. 
  • End users: There is no end user impact or action required. 

Rollout pace 

Availability

  • New features for the Google Chat API scoped to admin users are available to participants of the Google Workspace Developer Preview Program. 

Resources 

Grading periods API for Google Classroom is now available in Developer Preview

What’s changing

Last year, we introduced grading periods, an option that allows administrators and teachers to define and apply grading periods segmented from the entire school year to their Google Classroom assignments. 

Today, we’re excited to announce grading period endpoints and capabilities in the Classroom API, available through the Google Workspace Developer Preview Program. Specifically, developers can now: 

  • Create, modify, and delete grading periods on courses 
  • Read grading periods on courses 
  • Reference and set/read grading periods on CourseWork resources
  • Apply grading period settings to existing coursework items

Who’s impacted 

Developers 


Why you’d use it 

The new grading periods endpoints allow developers to create, modify, and read grading periods in Classroom on behalf of administrators and teachers. 
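To make this concrete, below is a hedged sketch of applying a grading period to an existing coursework item over REST. courses.courseWork.patch is the existing Classroom endpoint; the gradingPeriodId field name, the resource IDs, and the token are placeholders based on this preview announcement rather than confirmed API surface.

/* Hypothetical sketch: attach a grading period to an existing CourseWork item
 * via the Classroom REST API (courses.courseWork.patch). The gradingPeriodId
 * field name, the resource IDs, and the bearer token are placeholders based on
 * the Developer Preview announcement, not confirmed API surface.
 * Build: cc coursework_patch.c -lcurl
 */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* PATCH only the grading-period reference on the coursework item. */
    const char *url = "https://classroom.googleapis.com/v1/courses/123456/"
                      "courseWork/789012?updateMask=gradingPeriodId";
    const char *body = "{\"gradingPeriodId\": \"GRADING_PERIOD_ID\"}";

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers,
        "Authorization: Bearer ya29.PLACEHOLDER_OAUTH_TOKEN");
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PATCH");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}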


Getting started 

  • Admins: The Classroom API provides a RESTful interface for you to manage courses and rosters in Google Classroom. Learn more in the Classroom API overview. 
  • Developers: 
    • To use the grading periods API, developers can apply for access through our Google Workspace Developer Preview Program. 
    • Application developers can use the Classroom API to integrate their apps with Classroom. These apps need to use OAuth 2.0 to request permission to view classes and rosters from teachers. Admins can restrict whether teachers and students in their domain can authorize apps to access their Google Classroom data. 
    • All API and Classroom share button integrations should follow the Classroom brand guidelines. 

Rollout pace 

Availability 

Available for Google Workspace: 
  • Education Plus 

Resources 

Introducing Colab Pro and Colab Pro+ for Google Workspace

 What’s changing

Currently, Google Workspace admins can turn Colab on for their users, allowing them to access the free version of Colab. Beginning today, we’re pleased to announce the Colab Pro and Colab Pro+ standalone subscriptions for Google Workspace customers:

  • Colab Pro offers 100 compute units that grant access to additional powerful GPUs, memory, features, and productivity enhancements enabled with AI assistance.
  • Colab Pro+ gives you all the benefits of Colab Pro, such as productivity enhancements enabled with AI assistance, plus an additional 400 compute units for a total of 500 per month, which grant access to additional powerful GPUs, along with background execution for your longest-running sessions.



Who’s impacted


Admins and end users


Why it’s important


Colab provides access to Google's powerful computational resources, which can be used to train machine learning models and perform other data-intensive tasks — all from your web browser. It can be used for a variety of data science and machine learning tasks, including:


  • Exploratory data analysis
  • Developing with the Gemini API
  • Machine learning model development
  • Natural language processing such as text classification and sentiment analysis
  • Image processing such as object detection and image classification

We know that many Colab users are also Google Workspace users, especially those within educational institutions. Colab Pro and Colab Pro+ can help enhance their work and research with more compute units, faster GPUs, more memory and more.


Further, these offerings can help admins efficiently subscribe an entire team or organization to Colab, thereby minimizing the friction associated with setting up individual accounts for each user. By managing subscriptions at the group level, organizations can more effectively support collaborative workflows and ensure that all team members have access to the resources they need.


Getting started

  • Admins: 
    • Access to the Admin console is required to initiate purchase. To purchase Workspace Colab Pro or Colab Pro+, go to Billing > Get more services > More products. Visit the Help Center to learn more about how to assign licenses to users after purchase, as well as available Google Workspace subscriptions.
    • For Google Workspace for Education K-12 customers: The access control for Colab is OFF by default and requires Admins to enable this control if they want users in their domain to access Google Colab. Before turning access to Colab ON, Admins are required to obtain parental consent for users under 18 because Colab is included as an Additional Service. Consent can be requested with this notice template. 



  • End users: 
    • Colab is OFF by default and can be enabled by your Admin. 
    • Note that if you have purchased Colab's “Pay As You Go” offering for yourself, you can continue using it in addition to Colab Pro and Pro+. 

Chrome Dev for Desktop Update

The Dev channel has been updated to 128.0.6555.2 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvi Bommana
Google Chrome

Hacking for Defenders: approaches to DARPA’s AI Cyber Challenge




The US Defense Advanced Research Projects Agency (DARPA) recently kicked off a two-year AI Cyber Challenge (AIxCC), inviting top AI and cybersecurity experts to design new AI systems to help secure the major open source projects that our critical infrastructure relies upon. As AI continues to grow, it’s crucial to invest in AI tools for defenders, and this competition will help advance technology to do so. 




Google’s OSS-Fuzz and Security Engineering teams have been excited to assist AIxCC organizers in designing their challenges and competition framework. We also playtested the competition by building a Cyber Reasoning System (CRS) tackling DARPA’s exemplar challenge. 




This blog post will share our approach to the exemplar challenge using open source technology found in Google’s OSS-Fuzz, highlighting opportunities where AI can supercharge the platform’s ability to find and patch vulnerabilities, which we hope will inspire innovative solutions from competitors.



Leveraging OSS-Fuzz

AIxCC challenges focus on finding and fixing vulnerabilities in open source projects. OSS-Fuzz, our fuzz testing platform, has been finding vulnerabilities in open source projects as a public service for years, resulting in over 11,000 vulnerabilities found and fixed across 1200+ projects. OSS-Fuzz is free, open source, and its projects and infrastructure are shaped very similarly to AIxCC challenges. Competitors can easily reuse its existing toolchains, fuzzing engines, and sanitizers on AIxCC projects. Our baseline Cyber Reasoning System (CRS) mainly leverages non-AI techniques and has some limitations. We highlight these as opportunities for competitors to explore how AI can advance the state of the art in fuzz testing.



Fuzzing the AIxCC challenges

For userspace Java and C/C++ challenges, fuzzing with engines such as libFuzzer, AFL(++), and Jazzer is straightforward because they use the same interface as OSS-Fuzz.
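
For reference, an OSS-Fuzz-compatible libFuzzer entry point for a C/C++ challenge is only a few lines; parse_input below is a hypothetical stand-in for whatever challenge code the harness exercises.

/* Minimal libFuzzer-style harness, as used across OSS-Fuzz C/C++ projects.
 * parse_input() is a hypothetical stand-in for the challenge code under test.
 * Build (with clang): clang -fsanitize=fuzzer,address harness.c target.c
 */
#include <stddef.h>
#include <stdint.h>

extern int parse_input(const uint8_t *data, size_t size);  /* code under test */

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_input(data, size);   /* sanitizers catch any memory-safety violation */
    return 0;                  /* non-crashing inputs always return 0 */
}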




Fuzzing the kernel is trickier, so we considered two options:



  • Syzkaller, an unsupervised coverage-guided kernel fuzzer

  • A general-purpose coverage-guided fuzzer, such as AFL




Syzkaller has been effective at finding Linux kernel vulnerabilities, but is not suitable for AIxCC because Syzkaller generates sequences of syscalls to fuzz the whole Linux kernel, while AIxCC kernel challenges (exemplar) come with a userspace harness to exercise specific parts of the kernel. 




Instead, we chose to use AFL, which is typically used to fuzz userspace programs. To enable kernel fuzzing, we followed a similar approach to an older blog post from Cloudflare. We compiled the kernel with KCOV and KASAN instrumentation and ran it virtualized under QEMU. Then, a userspace harness acts as a fake AFL forkserver, which executes each input by performing the sequence of syscalls to be fuzzed. 




After every input execution, the harness read the KCOV coverage and stored it in AFL’s coverage counters via shared memory to enable coverage-guided fuzzing. The harness also checked the kernel dmesg log after every run to discover whether the input caused the KASAN sanitizer to trigger.
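
A hedged sketch of that coverage plumbing is shown below, using the documented KCOV debugfs interface; the afl_area pointer, the map size, and the edge-hashing scheme are illustrative assumptions rather than our exact harness code, and error handling is omitted for brevity.

/* Hedged sketch (guest userspace): collect KCOV program-counter traces for one
 * input and fold them into an AFL-style edge-counter map. The AFL shared-memory
 * map (afl_area) comes from the forkserver setup, which is omitted here, and
 * error handling is left out for brevity.
 */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

#define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE     _IO('c', 100)
#define KCOV_DISABLE    _IO('c', 101)
#define KCOV_TRACE_PC   0
#define COVER_SIZE      (64 << 10)   /* number of PCs the KCOV buffer can hold */
#define MAP_SIZE        (1 << 16)    /* classic AFL coverage map size */

extern uint8_t *afl_area;            /* AFL's shared-memory coverage map */

static int kcov_fd;
static unsigned long *cover;

/* One-time KCOV setup for the calling thread. */
void kcov_setup(void) {
    kcov_fd = open("/sys/kernel/debug/kcov", O_RDWR);
    ioctl(kcov_fd, KCOV_INIT_TRACE, COVER_SIZE);
    cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                 PROT_READ | PROT_WRITE, MAP_SHARED, kcov_fd, 0);
    ioctl(kcov_fd, KCOV_ENABLE, KCOV_TRACE_PC);
}

/* After running one input through harness(), fold the PC trace into AFL's map. */
void kcov_record(void) {
    unsigned long n = cover[0], prev = 0;
    for (unsigned long i = 0; i < n; i++) {
        unsigned long pc = cover[i + 1];
        afl_area[(prev ^ pc) % MAP_SIZE]++;  /* hash (prev, pc) edges, AFL-style */
        prev = pc >> 1;
    }
    cover[0] = 0;                            /* reset the trace for the next input */
}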




Some changes to Cloudflare’s harness were required in order for this to be pluggable with the provided kernel challenges. We needed to turn the harness into a library/wrapper that could be linked against arbitrary AIxCC kernel harnesses.




AIxCC challenges come with their own main() which takes in a file path. The main() function opens and reads this file, and passes it to the harness() function, which takes in a buffer and size representing the input. We made our wrapper work by wrapping the main() during compilation via $CC -Wl,--wrap=main harness.c harness_wrapper.a  




The wrapper starts by setting up KCOV, the AFL forkserver, and shared memory. The wrapper also reads the input from stdin (which is what AFL expects by default) and passes it to the harness() function in the challenge harness. 
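
Structurally, the wrapper looks roughly like the sketch below. harness() is the entry point the challenges expose (a buffer plus a size, per the description above; the return type is assumed), and the setup helpers are placeholders for the KCOV and forkserver plumbing rather than the actual competition code.

/* harness_wrapper.c: hedged sketch of the wrapper linked in via
 * $CC -Wl,--wrap=main harness.c harness_wrapper.a
 * With --wrap=main, the C runtime's reference to main() resolves to
 * __wrap_main(), so the challenge's own main() (now __real_main) never runs.
 * The helper functions are placeholders for the setup described in the post.
 */
#include <stdint.h>
#include <stdio.h>

extern int harness(uint8_t *buf, size_t size);  /* provided by the challenge; return type assumed */

/* Placeholders for the real setup/teardown steps. */
static void kcov_setup(void)          { /* open /sys/kernel/debug/kcov, enable tracing */ }
static void afl_forkserver_start(void){ /* handshake with afl-fuzz over the forkserver FDs */ }
static void kcov_record(void)         { /* fold the PC trace into AFL's shared map */ }

int __wrap_main(int argc, char **argv) {
    (void)argc; (void)argv;
    kcov_setup();
    afl_forkserver_start();   /* the real wrapper handles the per-input loop here */

    /* AFL delivers each input on stdin by default. */
    static uint8_t buf[1 << 20];
    size_t size = fread(buf, 1, sizeof(buf), stdin);

    int ret = harness(buf, size);

    kcov_record();            /* publish coverage so afl-fuzz can guide mutation */
    return ret;
}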




Because AIxCC's harnesses aren't within our control and may misbehave, we had to be careful with memory or FD leaks within the challenge harness. Indeed, the provided harness has various FD leaks, which means that fuzzing it will very quickly become useless as the FD limit is reached.




To address this, we could either:


  • Forcibly close FDs created during the run of the harness by checking for newly created FDs via /proc/self/fd before and after the execution of the harness, or

  • Just fork the userspace harness by actually forking in the forkserver. 




The first approach worked for us. The latter is likely more reliable, but may worsen performance.
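
A minimal sketch of the first approach, assuming fixed-size FD arrays: snapshot /proc/self/fd before the harness runs, then close anything new afterwards.

/* Hedged sketch of the first approach: snapshot open FDs via /proc/self/fd
 * before calling harness(), then close anything that appeared afterwards.
 * Array sizes and the skip-stdio rule are illustrative choices.
 */
#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Fill `fds` with currently open descriptors (excluding stdio and the
 * directory stream itself); returns how many were recorded. */
static size_t snapshot_fds(int *fds, size_t max) {
    DIR *d = opendir("/proc/self/fd");
    if (!d) return 0;
    size_t n = 0;
    struct dirent *e;
    while ((e = readdir(d)) && n < max) {
        int fd = atoi(e->d_name);   /* "." and ".." parse to 0 and are skipped */
        if (fd > 2 && fd != dirfd(d))
            fds[n++] = fd;
    }
    closedir(d);
    return n;
}

/* Close every descriptor present in `after` but absent from `before`. */
static void close_leaked_fds(const int *before, size_t nb,
                             const int *after, size_t na) {
    for (size_t i = 0; i < na; i++) {
        int leaked = 1;
        for (size_t j = 0; j < nb; j++)
            if (after[i] == before[j]) { leaked = 0; break; }
        if (leaked) close(after[i]);
    }
}

The wrapper would call snapshot_fds() before invoking harness(), snapshot again afterwards, and hand both sets to close_leaked_fds().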




All of these efforts enabled afl-fuzz to fuzz the Linux exemplar, but the vulnerability cannot be easily found even after hours of fuzzing, unless provided with seed inputs close to the solution.


Improving fuzzing with AI

This limitation of fuzzing highlights a potential area for competitors to explore AI’s capabilities. The complicated input format, combined with slow execution speeds, makes the exact reproducer hard to discover. Using AI could unlock the ability for fuzzing to find this vulnerability quickly—for example, by asking an LLM to generate seed inputs (or a script to generate them) close to the expected input format based on the harness source code. Competitors might find inspiration in some interesting experiments done by Brendan Dolan-Gavitt from NYU, which show promise for this idea.


Another approach: static analysis

One alternative to fuzzing to find vulnerabilities is to use static analysis. Static analysis traditionally has challenges with generating high numbers of false positives, as well as difficulties in proving exploitability and reachability of the issues it points out. LLMs could help dramatically improve bug-finding capabilities by augmenting traditional static analysis techniques with increased accuracy and analysis capabilities.


Proof of understanding (PoU)

Once fuzzing finds a reproducer, we can produce key evidence required for the PoU:

  1. The culprit commit, which can be found from git history bisection.

  2. The expected sanitizer, which can be found by running the reproducer to get the crash and parsing the resulting stack trace (a minimal sketch of this parsing step follows the list).
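
As an illustration of that parsing step, a small helper can scan the crash output (dmesg for kernel challenges, the sanitizer report for userspace ones) for a sanitizer tag. The tag strings below are assumptions for illustration; the official PoU format may differ.

/* Hedged sketch: identify which sanitizer fired by scanning a crash log.
 * The tag list is illustrative; the official PoU format may differ.
 */
#include <stdio.h>
#include <string.h>

static const char *detect_sanitizer(const char *log) {
    if (strstr(log, "KASAN:"))                      return "KASAN";
    if (strstr(log, "AddressSanitizer:"))           return "AddressSanitizer";
    if (strstr(log, "UndefinedBehaviorSanitizer:")) return "UBSan";
    if (strstr(log, "FuzzerSecurityIssue"))         return "Jazzer";  /* Java challenges */
    return NULL;
}

int main(void) {
    /* Read the crash log (e.g. captured dmesg output) from stdin. */
    static char log[1 << 20];
    size_t n = fread(log, 1, sizeof(log) - 1, stdin);
    log[n] = '\0';

    const char *san = detect_sanitizer(log);
    printf("%s\n", san ? san : "no sanitizer report found");
    return san ? 0 : 1;
}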


Next step: “patching” via delta debugging

Once the culprit commit has been identified, one obvious way to “patch” the vulnerability is to just revert this commit. However, the commit may include legitimate changes that are necessary for functionality tests to pass. To ensure functionality doesn’t break, we could apply delta debugging: we progressively try to include/exclude different parts of the culprit commit until the vulnerability no longer triggers while all functionality tests still pass.
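
Assuming the culprit commit has been split into per-hunk patch files and the challenge ships build, reproducer, and test scripts (all of the names below are placeholders), the delta-debugging loop is a brute-force search along these lines:

/* Hedged sketch of the brute-force delta-debugging loop: try reverting subsets
 * of the culprit commit's hunks until the crash no longer reproduces and the
 * functionality tests still pass. The hunk files and the build/reproducer/test
 * scripts are placeholders, not part of the competition framework.
 */
#include <stdio.h>
#include <stdlib.h>

#define N_HUNKS 8   /* hunks previously split out of the culprit commit */

/* Convenience wrapper: returns the command's exit status. */
static int run(const char *cmd) { return system(cmd); }

int main(void) {
    char cmd[256];
    /* Enumerate subsets of hunks to revert; fine for small commits. */
    for (unsigned mask = 1; mask < (1u << N_HUNKS); mask++) {
        run("git checkout -- .");                 /* restore the original tree */
        int applied = 1;
        for (int i = 0; i < N_HUNKS; i++) {
            if (!(mask & (1u << i))) continue;
            snprintf(cmd, sizeof(cmd), "git apply -R hunks/%d.patch", i);
            if (run(cmd) != 0) { applied = 0; break; }  /* hunk did not apply */
        }
        if (!applied) continue;
        if (run("./build.sh") != 0) continue;           /* must still build */
        if (run("./run_reproducer.sh") != 0) continue;  /* nonzero = crash still triggers */
        if (run("./run_tests.sh") != 0) continue;       /* functionality broke */
        printf("candidate patch found: reverted hunk set 0x%x\n", mask);
        return 0;
    }
    fprintf(stderr, "no hunk subset both removes the crash and keeps tests green\n");
    return 1;
}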




This is a rather brute force approach to “patching.” There is no comprehension of the code being patched, and it will likely not work for more complicated patches that include subtle changes required to fix the vulnerability without breaking functionality. 



Improving patching with AI

These limitations highlight a second area for competitors to apply AI’s capabilities. One approach might be to use an LLM to suggest patches. A 2024 whitepaper from Google walks through one way to build an LLM-based automated patching pipeline.




Competitors will need to address the following challenges:


  • Validating the patches by running the crash reproducers and functionality tests to ensure the crash was prevented and the functionality was not impacted

  • Narrowing prompts to include only the functions present in the crashing stack trace, to fit prompt limitations

  • Building a validation step to filter out invalid patches




Using an LLM agent is likely another promising approach, where competitors could combine an LLM’s generation capabilities with the ability to compile and receive debug test failures or stacktraces iteratively.




Advancing security for everyone

Collaboration is essential to harness the power of AI as a widespread tool for defenders. As advancements emerge, we’ll integrate them into OSS-Fuzz, meaning that the outcomes from AIxCC will directly improve security for the open source ecosystem. We’re looking forward to the innovative solutions that result from this competition!

Bringing our Learning Interoperability Tools under one umbrella: Google Workspace LTI™

What’s changing 

Going forward, all Learning Interoperability Tools, including Assignments LTI™ and Google Drive LTI™, will be consolidated into a single category: Google Workspace LTI™. There are no functionality changes with this update, but you will notice the following: 
  • Assignments will be renamed to Google Workspace LTI™ in the Admin console and Google Workspace LTI™ in the Google Cloud console 
  • All LTI tools will be managed in the Admin console with the Google Workspace LTI™ service 
  • The Google Assignments Help Center will be rebranded to Google Workspace LTI™. The community forum will remain the same. 

Getting started 

Rollout pace

  • Available now.

Availability

Available for Google Workspace:
  • Education Fundamentals, Standard, Plus, and the Teaching & Learning Upgrade

Resources

Chrome for Android Update

Hi, everyone! We've just released Chrome 126 (126.0.6478.122) for Android. It'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Android releases contain the same security fixes as their corresponding Desktop (Windows & Mac: 126.0.6478.126/127 and Linux: 126.0.6478.126) unless otherwise noted.


Erhu Akpobaro
Google Chrome