Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 137 (137.0.7151.23) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Chrome Release Team
Google Chrome

Chrome for Android Update

Hi, everyone! We've just released Chrome 136 (136.0.7103.125) for Android. It'll become available on Google Play over the next few days.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Android releases contain the same security fixes as their corresponding Desktop releases (Windows and Mac: 136.0.7103.113/.114; Linux: 136.0.7103.113) unless otherwise noted.


Krishna Govind
Google Chrome

Stable Channel Update for Desktop

The Stable channel has been updated to 136.0.7103.113/.114 for Windows and Mac and 136.0.7103.113 for Linux, which will roll out over the coming days/weeks. A full list of changes in this build is available in the Log.

Security Fixes and Rewards


Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

This update includes 4 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[N/A][415810136] High CVE-2025-4664: Insufficient policy enforcement in Loader. Source: X post from @slonser_ on 2025-05-05

[TBD][412578726] High CVE-2025-4609: Incorrect handle provided in unspecified circumstances in Mojo. Reported by Micky on 2025-04-22


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

Google is aware of reports that an exploit for CVE-2025-4664 exists in the wild.

As usual, our ongoing internal security work was responsible for a wide range of fixes:

[417268830] Various fixes from internal audits, fuzzing and other initiatives

Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.

Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Srinivas Sista
Google Chrome

Extract and categorize data in AppSheet with the power of Gemini

What’s changing

At Google Cloud Next 2025, we introduced Gemini in AppSheet solutions for AppSheet Enterprise Plus users. Now you can automatically extract key information from uploaded photos, parse through complex PDFs, or even categorize, route and prioritize incoming requests based on their content – all seamlessly within your existing AppSheet apps. The new AI Task (Preview) feature, powered by Gemini, makes this a reality.

AppSheet Enterprise Plus users can now leverage this new AI Task when building automations. These tasks let Gemini handle the heavy lifting, whether that's extracting details from equipment photos, processing PDF purchase orders, or categorizing incoming requests by their nature, all directly within your AppSheet apps.

To ensure these AI-powered features work precisely as you intend, you can quickly test individual steps within your workflow using the integrated AI Task Step Testing feature, now generally available (GA). This allows you to iterate with speed and confidence.

The AI Task feature, including the extract and categorize functionality, is available in preview for all AppSheet Enterprise Plus users. AI Task Step Testing is now GA for all AppSheet Enterprise Plus users.

Testing the extract AI Task feature in AppSheet


Who’s impacted

Admins and end users


Why it’s important


AI Made Easy & Powerful: Integrating Gemini into AppSheet significantly simplifies how you incorporate advanced AI functions, such as data extraction and categorization, into your apps. It's designed to be user-friendly and effective:
  • Get started quickly: The extract and categorize AI capabilities work out-of-the-box once enabled (see "Getting started" below).
  • Customize with context: Easily add additional context or instructions to tailor the results to your specific needs.
Here are just a few examples of what you can do:


Extract Information:
  • Have technicians snap a photo of equipment; use the AI Task to automatically extract the Serial Number, Model Number, or Meter Reading into your AppSheet table.
  • Process uploaded purchase orders (PDFs) or photos of shipping labels to extract PO numbers, company names, tracking numbers, or addresses.
  • Extract key details like location, date, or names from incident reports.
  • Extract purchase details from a PDF contract, categorize order details by fulfillment team, and assign follow-up action items.
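
AppSheet configures all of this in the automation editor without writing code. For readers curious what the underlying extraction step looks like, here is a rough standalone sketch using the public google-generativeai Python SDK. It illustrates the pattern, not AppSheet's implementation; the model version, field names, and placeholder API key are assumptions.

    # Illustration only: AppSheet AI Tasks do this without code. The model
    # version, field names, and API key below are assumptions.
    import json
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model version

    def extract_fields(image_path: str) -> dict:
        """Pull structured fields out of an equipment photo."""
        image = genai.upload_file(image_path)
        prompt = (
            "Extract the serial number, model number, and meter reading from "
            "this equipment photo. Respond with JSON using the keys "
            '"serial_number", "model_number", and "meter_reading".'
        )
        response = model.generate_content(
            [prompt, image],
            generation_config={"response_mime_type": "application/json"},
        )
        return json.loads(response.text)

    print(extract_fields("equipment.jpg"))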

Categorize Records:
  • Analyze the description in employee expense submissions and automatically categorize them by type ("Travel", "Meals", "Software", "Training").
  • Read incoming facility maintenance requests and categorize them by urgency ("High", "Medium", "Low") or equipment type ("HVAC", "Plumbing", "Electrical").
  • Classify customer survey responses or feedback form submissions into types like "Bug Report", "Feature Request", "Positive Feedback", or "Billing Inquiry".
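
The categorization pattern is similar. As a rough sketch of what the AI Task does conceptually, again using the public google-generativeai SDK rather than anything AppSheet-specific (the category set and model version are assumptions):

    # Illustration only: constrained-category classification of free text.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model version

    CATEGORIES = ["Travel", "Meals", "Software", "Training"]

    def categorize_expense(description: str) -> str:
        """Map a free-text expense description onto one known category."""
        prompt = (
            f"Classify this expense description into exactly one of "
            f"{CATEGORIES}. Reply with the category name only.\n\n"
            f"Description: {description}"
        )
        label = model.generate_content(prompt).text.strip()
        # Fall back to a review bucket if the model strays off the list.
        return label if label in CATEGORIES else "Needs Review"

    print(categorize_expense("Flight to Denver for the Q3 customer summit"))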

Before deploying your AI-powered automations, remember you can use the AI Task Step Testing feature, now in GA, to rigorously validate how your AI Tasks (Preview) perform with your specific data and instructions. This crucial step allows you to iterate, refine configurations, and see immediate results on data from your table, ensuring your AI-powered automations work as intended without needing to test the entire bot.


Additional details

  • Usage of AI tasks during preview: During this Public Preview phase, AppSheet Enterprise Plus users have complimentary access to explore and learn these Gemini AI Task capabilities. We plan to track usage against entitled credits when these AI Task features become generally available.
  • Testing quotas: Running inline tests using the AI Task Step Testing feature will count towards your general AppSheet automation limits and entitled credits when AI Task becomes generally available.
  • Admin governance policies can be applied as described below in "Getting started."
  • App creators must enable preview features for their apps as described below in "Getting started."


Getting started

  • Admins: 
    • Optional: For AppSheet Enterprise Plus accounts, Gemini features are accessible to app creators who follow the steps below. However, Admins can define governance policies to control or disable the use of AI Tasks powered by Gemini. Learn more about controlling which app creators can use AI in automations.

  • End users:
    •  App Creators - Required for Preview access: 
      • To use Gemini in AppSheet solutions’ AI Tasks during this public preview, please follow the instructions to turn Gemini in AppSheet solutions on or off.

        As part of this process, you will need to ensure your app is included in the AppSheet preview program - learn more about the AppSheet preview program here. This step is required for access unless an organization or team policy prevents usage (see the Admin section above).

      • As you explore the new Gemini AI Task capabilities in AppSheet, your feedback is invaluable in helping us shape this feature for GA. We encourage you to take a few minutes to share your thoughts through our feedback survey. Your responses are confidential and will be used for research purposes only. 

Rollout pace

  • Gemini in AppSheet solutions’ Extract and Categorize AI tasks are now available in preview.
  • AI Task Step Testing is GA for all AppSheet Enterprise Plus customers.

Availability

  • Gemini in AppSheet solutions (preview) and AI Task Step Testing require an AppSheet Enterprise Plus account

Resources



Announcing LMEval: An Open Source Framework for Cross-Model Evaluation

Authors: Elie Bursztein - Distinguished Research Scientist & David Tao - Software Engineer, Applied Security and Safety Research

Simplifying Cross-Provider Model Benchmarking

At InCyber Forum Europe in April, we open sourced LMEval, a large model evaluation framework, to help others accurately and efficiently compare how models from various providers perform across benchmark datasets. This announcement coincided with a joint talk with Giskard about our collaboration to increase trust in model safety and security. Giskard uses LMEval to run the Phare benchmark, which independently evaluates popular models' security and safety.

Results from the Phare benchmark that leverages LMEval for evaluation

Rapid Changes in the Landscape of Large Models

New Large Language Models (LLMs) are released constantly, often promising improvements and new features. To keep up with this fast-paced lifecycle, developers, researchers, and organizations must quickly and reliably evaluate if those newer models are better suited for their specific applications. So far, rapid model evaluation has proven difficult, as it requires tools that allow scalable, accurate, easy-to-use, cross-provider benchmarking.

Introducing LMEval: Simplifying Cross-Provider Model Benchmarking

To address this challenge, we are excited to introduce LMEval (Large Model Evaluator), an open source framework that Google developed to streamline the evaluation of LLMs across diverse benchmark datasets and model providers. LMEval is designed from the ground up to be accurate, multimodal, and easy-to-use. Its key features include:

  • Multi-Provider Compatibility: Evaluating models shouldn't require wrestling with different APIs for each provider. LMEval leverages the LiteLLM framework to offer out-of-the-box compatibility with major model providers including Google, OpenAI, Anthropic, Ollama, and Hugging Face. You can define your benchmark once and run it consistently across various models with minimal code changes.
  • Incremental & Efficient Evaluation: Re-running an entire benchmark suite every time a new model or version is released is slow, inefficient and costly. LMEval's intelligent evaluation engine plans and executes evaluations incrementally. It runs only the necessary evaluations for new models, prompts, or questions, saving significant time and compute resources. Its multi-threaded engine further accelerates this process.
  • Multimodal & Multi-Metric Support: Modern foundation models go beyond text. LMEval is designed for multimodal evaluation, supporting benchmarks that include text, images and code. Adding new modalities is straightforward. Furthermore, it supports various scoring metrics to support a wide range of benchmark formats from boolean questions, to multi-choices, to free form generation. Additionally, LMEval provides support for safety/punting detection.
  • Scalable & Secure Storage: To store benchmark results in a secure and efficient manner, LMEval utilizes a self-encrypting SQLite database. This approach protects benchmark data and results from inadvertent crawling/indexing while they stay easily accessible through LMEval.

Getting Started with LMEval

Creating and running evaluations with LMEval is designed to be intuitive. Here's a simplified example demonstrating how to evaluate two Gemini model versions on a benchmark:

Example of LMEval running on a multimodal benchmark across two models.
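
The original post presents this example as a screenshot. The sketch below reconstructs its general shape from the pattern in the repository's README; treat the class and method names as assumptions rather than the definitive API, and verify them against the repository's example notebooks.

    # Hedged reconstruction; verify names against the LMEval example notebooks.
    from lmeval import Benchmark, Category, Task, TaskType, Question
    from lmeval.scorers import get_scorer, ScorerType
    from lmeval.prompts import QuestionOnlyPrompt
    from lmeval.models.gemini import GeminiModel
    from lmeval.evaluator import Evaluator

    # Define the benchmark once: one category, one scored task, one question.
    benchmark = Benchmark(name="cat-facts", description="Basic cat questions")
    category = Category(name="behavior")
    benchmark.categories.append(category)

    scorer = get_scorer(ScorerType.contain_text_insensitive)
    task = Task(name="purring", type=TaskType.text_generation, scorer=scorer)
    category.tasks.append(task)
    task.questions.append(Question(id=0, question="Why do cats purr?",
                                   answer="contentment"))

    # Evaluate two Gemini versions; the planner runs only what is missing,
    # so adding a third model later re-runs nothing that already completed.
    models = [GeminiModel(), GeminiModel(model_version="gemini-1.5-flash-8b")]
    evaluator = Evaluator(benchmark)
    evaluator.plan(models, QuestionOnlyPrompt())
    completed = evaluator.execute()
    evaluator.save(completed, "cat_eval.db")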

The LMEval GitHub repository includes example notebooks to help you get started.

Visualizing Results with LMEvalboard

Understanding benchmark results requires more than just summary statistics. To help with this, LMEval includes LMEvalboard, a companion dashboard tool that offers an interactive visualization of how models stack up against each other. LMEvalboard provides valuable insights into model strengths and weaknesses, complementing traditional raw evaluation data.

The LMEvalboard UI lets you quickly analyze how models compare on a given benchmark

LMEvalboard allows you to:

  • View Overall Performance: Quickly compare all models' accuracy across the entire benchmark.
  • Analyze a Single Model: Dive deep into a specific model's performance characteristics across different categories using radar charts, and drill down on specific examples of failures.
  • Perform Head-to-Head Comparisons: Directly compare two models, visualizing their performance differences across categories and examining specific questions where they disagree.

Try LMEval Today!

We invite you to explore LMEval, use it for your own evaluations, and contribute to its development by heading to the LMEval GitHub repository: https://github.com/google/lmeval

Acknowledgements

LMEval would not have been possible without the help of many people, including: Luca Invernizzi, Lenin Simicich, Marianna Tishchenko, Amanda Walker, and many other Googlers.

Stable Channel Update for ChromeOS / ChromeOS Flex

M-136, ChromeOS version 16238.47.0 (Browser version 136.0.7103.102), has rolled out to ChromeOS devices on the Stable channel.

If you find new issues, please let us know in one of the following ways:

  1. File a bug

  2. Visit our ChromeOS communities

    1. General: Chromebook Help Community

    2. Beta Specific: ChromeOS Beta Help Community

  3. Report an issue or send feedback on Chrome

  4. Interested in switching channels? Find out how.


Andy Wu

Google ChromeOS


Google Workspace apps for Gmail, Google Drive, Google Docs, Calendar, Keep, and Tasks are now generally available for the Gemini app

What’s changing

Earlier this year, we launched Google Workspace apps (formerly known as "extensions") in Gemini in open beta. When enabled, Gemini can reference and incorporate data from these apps to generate more informed and relevant responses, bringing Gemini’s capabilities more seamlessly into your daily workflows and helping enhance productivity. Workspace apps are available for:

  • Calendar
  • Docs
  • Drive
  • Gmail
  • Keep
  • Tasks

Beginning today, we’re pleased to announce that these apps are now generally available.

Who’s impacted

Admins and end users

Why you’d use them

When enabled, Gemini can interact with other Google apps and services to provide more contextual and relevant responses to your prompts and take certain actions across apps. For example, you can:

  • Gmail: ask what dates were proposed in the email about the team offsite.
  • Drive: find the document about the website clean-up project and summarize the proposal in a few bullet points.
  • Docs: reference a Doc that outlines your target audiences while performing customer research in parallel. 
  • Calendar: create an event based on specific details or on your conversation with the Gemini app, find events for a specific day or date range, and modify events in Calendar.
  • Tasks: add reminders and tasks, including those based on your conversations with Gemini, or view and update a list of your tasks.
  • Keep: create notes and lists, add items to an existing list, find content from your notes and lists, and reference your notes and lists in conversation with Gemini.

Additional details

  • Context-Aware Access (CAA) support:
    • Like all Google apps, Context-Aware Access (CAA) policies can be applied to the Gemini app. Note that at the moment, CAA policies can only be applied to Gemini on the web; they cannot be applied on mobile.
    • If these policies are not met, users will not be allowed to access Gemini.
    • If a user meets the Gemini app specific CAA policies, they can interact with Gemini. If a prompt requires access to Workspace data, CAA policies will determine if a user has access to the requested data. For example, if a user asks Gemini "What are my unread emails?", they will see the requested content if they have access or they’ll receive an error message.

  • If your prompt includes multiple actions that require separate apps or services, but one or more of the required services are not enabled, neither of the actions will be completed. For example, if you prompt Gemini to create an event on your calendar and a reminder for that event but the Tasks extension is not enabled, the event will not be added to your calendar and you will not get a reminder.

  • Google Workspace extensions are not available to Google Workspace users accessing Gemini as an additional Google service.

  • Google Workspace extensions are only available to users 13+.

  • Prompts, responses, and content the Gemini app accesses via apps (formerly known as extensions) are not reviewed by anyone to improve AI models, are not used to train AI models, and are not shared with other users or institutions.

Getting started

  • Admins: Access to Workspace apps is controlled by the new “Allow access to Workspace apps after Beta (upcoming general availability)” setting. Access is ON by default unless you previously took action on the setting leading into general availability. Visit the Help Center for more information on turning Workspace apps in Gemini on or off.

  • End users: If enabled by your admin, connecting Google Workspace allows users to summarize, get quick answers, and find information from Calendar, Keep and Tasks in addition to Gmail, Docs, and Drive directly in Gemini. Visit the Help Center to learn more about using Google Workspace extensions.

Rollout pace



Availability

Available for Google Workspace:
  • Business Starter, Standard, Plus
  • Enterprise Starter, Standard, Plus
  • Education Fundamentals, Standard, and Plus
  • Frontline Starter and Standard
  • Essentials, Enterprise Essentials, and Enterprise Essentials Plus
  • Nonprofits

Available for Google Workspace customers with these add-ons:
  • Gemini Business*
  • Gemini Enterprise*
  • Gemini Education
  • Gemini Education Premium
*As of January 15, 2025, we’re no longer offering the Gemini Business and Gemini Enterprise add-ons for sale. Please refer to this announcement for more details.
