Monthly Archives: May 2023

Dev Channel Update for ChromeOS / ChromeOS Flex

The Dev channel is being updated to OS version 15474.5.0 (Browser version 115.0.5790.7) for most ChromeOS devices.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Daniel Gagnon,
Google ChromeOS

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 115 (115.0.5790.13) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Beta Channel Update for Desktop

The Chrome team is excited to announce the promotion of Chrome 115 to the Beta channel for Windows, Mac and Linux. Chrome 115.0.5790.13 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore. Please head to the Chromium blog to learn more!



A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Prudhvikumar Bommana
Google Chrome

Large sequence models for software development activities

Software isn’t created in one dramatic step. It improves bit by bit, one little step at a time — editing, running unit tests, fixing build errors, addressing code reviews, editing some more, appeasing linters, and fixing more errors — until finally it becomes good enough to merge into a code repository. Software engineering isn’t an isolated process, but a dialogue among human developers, code reviewers, bug reporters, software architects and tools, such as compilers, unit tests, linters and static analyzers.

Today we describe DIDACT (Dynamic Integrated Developer ACTivity), which is a methodology for training large machine learning (ML) models for software development. The novelty of DIDACT is that it uses the process of software development as the source of training data for the model, rather than just the polished end state of that process, the finished code. By exposing the model to the contexts that developers see as they work, paired with the actions they take in response, the model learns about the dynamics of software development and is more aligned with how developers spend their time. We leverage instrumentation of Google's software development to scale up the quantity and diversity of developer-activity data beyond previous works. Results are extremely promising along two dimensions: usefulness to professional software developers, and as a potential basis for imbuing ML models with general software development skills.

DIDACT is a multi-task model trained on development activities that include editing, debugging, repair, and code review.

We built and deployed three DIDACT tools internally: Comment Resolution (which we recently announced), Build Repair, and Tip Prediction, each integrated at a different stage of the development workflow. All three of these tools received enthusiastic feedback from thousands of internal developers. We see this as the ultimate test of usefulness: do professional developers, who are often experts on the code base and who have carefully honed workflows, leverage the tools to improve their productivity?

Perhaps most excitingly, we demonstrate how DIDACT is a first step towards a general-purpose developer-assistance agent. We show that the trained model can be used in a variety of surprising ways, via prompting with prefixes of developer activities, and by chaining together multiple predictions to roll out longer activity trajectories. We believe DIDACT paves a promising path towards developing agents that can generally assist across the software development process.


A treasure trove of data about the software engineering process

Google’s software engineering toolchains store every operation related to code as a log of interactions among tools and developers, and have done so for decades. In principle, one could use this record to replay in detail the key episodes in the “software engineering video” of how Google’s codebase came to be, step-by-step — one code edit, compilation, comment, variable rename, etc., at a time.

Google code lives in a monorepo, a single repository of code for all tools and systems. A software developer typically experiments with code changes in a local copy-on-write workspace managed by a system called Clients in the Cloud (CitC). When the developer is ready to package a set of code changes together for a specific purpose (e.g., fixing a bug), they create a changelist (CL) in Critique, Google’s code-review system. As with other types of code-review systems, the developer engages in a dialog with a peer reviewer about functionality and style. The developer edits their CL to address reviewer comments as the dialog progresses. Eventually, the reviewer declares “LGTM!” (“looks good to me”), and the CL is merged into the code repository.

Of course, in addition to a dialog with the code reviewer, the developer also maintains a “dialog” of sorts with a plethora of other software engineering tools, such as the compiler, the testing framework, linters, static analyzers, fuzzers, etc.

An illustration of the intricate web of activities involved in developing software: small actions by the developer, interactions with a code reviewer, and invocations of tools such as compilers.

A multi-task model for software engineering

DIDACT utilizes interactions among engineers and tools to power ML models that assist Google developers, by suggesting or enhancing actions developers take — in context — while pursuing their software-engineering tasks. To do that, we have defined a number of tasks about individual developer activities: repairing a broken build, predicting a code-review comment, addressing a code-review comment, renaming a variable, editing a file, etc. We use a common formalism for each activity: it takes some State (a code file), some Intent (annotations specific to the activity, such as code-review comments or compiler errors), and produces an Action (the operation taken to address the task). This Action is like a mini programming language, and can be extended for newly added activities. It covers things like editing, adding comments, renaming variables, marking up code with errors, etc. We call this language DevScript.

The DIDACT model is prompted with a task, code snippets, and annotations related to that task, and produces development actions, e.g., edits or comments.
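
To make the formalism concrete, here is a minimal sketch of how state, intent, and action might be represented. All of the names are hypothetical; DIDACT's internal representation has not been published.

```python
from dataclasses import dataclass

@dataclass
class State:
    """The code file as the developer currently sees it."""
    path: str
    content: str

@dataclass
class Intent:
    """Activity-specific annotations, e.g., a review comment or build error."""
    kind: str        # "code_review_comment", "build_error", ...
    annotation: str  # the comment text or the compiler diagnostic

@dataclass
class Action:
    """The operation taken in response, expressed in a DevScript-like form."""
    devscript: str   # e.g., 'rename(old="count", new="num_items")'
```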

This state-intent-action formalism enables us to capture many different tasks in a general way. What’s more, DevScript is a concise way to express complex actions, without the need to output the whole state (the original code) as it would be after the action takes place; this makes the model more efficient and more interpretable. For example, a rename might touch a file in dozens of places, but a model can predict a single rename action.
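
As an illustration of why one compact action is cheaper than regenerating the whole file, the toy sketch below applies a single rename action to every reference in one pass. This is our own interpretation for illustration, not DevScript itself.

```python
import re

def apply_rename(source: str, old: str, new: str) -> str:
    """Apply one rename action to every whole-word occurrence of `old`."""
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

code = "def scale(v, f):\n    return [x * f for x in v]\n"
# The model only needs to emit 'rename(old="f", new="factor")' rather than
# the entire rewritten file.
print(apply_rename(code, "f", "factor"))
```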


An ML peer programmer

DIDACT does a good job on individual assistive tasks. For example, below we show DIDACT doing code clean-up after functionality is mostly done. It looks at the code along with some final comments by the code reviewer (marked with “human” in the animation), and predicts edits to address those comments (rendered as a diff).

Given an initial snippet of code and the comments that a code reviewer attached to that snippet, the Pre-Submit Cleanup task of DIDACT produces edits (insertions and deletions of text) that address those comments.

The multimodal nature of DIDACT also gives rise to some surprising capabilities, reminiscent of behaviors emerging with scale. One such capability is history augmentation, which can be enabled via prompting. Knowing what the developer did recently enables the model to make a better guess about what the developer should do next.

An illustration of history-augmented code completion in action.

One powerful task exemplifying this capability is history-augmented code completion. In this example, the developer adds a new function parameter (1) and moves the cursor into the documentation (2). Conditioned on the history of developer edits and the cursor position, the model completes the line (3) by correctly predicting the docstring entry for the new parameter.
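
A minimal sketch of how such conditioning could work is shown below: recent edits and a cursor marker are serialized into the model's input. The encoding is purely an assumption of ours; DIDACT's actual prompt format is internal to Google.

```python
def build_history_prompt(file_text: str, cursor: int, edits: list[str]) -> str:
    """Serialize recent edit history plus a cursor-marked file into one prompt."""
    marked = file_text[:cursor] + "<|cursor|>" + file_text[cursor:]
    history = "\n".join(f"EDIT: {e}" for e in edits[-5:])  # most recent edits only
    return f"{history}\nSTATE:\n{marked}\nACTION:"
```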

An illustration of edit prediction, over multiple chained iterations.

In an even more powerful history-augmented task, edit prediction, the model can choose where to edit next in a fashion that is historically consistent. If the developer deletes a function parameter (1), the model can use history to correctly predict an update to the docstring (2) that removes the deleted parameter (without the human developer manually placing the cursor there) and to update a statement in the function (3) in a syntactically (and — arguably — semantically) correct way. With history, the model can unambiguously decide how to continue the “editing video” correctly. Without history, the model wouldn’t know whether the missing function parameter is intentional (because the developer is in the process of a longer edit to remove it) or accidental (in which case the model should re-add it to fix the problem).

The model can go even further. For example, we started with a blank file and asked the model to successively predict what edits would come next until it had written a full code file. The astonishing part is that the model developed code in a step-by-step way that would seem natural to a developer: It started by first creating a fully working skeleton with imports, flags, and a basic main function. It then incrementally added new functionality, like reading from a file and writing results, and added functionality to filter out some lines based on a user-provided regular expression, which required changes across the file, like adding new flags.
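
Conceptually, such a rollout is a loop that feeds each predicted action back into the state, as in the hedged sketch below; the `model.predict` interface is our assumption, not a published API.

```python
def rollout(model, state: str, max_steps: int = 50) -> str:
    """Chain single-step predictions into a longer activity trajectory."""
    history: list[str] = []
    for _ in range(max_steps):
        action = model.predict(state=state, history=history)  # hypothetical API
        if action is None:  # the model signals that it is done editing
            break
        state = action.apply(state)  # apply the predicted edit to the file
        history.append(repr(action))
    return state
```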


Conclusion

DIDACT turns Google's software development process into training demonstrations for ML developer assistants, and uses those demonstrations to train models that construct code in a step-by-step fashion, interactively with tools and code reviewers. These innovations are already powering tools enjoyed by Google developers every day. The DIDACT approach complements the great strides taken by large language models at Google and elsewhere, towards technologies that ease toil, improve productivity, and enhance the quality of work of software engineers.


Acknowledgements

This work is the result of a multi-year collaboration among Google Research, Google Core Systems and Experiences, and DeepMind. We would like to acknowledge our colleagues Jacob Austin, Pascal Lamblin, Pierre-Antoine Manzagol, and Daniel Zheng, who join us as the key drivers of this project. This work could not have happened without the significant and sustained contributions of our partners at Alphabet (Peter Choy, Henryk Michalewski, Subhodeep Moitra, Malgorzata Salawa, Vaibhav Tulsyan, and Manushree Vijayvergiya), as well as the many people who collected data, identified tasks, built products, strategized, evangelized, and helped us execute on the many facets of this agenda (Ankur Agarwal, Paige Bailey, Marc Brockschmidt, Rodrigo Damazio Bovendorp, Satish Chandra, Savinee Dancs, Matt Frazier, Alexander Frömmgen, Nimesh Ghelani, Chris Gorgolewski, Chenjie Gu, Vincent Hellendoorn, Franjo Ivančić, Marko Ivanković, Emily Johnston, Luka Kalinovcic, Lera Kharatyan, Jessica Ko, Markus Kusano, Kathy Nix, Sara Qu, Marc Rasi, Marcus Revaj, Ballie Sandhu, Michael Sloan, Tom Small, Gabriela Surita, Maxim Tabachnyk, David Tattersall, Sara Toth, Kevin Villela, Sara Wiltberger, and Donald Duo Zhao) and our extremely supportive leadership (Martín Abadi, Joelle Barral, Jeff Dean, Madhura Dudhgaonkar, Douglas Eck, Zoubin Ghahramani, Hugo Larochelle, Chandu Thekkath, and Niranjan Tulpule). Thank you!

Source: Google AI Blog


Adding Chrome Browser Cloud Management remediation actions in Splunk using Alert Actions

Introduction

Chrome is trusted by millions of business users as a secure enterprise browser. Organizations can use Chrome Browser Cloud Management to help manage Chrome browsers more effectively. Admins can use the Google Admin console to have Chrome report critical security events to third-party service providers such as Splunk® to create custom enterprise security remediation workflows.

Security remediation is the process of responding to security events that have been triggered by a system or a user. Remediation can be done manually or automatically, and it is an important part of an enterprise security program.

Why is Automated Security Remediation Important?

When a security event is identified, it is imperative to respond as soon as possible to prevent data exfiltration and to prevent the attacker from gaining a foothold in the enterprise. Organizations with mature security processes use automated remediation to improve their security posture by reducing the time it takes to respond to security events. This helps often-overburdened Security Operations Center (SOC) teams avoid alert fatigue.

Automated Security Remediation using Chrome Browser Cloud Management and Splunk

Chrome integrates with Chrome Enterprise Recommended partners such as Splunk® using Chrome Enterprise Connectors to report security events such as malware transfers, unsafe site visits, and password reuse. Other supported events can be found on our support page.

The Splunk® integration with Chrome browser allows organizations to collect, analyze, and extract insights from security events. These extended security insights into managed browsers enable SOC teams to perform better-informed automated security remediation using Splunk® Alert Actions.

Splunk Alert Actions are a great capability for automating security remediation tasks. By creating alert actions, enterprises can automate the process of identifying, prioritizing, and remediating security threats.

In Splunk®, SOC teams can use alerts to monitor for and respond to specific Chrome Browser Cloud Management events. Alerts use a saved search to look for events in real time or on a schedule and can trigger an Alert Action when search results meet specific conditions as outlined in the diagram below.
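
When an alert fires, Splunk runs the custom alert action script with the `--execute` flag and passes a JSON payload on stdin. A minimal Python skeleton for such a script might look like the following; the `device_id` field name is hypothetical and depends on the fields your Chrome events actually carry. The quarantine call itself is sketched later, in the Configuration section.

```python
import json
import sys

def main() -> None:
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        payload = json.loads(sys.stdin.read())
        config = payload.get("configuration", {})  # parameters set in the alert UI
        result = payload.get("result", {})         # fields of the triggering event
        # "device_id" is a hypothetical field name for the affected browser/device.
        device_id = result.get("device_id")
        print(f"quarantining {device_id} with config {config}", file=sys.stderr)

if __name__ == "__main__":
    main()
```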

Use Case

If a user downloads a malicious file after bypassing a Chrome “Dangerous File” message, their managed browser or managed ChromeOS device should be quarantined.

Prerequisites

Setup

  1. Install the Google Chrome Add-on for Splunk App

    Please follow the installation instructions here, based on your Splunk® installation, to install the Google Chrome Add-on for Splunk App.

  2. Setting up Chrome Browser Cloud Management and Splunk Integration

    Please follow the guide here to set up Chrome Browser Cloud Management and Splunk® integration.

  3. Setting up Chrome Browser Cloud Management API access

    To call the Chrome Browser Cloud Management API, use a service account properly configured in the Google Admin console. Create a service account (or use an existing one) and download the JSON representation of its key.

    Create a role (or use an existing one) in the Admin console with all the “Chrome Management” privileges as shown below.

    Assign the created role to the service account using the “Assign service accounts” button.

  4. Setting up Chrome Browser Cloud Management App in Splunk®

    Install the app (i.e., the Alert Action) from our GitHub page. You will notice that the Splunk App uses the directory structure below; please take some time to understand its layout.

  5. Setting up a Quarantine OU in Chrome Browser Cloud Management

    Create a “Quarantine” OU to move managed browsers into. Apply restrictive policies to this OU, which will then be applied to managed browsers and managed ChromeOS devices that are moved into it. In our case, we set the policies below for our “Quarantine” OU, called Investigate. These policies ensure that a quarantined ChromeOS device or browser can only open a limited set of approved URLs; a sketch of example policy values follows this list.
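
For illustration, a quarantine OU might combine the URLBlocklist and URLAllowlist Chrome enterprise policies roughly as follows. The policy names are real, but the values below are examples, not the ones used in this post.

```python
# Hypothetical policy values for the "Investigate" quarantine OU: block all
# navigation except a small set of approved pages.
QUARANTINE_POLICIES = {
    "URLBlocklist": ["*"],  # block everything by default
    "URLAllowlist": [       # ...except the approved URLs
        "https://support.google.com",
        "https://intranet.example.com/security-help",
    ],
}
```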

Configuration

  1. Start with a search for the Chrome Browser Cloud Management events in the Google Chrome Add-on for Splunk App. For our instance, we used the search query below to find known malicious file download events.
  2. Save the search as an alert. The alert uses the saved search to check for events. Adjust the alert type to configure how often the search runs: use a scheduled alert to check for events on a regular basis, or a real-time alert to monitor for events continuously. An alert does not have to trigger every time it generates search results; set trigger conditions to manage when the alert triggers, and customize the alert settings to match your enterprise security policies. For our example, we used a real-time alert with a per-result trigger. The setup we used is shown below.

  3. As seen in the screenshot, we have configured the Chrome Browser Cloud Management Remediation Alert Action App with:

    • The OU path of the Quarantine OU, i.e., /Investigate
    • The customer ID of the Workspace domain
    • The service account key JSON value

    A sketch of the underlying remediation call appears after the test steps below.

Test the setup

Use the testsafebrowsing website to generate sample security events to test the setup.

  1. Open the testsafebrowsing website.
  2. Under the Desktop Download Warnings section, click the link for line item 4, i.e., “Should show an "uncommon" warning, for .exe”.
  3. You will see a Dangerous Download blocked warning with two options: Discard or Keep the downloaded file. Click Keep.
  4. This triggers the alert action and moves your managed browser or managed ChromeOS device to the “Quarantine” OU (named Investigate in our example) with restricted policies.
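
For reference, the remediation call itself can be sketched as below: the script authenticates with the downloaded service account key and asks the Chrome Browser Cloud Management API to move the affected browser into the quarantine OU. Treat this as a sketch only, and verify the endpoint, scope, and request body against the current API documentation before relying on it.

```python
import json

import requests
from google.auth.transport.requests import Request
from google.oauth2 import service_account

SCOPE = "https://www.googleapis.com/auth/admin.directory.device.chromebrowsers"

def move_browser_to_ou(key_file: str, customer_id: str,
                       machine_id: str, ou_path: str = "/Investigate") -> None:
    """Move one managed browser into the quarantine OU."""
    creds = service_account.Credentials.from_service_account_file(
        key_file, scopes=[SCOPE])
    creds.refresh(Request())  # obtain an access token
    url = ("https://www.googleapis.com/admin/directory/v1.1beta1/customer/"
           f"{customer_id}/devices/chromebrowsers/moveChromeBrowsersToOu")
    body = {"org_unit_path": ou_path, "resource_ids": [machine_id]}
    resp = requests.post(url,
                         headers={"Authorization": f"Bearer {creds.token}",
                                  "Content-Type": "application/json"},
                         data=json.dumps(body))
    resp.raise_for_status()
```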

Conclusion

Security remediation is vital to any organization’s security program. In this blog we discussed configuring automated security remediation of Chrome Browser Cloud Management security events using Splunk® alert actions. This scalable approach helps protect a company from online security threats by detecting and quickly responding to high-fidelity Chrome Browser Cloud Management security events, thereby greatly reducing the time to respond.

Our team will be at the Gartner Security and Risk Management Summit in National Harbor, MD, next week. Come see us in action if you’re attending the summit.

Existing spaces organized by conversation topic will be upgraded to the new in-line threaded experience by Q4 2023

What’s changing

In 2022, we introduced in-line threading for Google Chat, and since March 2023, all newly created spaces in Google Chat have been in-line threaded by default.


As we continue to move toward a single, streamlined flow of conversation in Google Chat, all existing spaces organized by conversation topic will be upgraded to the in-line threaded experience. We anticipate that this upgrade will take place in early Q4 2023. In the meantime, existing legacy threaded spaces will continue to function as is until the upgrade is complete. 

Space organized by conversation topic


In-line threaded spaces




Ahead of the upgrade, we’ll provide more details on the Workspace Updates Blog, and Google Workspace admins, partners, and resellers will receive an email with more information about what to expect before the upgrade. 


Who’s impacted 

Admins and end users 


Why it’s important 

Google Chat continues to evolve to meet the needs of our users with new and agile features. In order to provide a consistent user experience and reduce complexity, all Google Chat spaces will use in-line threading by early Q4 2023.


In-line threading allows users to reply to any message and create a discussion separate from the main conversation. Users can also follow specific threads, whereby they’ll receive notifications for replies and @ mentions in that thread, helping to cut through clutter and stay on top of what matters most. Additionally, users will be able to take advantage of other upcoming helpful features such as quoting messages, message linking, and more. 


Additional details 

All messages sent before the migration will be retained without any data loss, and once upgraded, they will be arranged chronologically, instead of by topic. There will also be a separator titled “Begin New Topic” to indicate every time a new topic was started. 


In some cases, when people have responded to older topics, the new chronological order takes precedence. This means that messages may not appear next to the original topic, but rather according to their timestamp. When this occurs, the new response will quote the last corresponding message it is replying to, as seen in the second image below.


You’ll also see a separator between the last message sent before the migration, and a message indicating that the space has been upgraded to a space with in-line replies. Going forward, all new messages will feature in-line threaded functionality — more information on the post-migration experience will be shared via email to admins, partners, and resellers. 

Before the upgrade: conversations grouped by topic 

After the space has been upgraded to in-line threading, messages sent before the upgrade will be arranged chronologically, instead of by topic. 


Getting started 

Before the actual migration, you’ll see an FYI banner in all the spaces that will be impacted.



Availability 

  • This update impacts all Google Workspace customers 

Resources