Last year, we introduced stronger safeguards around sensitive actions taken in your Google Workspace accounts. We’re extending these protections to sensitive actions taken in Gmail, specifically actions related to:
Filters: creating a new filter, editing an existing filter, or importing filters.
Forwarding: adding a new forwarding address from the Forwarding and POP/IMAP settings.
IMAP access: enabling IMAP access from the settings. (Workspace admins control whether this setting is visible to end users.)
When these actions are taken, Google will evaluate the session attempting the action and, if it’s deemed risky, challenge the user with a “Verify it’s you” prompt. Through a second, trusted factor, such as a 2-step verification code, users can confirm the validity of the action. If a verification challenge is failed or not completed, users are sent a “Critical security alert” notification on trusted devices.
If a risky action is taken, you'll be prompted with a "Verify it's you" challenge.
Additional details
Note that this feature only supports users who use Google as their identity provider and actions taken within Google products. SAML users are not supported at this time. See below for more information.
End users: There is no end user setting for this feature; you'll see “Verify it’s you” challenges if an account action is deemed risky. We recommend enabling 2-step verification if you haven’t already.
Rollout pace
Rapid Release domains: Gradual rollout (up to 15 days for feature visibility) starting on August 23, 2023
Scheduled Release domains: Full rollout (1-3 days for feature visibility) starting on September 6, 2023
Availability
Available to all Google Workspace customers and users with personal Google Accounts
The Beta channel has been updated to 117.0.5938.22 for Windows, Mac and Linux.
A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Earlier this year, we added a new feature that allows multiple people to present a Google Slides presentation together in Meet. Starting today, co-presenters are now also able to view speaker notes.
Who’s impacted
End users
Why it’s important
Primary and co-presenters can now read from the same speaker notes while engaging with their audience during a presentation. This allows everyone to present with greater confidence and reduces context switching between Meet and Slides.
Additional details
This feature requires a computer with a Google Chrome or Edge browser.
Getting started
Admins: There is no admin control for this feature.
End users:
As the main presenter:
To start a presentation, select “present a tab” in Meet > “start slideshow”.
To add a co-presenter, select "Add co-presenter" from the drop-down menu in the people panel.
To view speaker notes, click the speaker notes button in the controls at the bottom corner of the presentation.
As a co-presenter:
You’ll be notified that the primary presenter assigned you as a co-presenter.
You’ll get control over the Slides presentation, allowing you to navigate the deck for everyone in the meeting.
To view speaker notes, click the speaker notes button in the controls at the bottom corner of the presentation.
Note: Co-presenters must have edit access to the Slides presentation in order to view speaker notes.
Rollout pace
Rapid Release domains: Gradual rollout (up to 15 days for feature visibility) starting on August 23, 2023
Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting on September 5, 2023
Availability
Available to Google Workspace Business Standard, Business Plus, Enterprise Starter, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, the Teaching & Learning Upgrade, and Workspace Individual customers
Hi, everyone! We've just released Chrome 116 (116.0.5845.114) for Android: it'll become available on Google Play over the next few days.
This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.
Android releases contain the same security fixes as their corresponding Desktop release (Windows: 116.0.5845.110/.111; Mac & Linux: 116.0.5845.110), unless otherwise noted.
The Stable and Extended stable channels have been updated to 116.0.5845.110 for Mac and Linux and 116.0.5845.110/.111 for Windows, which will roll out over the coming days/weeks. A full list of changes in this build is available in the log.
Security Fixes and Rewards
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.
This update includes 5 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.
[$10000][1469542] High CVE-2023-4430: Use after free in Vulkan. Reported by Cassidy Kim (@cassidy6564) on 2023-08-02
[$3000][1469754] High CVE-2023-4429: Use after free in Loader. Reported by Anonymous on 2023-08-03
[$2000][1470477] High CVE-2023-4428: Out of bounds memory access in CSS. Reported by Francisco Alonso (@revskills) on 2023-08-06
[$NA][1470668] High CVE-2023-4427: Out of bounds memory access in V8. Reported by Sergei Glazunov of Google Project Zero on 2023-08-07
[$NA][1469348] Medium CVE-2023-4431: Out of bounds memory access in Fonts. Reported by Microsoft Security Researcher on 2023-08-01
We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.
Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
We're pleased to announce that v202308 of the Google Ad Manager API is available starting today, August 23, 2023. This release brings support for new ThirdPartyMeasurementSettings providers.
For the full list of changes, check the release notes. Feel free to contact us on the Ad Manager API forum with any API-related questions.
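For developers using the official Python client library (googleads), targeting the new version is a one-line change. Below is a minimal, hedged sketch that fetches a few line items and prints their third-party measurement settings; the thirdPartyMeasurementSettings field access is an assumption for illustration, and it presumes a configured googleads.yaml.

```python
# Minimal sketch of pointing the googleads Python client at v202308 and
# inspecting third-party measurement settings on line items.
# The thirdPartyMeasurementSettings field access is an illustrative assumption.
from googleads import ad_manager

API_VERSION = 'v202308'

client = ad_manager.AdManagerClient.LoadFromStorage()
line_item_service = client.GetService('LineItemService', version=API_VERSION)

# Build a statement that fetches a small page of line items.
statement = (ad_manager.StatementBuilder(version=API_VERSION)
             .OrderBy('id', ascending=True)
             .Limit(10))

response = line_item_service.getLineItemsByStatement(statement.ToStatement())

if 'results' in response and len(response['results']):
    for line_item in response['results']:
        # Assumed field name; absent or unset settings print as None.
        settings = getattr(line_item, 'thirdPartyMeasurementSettings', None)
        print(line_item['id'], settings)
```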
Today, the Google Identity team is announcing a beta program for developers to test our integration with the Chrome browser’s new FedCM (Federated Credential Management) API.
Audience
This update is for all Google Identity Services (GIS) web developers who rely on the Chrome browser and use:
One Tap, or
Auto Sign-In
Why FedCM
GIS currently uses third-party cookies to enable users to easily sign up and sign in to websites, making sign-in more secure for users by reducing reliance on passwords. However, as part of the Privacy Sandbox initiative, Chrome is phasing out support for third-party cookies in 2024 to protect user privacy online.
The W3C FedID community group developed the FedCM API as a new privacy-preserving alternative to third-party cookies for federated identity providers. This new FedCM solution enables Google to continue providing a secure, streamlined experience for signing up and signing in to websites via GIS.
The benefits of FedCM include:
Improved privacy. As part of its design, FedCM prevents identity providers from viewing users' activity across the web without their permission, keeping privacy central in user interactions.
High-quality user experience. FedCM enables GIS to rely on a new browser-native UI, resulting in a faster, more consistent sign-in experience for users across websites and across identity providers. This new browser-provided user experience takes inspiration from prior versions of the GIS library, making the transition easier for users and developers.
Additional browser support. We expect other browsers to support FedCM, such as Firefox and Chromium-based browsers like Edge, Opera, and Samsung Internet. We are excited to support a consistent, low-friction sign-in experience across the web.
What is changing for GIS developers
We expect to migrate developers using GIS’s One Tap and Auto Sign-In features to FedCM over the next year as a result of the Chrome browser’s plans to deprecate third-party cookies. For most developers, this migration will occur seamlessly through backwards-compatible updates to the GIS JavaScript library; the GIS JavaScript library will call the FedCM APIs behind the scenes, without any developer changes required. However, some websites may require minor changes, such as updates to custom layouts or positioning of sign-in prompts. To learn if your website may require changes, please review the migration guide article, and we encourage you to participate in the FedCM beta program.
More on the FedCM User Experience
With FedCM, GIS will continue to offer a seamless user experience, even when third-party cookies are no longer available. The new FedCM APIs have minimal changes to existing user flows and websites. The updated One Tap and Auto Sign-In user prompts are shown below:
GIS One Tap user experience using FedCM API
Auto Sign-In user experience using FedCM API
Timeline
GIS will migrate traffic to FedCM gradually beginning in November 2023, with minimal changes required for most developers. If you are using GIS One Tap or Auto Sign-in on your website, please be aware of the following timelines:
August 2023: Developers have the ability to participate in a GIS FedCM beta release. We also encourage all new apps to participate in the FedCM beta from the start.
Early 2024: We will begin a gradual transition of developers to FedCM, reaching 100% in mid-2024, to support the Chrome browser’s announced plans to block third-party cookies by default starting in the second half of 2024.
Early 2024: Chrome intends to begin scaled testing of third-party cookie blocking in advance of their aforementioned plans.
Once the Chrome browser blocks third-party cookies by default, the use of FedCM will be required for GIS One Tap to function.
Join our Beta program to prepare
We encourage all our developers to participate and test the FedCM APIs in our beta program available today, so you can prepare for these upcoming changes. To get started and learn more about the program, visit our developer site and check out the google-signin tag on Stack Overflow for technical assistance. We invite developers to share their feedback on the FedCM beta program with us at [email protected].
Hi, everyone! We've just released Chrome Stable 116 (116.0.5845.118) for iOS; it'll become available on the App Store in the next few hours.
This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.
Posted by Wenhao Yu and Fei Xia, Research Scientists, Google
Empowering end-users to interactively teach robots to perform novel tasks is a crucial capability for their successful integration into real-world applications. For example, a user may want to teach a robot dog to perform a new trick, or teach a manipulator robot how to organize a lunch box based on user preferences. The recent advancements in large language models (LLMs) pre-trained on extensive internet data have shown a promising path towards achieving this goal. Indeed, researchers have explored diverse ways of leveraging LLMs for robotics, from step-by-step planning and goal-oriented dialogue to robot-code-writing agents.
While these methods impart new modes of compositional generalization, they focus on using language to link together new behaviors from an existing library of control primitives that are either manually engineered or learned a priori. Despite having internal knowledge about robot motions, LLMs struggle to directly output low-level robot commands due to the limited availability of relevant training data. As a result, the expressiveness of these methods is bottlenecked by the breadth of the available primitives, the design of which often requires extensive expert knowledge or massive data collection.
In “Language to Rewards for Robotic Skill Synthesis”, we propose an approach to enable users to teach robots novel actions through natural language input. To do so, we leverage reward functions as an interface that bridges the gap between language and low-level robot actions. We posit that reward functions provide an ideal interface for such tasks given their richness in semantics, modularity, and interpretability. They also provide a direct connection to low-level policies through black-box optimization or reinforcement learning (RL). We developed a language-to-reward system that leverages LLMs to translate natural language user instructions into reward-specifying code and then applies MuJoCo MPC to find optimal low-level robot actions that maximize the generated reward function. We demonstrate our language-to-reward system on a variety of robotic control tasks in simulation using a quadruped robot and a dexterous manipulator robot. We further validate our method on a physical robot manipulator.
The language-to-reward system consists of two core components: (1) a Reward Translator, and (2) a Motion Controller. The Reward Translator maps natural language instructions from users to reward functions represented as Python code. The Motion Controller optimizes the given reward function using receding horizon optimization to find the optimal low-level robot actions, such as the amount of torque that should be applied to each robot motor.
LLMs cannot directly generate low-level robot actions due to the lack of such data in their pre-training datasets. We propose using reward functions to bridge the gap between language and low-level robot actions, enabling novel, complex robot motions from natural language instructions.
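To make the data flow concrete, here is a minimal Python sketch of the two-stage pipeline. The call_llm and mujoco_mpc_optimize callables are hypothetical placeholders standing in for an LLM API and the MuJoCo MPC solver; this illustrates the architecture described above, not the paper's actual implementation.

```python
# Illustrative sketch of the language-to-reward pipeline.
# `call_llm` and `mujoco_mpc_optimize` are hypothetical placeholders.
from typing import Callable, Dict


def reward_translator(instruction: str, call_llm: Callable[[str], str]) -> str:
    """Maps a natural language instruction to reward-specifying Python code."""
    # Stage 1: expand the instruction into a detailed motion description.
    motion_description = call_llm(
        f"Describe the desired robot motion in detail: {instruction}")
    # Stage 2: translate the description into code that sets reward terms.
    reward_code = call_llm(
        f"Write Python code setting reward terms for: {motion_description}")
    return reward_code


def motion_controller(reward_code: str,
                      mujoco_mpc_optimize: Callable[[str], Dict]) -> Dict:
    """Optimizes low-level actions (e.g., joint torques) for the reward."""
    return mujoco_mpc_optimize(reward_code)


def language_to_reward(instruction: str, call_llm, mujoco_mpc_optimize) -> Dict:
    """End-to-end: user instruction -> reward code -> low-level actions."""
    reward_code = reward_translator(instruction, call_llm)
    return motion_controller(reward_code, mujoco_mpc_optimize)
```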
Reward Translator: Translating user instructions to reward functions
The Reward Translator module was built with the goal of mapping natural language user instructions to reward functions. Reward tuning is highly domain-specific and requires expert knowledge, so it was not surprising to us when we found that LLMs trained on generic language datasets are unable to directly generate a reward function for specific hardware. To address this, we apply the in-context learning ability of LLMs. Furthermore, we split the Reward Translator into two sub-modules: Motion Descriptor and Reward Coder.
Motion Descriptor
First, we design a Motion Descriptor that interprets input from a user and expands it into a natural language description of the desired robot motion following a predefined template. This Motion Descriptor turns potentially ambiguous or vague user instructions into more specific and descriptive robot motions, making the reward coding task more stable. Moreover, users interact with the system through the motion description field, so this also provides a more interpretable interface for users compared to directly showing the reward function.
To create the Motion Descriptor, we use an LLM to translate the user input into a detailed description of the desired robot motion. We design prompts that guide the LLMs to output the motion description with the right amount of details and format. By translating a vague user instruction into a more detailed description, we are able to more reliably generate the reward function with our system. This idea can also be potentially applied more generally beyond robotics tasks, and is relevant to Inner-Monologue and chain-of-thought prompting.
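As an illustration, a Motion Descriptor prompt can pair a fixed description template with a few in-context examples. The template fields and wording below are assumptions for a quadruped, not the exact prompt used in the paper.

```python
# Hypothetical Motion Descriptor prompt for a quadruped.
# The template wording is illustrative, not the paper's exact prompt.
MOTION_TEMPLATE = """
[start of description]
The torso of the robot should roll by [num_degrees] degrees.
The height of the robot's center of mass should be [height] meters.
The robot should lift its [foot_name] foot to [foot_height] meters.
[end of description]
"""


def build_descriptor_prompt(user_instruction: str, examples: list[str]) -> str:
    """Builds a prompt asking the LLM to fill in the motion template."""
    few_shot = "\n\n".join(examples)  # in-context examples of filled templates
    return (
        "Describe the desired robot motion using the template below, filling "
        "in every field with a concrete value.\n"
        f"{MOTION_TEMPLATE}\n{few_shot}\n\n"
        f"Instruction: {user_instruction}\nDescription:"
    )
```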
Reward Coder
In the second stage, we use the same LLM from the Motion Descriptor for the Reward Coder, which translates the generated motion description into the reward function. Reward functions are represented as Python code to benefit from the LLMs’ knowledge of rewards, coding, and code structure.
Ideally, we would like to use an LLM to directly generate a reward function R(s, t) that maps the robot state s and time t to a scalar reward value. However, generating the correct reward function from scratch is still a challenging problem for LLMs, and correcting errors requires the user to understand the generated code in order to provide the right feedback. As such, we pre-define a set of reward terms that are commonly used for the robot of interest and allow LLMs to compose different reward terms to formulate the final reward function. To achieve this, we design a prompt that specifies the reward terms and guides the LLM to generate the correct reward function for the task.
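For intuition, the pre-defined reward terms can be exposed to the LLM as a small library of setter functions that the generated code simply calls. The function names and arguments below are hypothetical examples, not the actual API from the paper.

```python
# Hypothetical reward-term API exposed to the Reward Coder. Generated code
# composes these setters rather than writing a reward function from scratch.
_reward_terms: dict[str, tuple] = {}


def set_torso_height(target_m: float, weight: float = 1.0) -> None:
    """Penalizes deviation of the torso height from target_m."""
    _reward_terms['torso_height'] = (target_m, weight)


def set_foot_position(foot: str, target_xyz: tuple, weight: float = 1.0) -> None:
    """Penalizes deviation of the named foot's position from target_xyz."""
    _reward_terms[f'foot_{foot}'] = (target_xyz, weight)


# Example of code the Reward Coder might emit for "make the robot sit":
set_torso_height(target_m=0.15, weight=2.0)
set_foot_position('front_left', (0.3, 0.2, 0.0))
```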
The internal structure of the Reward Translator, which is tasked to map user inputs to reward functions.
Motion Controller: Translating reward functions to robot actions
The Motion Controller takes the reward function generated by the Reward Translator and synthesizes a controller that maps robot observations to low-level robot actions. To do this, we formulate the controller synthesis problem as a Markov decision process (MDP), which can be solved using different strategies, including RL, offline trajectory optimization, or model predictive control (MPC). Specifically, we use an open-source implementation based on the MuJoCo MPC (MJPC).
MJPC has demonstrated the interactive creation of diverse behaviors, such as legged locomotion, grasping, and finger-gaiting, while supporting multiple planning algorithms, such as iterative linear–quadratic–Gaussian (iLQG) and predictive sampling. More importantly, the frequent re-planning in MJPC empowers its robustness to uncertainties in the system and enables an interactive motion synthesis and correction system when combined with LLMs.
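Schematically, the receding-horizon loop re-plans at every control step, searching for the action sequence that maximizes the generated reward and executing only the first action. The plan and env callables below are placeholders for MJPC's planner (e.g., predictive sampling or iLQG) and the simulator; this is a sketch of the control pattern, not MJPC's actual interface.

```python
# Schematic receding-horizon (MPC) control loop. `plan`, `env`, and
# `total_reward` are placeholders for the planner, the simulator, and the
# reward composed from the generated reward terms.
def mpc_control_loop(env, plan, total_reward, horizon: int = 50,
                     steps: int = 1000):
    state = env.reset()
    for _ in range(steps):
        # Re-plan from the current state: find the action sequence that
        # maximizes the generated reward over the planning horizon.
        action_sequence = plan(state, total_reward, horizon)
        # Execute only the first action, then re-plan (receding horizon).
        state = env.step(action_sequence[0])
    return state
```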
Examples
Robot dog
In the first example, we apply the language-to-reward system to a simulated quadruped robot and teach it to perform various skills. For each skill, the user will provide a concise instruction to the system, which will then synthesize the robot motion by using reward functions as an intermediate interface.
Dexterous manipulator
We then apply the language-to-reward system to a dexterous manipulator robot to perform a variety of manipulation tasks. The dexterous manipulator has 27 degrees of freedom, which is very challenging to control. Many of these tasks require manipulation skills beyond grasping, making it difficult for pre-designed primitives to work. We also include an example where the user can interactively instruct the robot to place an apple inside a drawer.
Validation on real robots
We also validate the language-to-reward method using a real-world manipulation robot to perform tasks such as picking up objects and opening a drawer. To perform the optimization in Motion Controller, we use AprilTag, a fiducial marker system, and F-VLM, an open-vocabulary object detection tool, to identify the position of the table and objects being manipulated.
Conclusion
In this work, we describe a new paradigm for interfacing an LLM with a robot through reward functions, powered by a low-level model predictive control tool, MuJoCo MPC. Using reward functions as the interface enables LLMs to work in a semantically rich space that plays to their strengths, while ensuring the expressiveness of the resulting controller. To further improve the performance of the system, we propose using a structured motion description template to better extract internal knowledge about robot motions from LLMs. We demonstrate our proposed system on two simulated robot platforms and one real robot for both locomotion and manipulation tasks.
Acknowledgements
We would like to thank our co-authors Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, and Yuval Tassa for their help and support in various aspects of the project. We would also like to acknowledge Ken Caluwaerts, Kristian Hartikainen, Steven Bohez, Carolina Parada, Marc Toussaint, and the greater teams at Google DeepMind for their feedback and contributions.