Chrome Beta for Desktop Update

The Beta channel has been updated to 120.0.6099.35 for Windows, Mac and Linux.

A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Srinivas Sista
Google Chrome

Improving simulations of clouds and their effects on climate

Today’s climate models successfully capture broad global warming trends. However, because of uncertainties about processes that are small in scale yet globally important, such as clouds and ocean turbulence, these models’ predictions of upcoming climate changes are not very accurate in detail. For example, predictions of the time by which the global mean surface temperature of Earth will have warmed 2℃, relative to preindustrial times, vary by 40–50 years (a full human generation) among today’s models. As a result, we do not have the accurate and geographically granular predictions we need to plan resilient infrastructure, adapt supply chains to climate disruption, and assess the risks of climate-related hazards to vulnerable communities.

In large part this is because clouds dominate errors and uncertainties in climate predictions for the coming decades [1, 2, 3]. Clouds reflect sunlight and exert a greenhouse effect, making them crucial for regulating Earth's energy balance and mediating the response of the climate system to changes in greenhouse gas concentrations. However, they are too small in scale to be directly resolvable in today’s climate models. Current climate models resolve motions at scales of tens to a hundred kilometers, with a few pushing toward the kilometer-scale. However, the turbulent air motions that sustain, for example, the low clouds that cover large swaths of tropical oceans have scales of meters to tens of meters. Because of this wide difference in scale, climate models use empirical parameterizations of clouds, rather than simulating them directly, which result in large errors and uncertainties.

While clouds cannot be directly resolved in global climate models, their turbulent dynamics can be simulated in limited areas by using high-resolution large eddy simulations (LES). However, the high computational cost of simulating clouds with LES has inhibited broad and systematic numerical experimentation, and it has held back the generation of large datasets for training parameterization schemes to represent clouds in coarser-resolution global climate models.

In “Accelerating Large-Eddy Simulations of Clouds with Tensor Processing Units”, published in Journal of Advances in Modeling Earth Systems (JAMES), and in collaboration with a Climate Modeling Alliance (CliMA) lead who is a visiting researcher at Google, we demonstrate that Tensor Processing Units (TPUs) — application-specific integrated circuits that were originally developed for machine learning (ML) applications — can be effectively used to perform LES of clouds. We show that TPUs, in conjunction with tailored software implementations, can be used to simulate particularly computationally challenging marine stratocumulus clouds in the conditions observed during the Dynamics and Chemistry of Marine Stratocumulus (DYCOMS) field study. This successful TPU-based LES code reveals the utility of TPUs, with their large computational resources and tight interconnects, for cloud simulations.

Climate model accuracy for critical metrics, such as precipitation or the energy balance at the top of the atmosphere, has improved by roughly 10% per decade over the last 20 years. Our goal is for this research to enable a 50% reduction in climate model errors by improving their representation of clouds.


Large-eddy simulations on TPUs

In this work, we focus on stratocumulus clouds, which cover ~20% of the tropical oceans and are the most prevalent cloud type on Earth. Current climate models are not yet able to reproduce stratocumulus cloud behavior correctly, which has been one of the largest sources of error in these models. Our work will provide a much more accurate ground truth for large-scale climate models.

Our simulations of clouds on TPUs exhibit unprecedented computational throughput and scaling, making it possible, for example, to simulate stratocumulus clouds with 10× speedup over real-time evolution across areas up to about 35 × 54 km². Such domain sizes are close to the cross-sectional area of typical global climate model grid boxes. Our results open up new avenues for computational experiments, and for substantially enlarging the sample of LES available to train parameterizations of clouds for global climate models.

Rendering of the cloud evolution from a simulation of a 285 × 285 × 2 km³ stratocumulus cloud sheet. This is the largest cloud sheet of its kind ever simulated. Left: An oblique view of the cloud field with the camera cruising. Right: Top view of the cloud field with the camera gradually pulled away.

The LES code is written in TensorFlow, an open-source software platform developed by Google for ML applications. The code takes advantage of TensorFlow’s graph computation and Accelerated Linear Algebra (XLA) optimizations, which enable the full exploitation of TPU hardware, including the high-speed, low-latency inter-chip interconnects (ICI) that helped us achieve this unprecedented performance. At the same time, the TensorFlow code makes it easy to incorporate ML components directly within the physics-based fluid solver.
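To make this style of programming concrete, the sketch below shows the general idea of expressing a stencil update as a TensorFlow graph and letting XLA compile it into fused accelerator kernels. It is a minimal, hypothetical example of one explicit diffusion step with periodic boundaries, not an excerpt from the actual LES solver.

```python
import tensorflow as tf

# Minimal sketch (not the actual LES code): one explicit diffusion step on a
# 3D field, written as a TensorFlow graph so XLA can fuse the stencil math
# into efficient accelerator kernels. tf.roll imposes periodic boundaries.
@tf.function(jit_compile=True)
def diffusion_step(field, diffusivity, dt, dx):
    laplacian = (
        tf.roll(field, 1, axis=0) + tf.roll(field, -1, axis=0) +
        tf.roll(field, 1, axis=1) + tf.roll(field, -1, axis=1) +
        tf.roll(field, 1, axis=2) + tf.roll(field, -1, axis=2) -
        6.0 * field
    ) / (dx * dx)
    return field + dt * diffusivity * laplacian

# On a TPU, a function like this would typically be run under a
# tf.distribute.TPUStrategy so each core updates its own subdomain.
field = tf.random.normal([128, 128, 128])
field = diffusion_step(field, diffusivity=0.01, dt=0.1, dx=1.0)
```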

We validated the code by simulating canonical test cases for atmospheric flow solvers, such as a buoyant bubble that rises in neutral stratification and a negatively buoyant bubble that sinks and impinges on the surface. These test cases show that the TPU-based code faithfully simulates the flows, with increasingly fine turbulent details emerging as the resolution increases. The validation tests culminate in simulations of the conditions during the DYCOMS field campaign. The TPU-based code reliably reproduces the cloud fields and turbulence characteristics that aircraft observed during the campaign — a feat that is notoriously difficult to achieve for LES because of the rapid changes in temperature and other thermodynamic properties at the top of the stratocumulus decks.

One of the test cases used to validate our TPU cloud simulator. The fine structures of the density current generated by the negatively buoyant bubble impinging on the surface are much better resolved with a high-resolution grid (10 m, bottom row) than with a low-resolution grid (200 m, top row).


Outlook

With this foundation established, our next goal is to substantially enlarge existing databases of high-resolution cloud simulations that researchers building climate models can use to develop better cloud parameterizations — whether these are physics-based, ML-based, or hybrids of the two. This requires additional physical processes beyond those described in the paper; for example, radiative transfer will need to be integrated into the code. We also aim to generate data across a variety of cloud types, e.g., thunderstorm clouds.

Rendering of a thunderstorm simulation using the same simulator as the stratocumulus simulation work. Rainfall can also be observed near the ground.

This work illustrates how advances in hardware for ML can be surprisingly effective when repurposed in other research areas — in this case, climate modeling. These simulations provide detailed training data for processes such as in-cloud turbulence, which are not directly observable, yet are crucially important for climate modeling and prediction.


Acknowledgements

We would like to thank the co-authors of the paper: Sheide Chammas, Qing Wang, Matthias Ihme, and John Anderson. We’d also like to thank Carla Bromberg, Rob Carver, Fei Sha, and Tyler Russell for their insights and contributions to the work.

Source: Google AI Blog


Kramden Institute Interns: Bridging the digital divide

Since 2016, Google Fiber has partnered with Kramden Institute in Durham, North Carolina, to address digital equity needs in the state. Kramden is a digital inclusion nonprofit with a mission to provide technology tools and training to bridge the digital divide. The digital divide is the gap between those who have affordable access, skills and support to effectively engage online, and those who do not. For the past seven years, GFiber has supported Kramden’s work, funding computer distributions and digital skills training for economically disadvantaged individuals across the state.

In 2022, GFiber’s donation was directed toward stipends for internship positions at Kramden. Nadel Comper, who joined the team in February 2022, was the first intern to receive a stipend supported by GFiber’s donation. She was ultimately hired as a permanent member of the Kramden team and is now the organization’s Lead Technician. Nadel shares her intern experience:

As a part of my Applied Science degree work study requirement at Wake Tech Community College, I participated in the Kramden Institute internship program, funded by GFiber. The program director suggested Kramden Institute as a great opportunity to gain practical work experience. I researched Kramden and immediately knew it would be the perfect fit.

I enjoyed my internship at Kramden Institute from day one. Since my focus was tech support, I wanted to learn how to troubleshoot and repair devices, the process involved in data transfer, and get experience doing support calls. I got all the above and more.


My favorite part was helping the education team with digital literacy classes and clubs. Meeting and helping some of the people who directly benefited from Kramden’s mission really put into perspective how important it is for people to have access to a computer and computer training. I hear all the time from Kramden recipients how they have used the devices we provide to get new jobs, learn new skills, further their education, or help their children in school.

Recently, a woman who participated in our digital literacy class had transportation difficulties and needed assistance getting a computer and setting up her Wi-Fi connection. She had never owned a computer before and would have been at risk of losing her job. I was glad that Kramden was able to provide her with a computer and that I was able to help her get set up to work remotely. Parents have also shared with me how computers provided by Kramden have allowed their children to access digital resources and participate in STEM (Science, Technology, Engineering and Math) related activities at school, especially during the pandemic.

Now that I work full-time at Kramden Institute as a lead technician, I still get opportunities to work closely with other Kramden interns. I often get feedback from new interns about their experience, and each of them says they learned something unique from the internship. One intern who works with our technology manager (who was recently tasked with changing how we deploy new devices through our server management) had a great opportunity to learn more about server management, deployment, and even inventory management from him.

With the continued growth of Kramden Institute and support from GFiber towards bridging the digital divide, we added two program interns in 2023 who assist our Education team with data entry and computer distribution.

I am grateful to GFiber for giving me the opportunity to further my education and experience. I learned a lot during my time as an intern, and it set the stage for a bright, rewarding career.

Posted by Nadel Comper, Lead Technician


Open sourcing Project Guideline: A platform for computer vision accessibility technology

Two years ago we announced Project Guideline, a collaboration between Google Research and Guiding Eyes for the Blind that enabled people with visual impairments (e.g., blindness and low-vision) to walk, jog, and run independently. Using only a Google Pixel phone and headphones, Project Guideline leverages on-device machine learning (ML) to navigate users along outdoor paths marked with a painted line. The technology has been tested all over the world and even demonstrated during the opening ceremony at the Tokyo 2020 Paralympic Games.

Since the original announcement, we set out to improve Project Guideline by adding new features, such as obstacle detection and advanced path planning, to safely and reliably navigate users through more complex scenarios (such as sharp turns and nearby pedestrians). The early version featured simple frame-by-frame image segmentation that detected the position of the path line relative to the image frame. This was sufficient for orienting the user to the line, but provided limited information about the surrounding environment. Improving the navigation signals, such as alerts for obstacles and upcoming turns, required a much better understanding and mapping of the user’s environment. To solve these challenges, we built a platform that can be utilized for a variety of spatially aware applications in the accessibility space and beyond.

Today, we announce the open source release of Project Guideline, making it available for anyone to use to improve upon and build new accessibility experiences. The release includes source code for the core platform, an Android application, pre-trained ML models, and a 3D simulation framework.


System design

The primary use case is an Android application; however, we wanted to be able to run, test, and debug the core logic in a variety of environments in a reproducible way. This led us to design and build the system using C++ for close integration with MediaPipe and other core libraries, while still being able to integrate with Android using the Android NDK.

Under the hood, Project Guideline uses ARCore to estimate the position and orientation of the user as they navigate the course. A segmentation model, built on the DeepLabV3+ framework, processes each camera frame to generate a binary mask of the guideline (see the previous blog post for more details). Points on the segmented guideline are then projected from image-space coordinates onto a world-space ground plane using the camera pose and lens parameters (intrinsics) provided by ARCore. Since each frame contributes a different view of the line, the world-space points are aggregated over multiple frames to build a virtual mapping of the real-world guideline. The system performs piecewise curve approximation of the guideline world-space coordinates to build a spatio-temporally consistent trajectory. This allows refinement of the estimated line as the user progresses along the path.

Project Guideline builds a 2D map of the guideline, aggregating detected points in each frame (red) to build a stateful representation (blue) as the runner progresses along the path.
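As a rough sketch of the image-to-ground projection step described above (assuming a simple pinhole camera model and a flat ground plane; the function and variable names are illustrative and not part of the released Project Guideline API):

```python
import numpy as np

def project_pixels_to_ground(pixels, K, R_world_cam, t_world_cam, ground_z=0.0):
    """Cast rays through image pixels and intersect them with a flat ground
    plane at height ground_z in world coordinates (illustrative sketch).

    pixels:       (N, 2) array of (u, v) pixel coordinates on the guideline.
    K:            (3, 3) camera intrinsics matrix.
    R_world_cam:  (3, 3) rotation from camera to world frame (from the pose tracker).
    t_world_cam:  (3,)   camera position in the world frame.
    """
    ones = np.ones((pixels.shape[0], 1))
    rays_cam = (np.linalg.inv(K) @ np.hstack([pixels, ones]).T).T    # ray directions, camera frame
    rays_world = (R_world_cam @ rays_cam.T).T                        # ray directions, world frame
    # Scale each ray so it reaches the ground plane z = ground_z.
    s = (ground_z - t_world_cam[2]) / rays_world[:, 2]
    return t_world_cam + s[:, None] * rays_world
```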

A control system dynamically selects a target point on the line some distance ahead based on the user’s current position, velocity, and direction. An audio feedback signal is then given to the user to adjust their heading to coincide with the upcoming line segment. By using the runner’s velocity vector instead of camera orientation to compute the navigation signal, we eliminate noise caused by irregular camera movements common during running. We can even navigate the user back to the line while it’s out of camera view, for example if the user overshot a turn. This is possible because ARCore continues to track the pose of the camera, which can be compared to the stateful line map inferred from previous camera images.
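A simplified version of that control logic might look like the following. This is a hypothetical Python sketch rather than the actual C++ implementation; the lookahead distance and sign convention are illustrative.

```python
import numpy as np

def steering_signal(line_points, position, velocity, lookahead_m=5.0):
    """Signed heading error (radians) between the runner's velocity vector and
    the direction to a target point roughly lookahead_m ahead on the mapped line.
    Positive means the target lies counterclockwise (to the left) of the current
    heading in a standard x/y world frame. Illustrative sketch only.
    """
    # Find the mapped line point closest to the runner.
    dists = np.linalg.norm(line_points - position, axis=1)
    i = int(np.argmin(dists))
    # Walk forward along the line until the lookahead distance is covered.
    travelled, j = 0.0, i
    while j + 1 < len(line_points) and travelled < lookahead_m:
        travelled += np.linalg.norm(line_points[j + 1] - line_points[j])
        j += 1
    target = line_points[j]
    to_target = target - position
    heading = np.arctan2(velocity[1], velocity[0])   # use velocity, not camera orientation
    desired = np.arctan2(to_target[1], to_target[0])
    # Wrap the error to [-pi, pi].
    return (desired - heading + np.pi) % (2 * np.pi) - np.pi
```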

Project Guideline also includes obstacle detection and avoidance features. An ML model is used to estimate depth from single images. To train this monocular depth model, we used SANPO, a large dataset of outdoor imagery from urban, park, and suburban environments that was curated in-house. The model is capable of detecting the depth of various obstacles, including people, vehicles, posts, and more. The depth maps are converted into 3D point clouds, similar to the line segmentation process, and used to detect the presence of obstacles along the user’s path and then alert the user through an audio signal.

Using a monocular depth ML model, Project Guideline constructs a 3D point cloud of the environment to detect and alert the user of potential obstacles along the path.
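To make the geometry concrete, here is a hedged sketch of how a predicted depth map can be back-projected into a camera-frame point cloud and checked for obstacles in a corridor ahead of the user. Pinhole intrinsics are assumed, and the thresholds and names are illustrative, not values from the released code.

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project an (H, W) depth map into a camera-frame point cloud
    using pinhole intrinsics K (illustrative, not the production pipeline)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = (np.linalg.inv(K) @ pix.T).T            # rays at unit depth
    return rays * depth.reshape(-1, 1)             # scale by predicted depth

def obstacle_ahead(points, corridor_half_width=0.5, max_range=4.0, min_points=50):
    """Flag an obstacle if enough points fall inside a corridor directly ahead
    of the camera (camera looks along +z, x is lateral). Thresholds are made up."""
    in_corridor = (np.abs(points[:, 0]) < corridor_half_width) & \
                  (points[:, 2] > 0.1) & (points[:, 2] < max_range)
    return int(in_corridor.sum()) >= min_points
```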

A low-latency audio system based on the AAudio API was implemented to provide the navigational sounds and cues to the user. Several sound packs are available in Project Guideline, including a spatial sound implementation using the Resonance Audio API. The sound packs were developed by a team of sound researchers and engineers at Google who designed and tested many different sound models. The sounds use a combination of panning, pitch, and spatialization to guide the user along the line. For example, a user veering to the right may hear a beeping sound in the left ear to indicate the line is to the left, with increasing frequency for a larger course correction. If the user veers further, a high-pitched warning sound may be heard to indicate the edge of the path is approaching. In addition, a clear “stop” audio cue is always available in the event the user veers too far from the line, an anomaly is detected, or the system fails to provide a navigational signal.
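As an illustration of this kind of mapping (an assumed, simplified scheme, not the shipped sound design), a heading error such as the one computed above could be turned into a pan, a beep rate, and a warning flag like this:

```python
import numpy as np

def audio_cue(heading_error_rad, max_error_rad=np.pi / 3):
    """Map a signed heading error to a stereo pan, a beep rate, and a warning
    flag. Purely illustrative; the real sound packs were designed and tuned by
    Google's sound researchers."""
    e = np.clip(heading_error_rad / max_error_rad, -1.0, 1.0)
    pan = -np.sign(e)                    # -1 = left ear, +1 = right ear; beep on the line's side
    beep_hz = 1.0 + 4.0 * abs(e)         # beep faster for larger course corrections
    warn = abs(heading_error_rad) >= max_error_rad   # nearing the edge of the path
    return pan, beep_hz, warn
```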

Project Guideline has been built specifically for Google Pixel phones with the Google Tensor chip, which enables the optimized ML models to run on-device with higher performance and lower power consumption. This is critical for providing real-time navigation instructions to the user with minimal delay. On a Pixel 8, there is a 28x latency improvement when running the depth model on the Tensor Processing Unit (TPU) instead of the CPU, and a 9x improvement compared to the GPU.



Testing and simulation

Project Guideline includes a simulator that enables rapid testing and prototyping of the system in a virtual environment. Everything from the ML models to the audio feedback system runs natively within the simulator, giving the full Project Guideline experience without needing all the hardware and physical environment set up.

Screenshot of Project Guideline simulator.


Future direction

To move the technology forward, WearWorks has become an early adopter and teamed up with Project Guideline to integrate their patented haptic navigation experience, using haptic feedback in addition to sound to guide runners. WearWorks has been developing haptics for over 8 years, and previously empowered the first blind marathon runner to complete the NYC Marathon without sighted assistance. We hope that integrations like these will lead to new innovations and make the world a more accessible place.

The Project Guideline team is also working towards removing the painted line completely, using the latest advancements in mobile ML technology, such as the ARCore Scene Semantics API, which can identify sidewalks, buildings, and other objects in outdoor scenes. We invite the accessibility community to build upon and improve this technology while exploring new use cases in other fields.


Acknowledgements

Many people were involved in the development of Project Guideline and the technologies behind it. We’d like to thank Project Guideline team members: Dror Avalon, Phil Bayer, Ryan Burke, Lori Dooley, Song Chun Fan, Matt Hall, Amélie Jean-aimée, Dave Hawkey, Amit Pitaru, Alvin Shi, Mikhail Sirotenko, Sagar Waghmare, John Watkinson, Kimberly Wilber, Matthew Willson, Xuan Yang, Mark Zarich, Steven Clark, Jim Coursey, Josh Ellis, Tom Hoddes, Dick Lyon, Chris Mitchell, Satoru Arao, Yoojin Chung, Joe Fry, Kazuto Furuichi, Ikumi Kobayashi, Kathy Maruyama, Minh Nguyen, Alto Okamura, Yosuke Suzuki, and Bryan Tanaka. Thanks to ARCore contributors: Ryan DuToit, Abhishek Kar, and Eric Turner. Thanks to Alec Go, Jing Li, Liviu Panait, Stefano Pellegrini, Abdullah Rashwan, Lu Wang, Qifei Wang, and Fan Yang for providing ML platform support. We’d also like to thank Hartwig Adam, Tomas Izo, Rahul Sukthankar, Blaise Aguera y Arcas, and Huisheng Wang for their leadership support. Special thanks to our partners Guiding Eyes for the Blind and Achilles International.

Source: Google AI Blog


Raise your hand with gesture detection in Google Meet

What’s changing 

Until now, raising your hand to ask a question in Google Meet has been done by clicking the hand-raise icon. Starting today, you can also raise your physical hand and Meet will recognize it with gesture detection. To ensure the gesture is detected, make sure your camera is enabled and your hand is visible to the camera, away from your face and body. Gesture detection is not triggered while you are the active speaker; it resumes once you are no longer the active speaker.


Getting started

  • Admins: There is no admin control for this feature.
  • End users: This feature will be OFF by default and can be turned on by selecting More options > Reactions > Hand Raise Gesture. Visit the Help Center to learn more about raising your hand in Google Meet.

Rollout pace


Availability
  • Available for Google Workspace Business Plus, Business Standard, Enterprise Essentials, Enterprise Plus, Enterprise Standard, Enterprise Starter, Education Plus, the Teaching and Learning Upgrade customers, and Google Workspace Individual subscribers

Resources