Monthly Archives: April 2022

An update on our work to counter extremism in Singapore

For an example of a harmonious, multicultural society, look no further than Singapore, where people of different ethnicities, religious backgrounds, and languages live and work together peacefully. It’s a remarkable achievement — one of Singapore’s great strengths as a global hub for trade, travel and technology. It’s also something that all of us in Singapore have to work hard to preserve.

At Google, and YouTube, we’re committed to doing everything we can to promote and celebrate Singapore’s diversity — and to protect it from threats. Today, in collaboration with the Ministry of Culture, Community and Youth, we’re kicking off a series of workshops developed with Ministry of Funny. The aim is to help creators from local interfaith groups and religious organizations start meaningful discussions on issues of online extremism and hate, while fostering awareness, tolerance and empathy.

Participants in the workshops will learn the basics of video production, content strategy, and data analytics, as well as how to sustain an audience on YouTube. Select organizations will receive additional support in the form of grants and mentoring by four YouTube creators: Our Grandfather Story, The Daily Ketchup Podcast, itsclarityco and Overthink.

By amplifying positive voices and constructive dialogue, we believe we can help counter the impact of online extremism — building on the steps we’re already taking.

Taking strong actions against extremism

Over recent years, YouTube has made deep investments in machine learning to enable better detection and faster removal of harmful content that breaches its guidelines. Since 2019, YouTube has removed more than 2.6 million videos for violating its policies around violent extremism — as well as reducing the spread of content that comes close to violating these policies but doesn’t cross the line. YouTube is also holding itself to high standards of accountability, through a dedicated violent extremism section in the YouTube Community Guidelines Enforcement Report.

Across all Google products, we have long-standing policies that prohibit harmful content, including incitement to violence and hate speech. We’re working closely with other major technology companies, through coalitions like the Global Internet Forum to Counter Terrorism. And we’re focused on developing other technology-based solutions. For example, teams at Jigsaw have developed the Redirect Method, an open-source program which uses targeted ads and videos uploaded by people around the world to confront online radicalization.

We’re looking forward to expanding on these efforts in collaboration with the Singapore Government, Ministry of Funny, and other leaders in the YouTube ecosystem. We see firsthand the positive impact creators make all over the world every day, and with the right support, we know they can be powerful voices for tolerance and inclusion in Singapore’s diverse communities.

Google Workspace Updates Weekly Recap – April 29, 2022

New updates 

Updated rollout schedule for additional Calendar statuses in Google Chat 
We’d like to provide updated rollout information for additional Calendar statuses in Google Chat, previously announced on March 14, 2022:
  • Rollout for Rapid release domains will be complete on Wednesday, May 5, 2022. 
  • Rollout for Scheduled release domains will begin on Wednesday, May 11, 2022 and is expected to be complete by Tuesday, May 24, 2022. 


Previous announcements 

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details. 



Easily manage storage-related activity and policies through new storage management tools in the Admin console 
In the Admin console, storage-related activities can now be accessed and managed from a single source. | Learn more. 



Quick access to additional actions when composing a message in Google Chat on iOS 
When using Google Chat on iOS, you can now easily take additional actions by tapping the plus (“+”) icon next to the compose bar. You’ll see a variety of options such as: 
  • Sharing a Google Meet link 
  • Creating a meeting in Calendar 
  • Accessing Google Drive 
  • Text formatting options, and more 




Enhanced menus in Google Docs improve findability of key features on desktop 
We’re updating the menus in Google Docs to make it easier to locate the most commonly used features. In this update you’ll notice: 
  • Shortened menus for better navigation 
  • Reorganization for more intuitive feature location 
  • Prominent icons for faster recognition 



Warning banners alert users of suspicious Google Docs, Sheets, or Slides files on web 
Previously, we announced warning banners for potentially malicious or dangerous files in Google Drive. We’re extending these warnings to the file level — going forward, if you open a Google Docs, Sheets, or Slides file on the web, you’ll see these warnings. | Learn more. 


For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases).

Road tripping on Route 66

Ninety-six years ago on April 30th, one of the original highways in the U.S. Highway System was assigned its numerical designation of 66, creating what we know today as Route 66. But to say Route 66 is just a highway is a grave understatement. After all, it is the most-searched U.S. highway of all time.

One of the perks of working as a Doodler (I promise, it’s a real job) was getting to drive the 2,448-mile journey from Chicago to Los Angeles in my ‘72 Chevelle. I got to experience this captivating road trip firsthand, to create a Doodle celebrating Route 66.

This Doodle, which is essentially an animated sketchbook of various historic spots along the route, is the product of more than 100 paintings and sketches I created from the side of the road and countless U-turns. I remember being utterly lost one day, driving further and further down an old dirt road, when I finally saw an old man sitting on a lawn mower. “Is this Route 66?” I inquired. “Boy, this isn’t even Route 6!” he responded. Even the dead ends were interesting.

If this Doodle has you feeling inspired to take a trip across Route 66, we also caught up with a member of Google Maps’ Local Guides community who has some tips of his own to help you hit the road and explore.

Local tips from a Local Guide

Rhys Martin is a Level 6 Local Guide from Tulsa, Oklahoma who also serves as the President of the Oklahoma Route 66 Association. Having driven all 2,400 miles of the existing route, Rhys is passionate about adding photos and reviews to Google Maps that help raise awareness for the variety of experiences — from big cities and rural communities, to farmland, mountains, deserts, mom-and-pop motels and kitschy roadside attractions — a road trip down the historic highway provides. We asked him to share his best tips, tricks and recommendations to discover and experience his favorite spots along the route.

  • Discover local businesses along the route: By searching for something like “U.S. Route 66 restaurants” on Google Maps you can virtually explore restaurants or other businesses across all eight states along the route. This way, you can familiarize yourself with attractions, view how much certain restaurants cost, read reviews and even see popular menu items to help you choose places you want to visit.
  • Plan your road trip with Lists in Google Maps: Once you discover the places you’re interested in visiting, save them to a list that can serve as an itinerary so you can support local businesses — and help preserve history — along the route. You can even share your lists with others, or make them collaborative so you can plan together!
  • A picture is worth a thousand words: Photographing the details of a place — like the decades-old neon signage or the original menus hanging behind the counter — and sharing them through reviews on Google Maps helps capture the essence of an establishment and helps others discover places they want to visit.

While Oklahoma has the most drivable miles of Route 66, Rhys says there’s so much to see in all eight states along the route. If you’re itching to plan the perfect summer road trip, check out a list of his must-see spots across Route 66 from Illinois to California.

Architecture MAD Skills series wrap up

Posted by Manuel Vicente Vivo, Developer Relations Engineer


Now that our MAD Skills series on Architecture is complete, let’s do a quick wrap up of all the things we’ve covered in each episode!

Episode 1 — The data layer

Learn about the data layer and its two basic components: repositories and data sources. We'll also cover data immutability, error handling, threading, testing and more tricks and recommendations with Jose Alcérreca.


Episode 2 — The UI layer

Learn about the UI layer and its state. Tunji Dahunsi covers UI state representation, production and consumption all within the context of a unidirectional data flow app!


Episode 3 — Handling UI events

Learn all about UI events. I—Manuel Vivo—cover the different types of UI events, the best practices for handling them, and more!


Episode 4 — The domain layer

The Domain layer is an optional layer which sits between the UI and Data layers. Don Turner explains how the domain layer can simplify your app architecture, making it easier to understand and test.


Episode 5 — Organizing modules

Emily Kager shares a tip around organizing modules in Android apps.


Episode 6 — Entities

Garima Jain shares a tip about creating separate data models based on various Architecture layers in your project.


Q&A

Tunji Dahunsi, Miłosz Moczkowski, Yigit Boyar, and I hung out together in a live Q&A session to answer all the questions you had!

Extracting Skill-Centric State Abstractions from Value Functions

Advances in reinforcement learning (RL) for robotics have enabled robotic agents to perform increasingly complex tasks in challenging environments. Recent results show that robots can learn to fold clothes, dexterously manipulate a Rubik’s Cube, sort objects by color, navigate complex environments and walk on difficult, uneven terrain. But "short-horizon" tasks such as these, which require very little long-term planning and provide immediate failure feedback, are relatively easy to train compared to many tasks that may confront a robot in a real-world setting. Unfortunately, scaling such short-horizon skills to the abstract, long horizons of real-world tasks is difficult. For example, how would one train a robot capable of picking up objects to rearrange a room?

Hierarchical reinforcement learning (HRL), a popular way of solving this problem, has achieved some success in a variety of long-horizon RL tasks. HRL aims to solve such problems by reasoning over a bank of low-level skills, thus providing an abstraction for actions. However, the high-level planning problem can be further simplified by abstracting both states and actions. For example, consider a tabletop rearrangement task, where a robot is tasked with interacting with objects on a desk. Using recent advances in RL, imitation learning, and unsupervised skill discovery, it is possible to obtain a set of primitive manipulation skills such as opening or closing drawers, picking or placing objects, etc. However, even for the simple task of putting a block into the drawer, chaining these skills together is not straightforward. This may be attributed to a combination of (i) challenges with planning and reasoning over long horizons, and (ii) dealing with high dimensional observations while parsing the semantics and affordances of the scene, i.e., where and when the skill can be used.

In “Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning”, presented at ICLR 2022, we address the task of learning suitable state and action abstractions for long-range problems. We posit that a minimal, but complete, representation for a higher-level policy in HRL must depend on the capabilities of the skills available to it. We present a simple mechanism to obtain such a representation using skill value functions and show that such an approach improves long-horizon performance in both model-based and model-free RL and enables better zero-shot generalization.

Our method, VFS, can compose low-level primitives (left) to learn complex long-horizon behaviors (right).

Building a Value Function Space
The key insight motivating this work is that the abstract representation of actions and states is readily available from trained policies via their value functions. The notion of “value” in RL is intrinsically linked to affordances, in that the value of a state for a given skill reflects the probability of receiving a reward for successfully executing the skill. For any skill, its value function captures two key properties: 1) the preconditions and affordances of the scene, i.e., where and when the skill can be used, and 2) the outcome, which indicates whether the skill executed successfully when it was used.

Given a decision process with a finite set of k skills trained with sparse outcome rewards and their corresponding value functions, we construct an embedding space by stacking these skill value functions. This gives us an abstraction that maps a state to a k-dimensional vector, which we call the Value Function Space, or VFS for short. This representation captures functional information about the exhaustive set of interactions that the agent can have with the environment, and is thus a suitable state abstraction for downstream tasks.
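In code, the construction is simply a stack of per-skill value estimates. Here is a minimal Python sketch of the idea — the hand-coded value functions and dict-based state below are illustrative stand-ins for the learned value networks and raw observations used in the paper:

```python
def make_vfs_encoder(skill_value_fns):
    """Stack k skill value functions into a single state encoder.

    skill_value_fns: list of callables, each mapping a state to a
    scalar in [0, 1] (the probability of the skill succeeding there).
    """
    def encode(state):
        # The VFS embedding of a state is just the k skill values at it.
        return [v(state) for v in skill_value_fns]
    return encode

# Toy example with two "skills" on a dict-based tabletop state.
open_drawer_value = lambda s: 1.0 if not s["drawer_open"] else 0.0
place_in_drawer_value = lambda s: 1.0 if s["drawer_open"] and s["holding_cube"] else 0.0

encode = make_vfs_encoder([open_drawer_value, place_in_drawer_value])
print(encode({"drawer_open": True, "holding_cube": True}))  # [0.0, 1.0]
```

Two states with the same skill values map to the same VFS point, which is exactly why functionally equivalent scenes collapse together in this representation.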

Consider a toy example of the tabletop rearrangement setup discussed earlier, with the task of placing the blue object in the drawer. There are eight elementary actions in this environment. The bar plot on the right shows the values of each skill at any given time, and the graph at the bottom shows the evolution of these values over the course of the task.

Value functions corresponding to each skill (top-right; aggregated in bottom) capture functional information about the scene (top-left) and aid decision-making.

At the beginning, the values corresponding to the “Place on Counter” skill are high since the objects are already on the counter; likewise, the values corresponding to “Close Drawer” are high. Through the trajectory, when the robot picks up the blue cube, the corresponding skill value peaks. Similarly, the values corresponding to placing the objects in the drawer increase when the drawer is open and peak when the blue cube is placed inside it. All the functional information required to effect each transition and predict its outcome (success or failure) is captured by the VFS representation, and in principle, allows a high-level agent to reason over all the skills and chain them together — resulting in an effective representation of the observations.

Additionally, since VFS learns a skill-centric representation of the scene, it is robust to exogenous factors of variation, such as background distractors and appearances of task-irrelevant components of the scene. All configurations shown below are functionally equivalent — an open drawer with the blue cube in it, a red cube on the countertop, and an empty gripper — and can be interacted with identically, despite apparent differences.

The learned VFS representation can ignore task-irrelevant factors such as arm pose, distractor objects (green cube) and background appearance (brown desk).

Robotic Manipulation with VFS
This approach enables VFS to plan out complex robotic manipulation tasks. Take, for example, a simple model-based reinforcement learning (MBRL) algorithm that uses a one-step predictive model of the transition dynamics in value function space and randomly samples candidate skill sequences to select and execute the best one, in a manner similar to model-predictive control. Given a set of primitive pushing skills of the form “move Object A near Object B” and a high-level rearrangement task, we find that VFS can use MBRL to reliably find skill sequences that solve the high-level task.
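The random-shooting planner described above can be sketched in a few lines of Python. This is a simplified illustration, not the paper’s implementation: `dynamics` stands in for the learned one-step model in value function space, `reward` for the high-level task reward, and the skill names are hypothetical:

```python
import random

def plan_skill_sequence(vfs_state, dynamics, reward, skills,
                        horizon=4, n_samples=100):
    """Random-shooting planner over skills in value function space.

    dynamics(vfs_state, skill) -> predicted next vfs_state (one-step model)
    reward(vfs_state)          -> scalar task reward
    Returns the sampled skill sequence with the highest predicted return.
    """
    best_seq, best_ret = None, float("-inf")
    for _ in range(n_samples):
        # Sample a candidate sequence of skills uniformly at random.
        seq = [random.choice(skills) for _ in range(horizon)]
        s, ret = vfs_state, 0.0
        for skill in seq:
            s = dynamics(s, skill)   # roll the model forward one step
            ret += reward(s)         # accumulate predicted reward
        if ret > best_ret:
            best_seq, best_ret = seq, ret
    return best_seq
```

As in model-predictive control, a real system would execute only the first skill of the chosen sequence and then replan from the newly observed state.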

A rollout of VFS performing a tabletop rearrangement task using a robotic arm. VFS can reason over a sequence of low-level primitives to achieve the desired goal configuration.

To better understand the attributes of the environment captured by VFS, we sample the VFS-encoded observations from a large number of independent trajectories in the robotic manipulation task and project them onto a two-dimensional plane using the t-SNE technique, which is useful for visualizing clusters in high-dimensional data. These t-SNE embeddings reveal interesting patterns identified and modeled by VFS. Looking at some of these clusters closely, we find that VFS can successfully capture information about the contents (objects) in the scene and affordances (e.g., a sponge can be manipulated when held by the robot’s gripper), while ignoring distractors like the relative positions of the objects on the table and the pose of the robotic arm. While these factors are certainly important to solve the task, the low-level primitives available to the robot abstract them away and hence, make them functionally irrelevant to the high-level controller.

Visualizing the 2D t-SNE projections of VFS embeddings shows emergent clustering of equivalent configurations of the environment while ignoring task-irrelevant factors like arm pose.

Conclusions and Connections to Future Work
Value function spaces are representations built on value functions of underlying skills, enabling long-horizon reasoning and planning over skills. VFS is a compact representation that captures the affordances of the scene and task-relevant information while robustly ignoring distractors. Empirical experiments reveal that such a representation improves planning for model-based and model-free methods and enables zero-shot generalization. Going forward, this representation has the promise to continue improving along with the field of multitask reinforcement learning. The interpretability of VFS further enables integration into fields such as safe planning and grounding language models.

Acknowledgements
We thank our co-authors Sergey Levine, Ted Xiao, Alex Toshev, Peng Xu and Yao Lu for their contributions to the paper and feedback on this blog post. We also thank Tom Small for creating the informative visualizations used in this blog post.

Source: Google AI Blog


What is black and white and read all over?

Noto emoji, a new black and white emoji font with less color, may gain us more in the long run

Posted by Jennifer Daniel, Creative Director - Emoji & Expression

Seven different black and white emoji in five columns: cat, donut, chicken, flower, sheep, mouse, doll 

In 1999 — back when Snake was the best thing about your phone — there were three phone carriers in Japan. On these phones were tiny, beautiful pictures called emoji (meaning “picture” and “character” in Japanese). These 176 images were very simple — think 8-bit tech — and as a result were exquisitely abstract and tremendously useful when texting. Twenty years later, emoji are a global phenomenon. Now, our phones have fancy retina screens, and somewhere along the way an important part of what made emoji so handy was left by the wayside: their simplicity. That’s why we’ve created a new emoji font: a monochrome Noto Emoji (a black and white companion to Noto Emoji Color).

Noto Emoji works like any other font you might use: You can change any character's color, size and weight. Download it and give it a whirl.

Noto Emoji webpage

What’s old is new again

Over time, emoji have become more detailed. Instead of representing broad concepts, there has been a trend to design emoji to be hyper-realistic. This wouldn’t be a problem, except skeuomorphism’s specificity has resulted in the exclusion of other similar concepts from your keyboard. Today we have the dancer emoji … but what about other types of dance? Hula dancing? Belly dancing? Salsa dancing? Boogie woogie? By removing as much detail as possible, emoji could be more flexible, representing the idea of something instead of specifically what is in front of you (that … is what your camera is for).

Example of Noto Emoji cycling through different customizations like font color, size, and variable weights 

We also want to make sure emoji keep up with platform technology. We’ve got dark mode … we’ve got light mode … and now you can change the color of your emoji font so it can operate with the same dynamism as your operating system. Noto Emoji is also a variable font — opt for a “light” grade if it appears small, or “bold” if you want it to have some weight.

When translating our color emoji to black and white: some details can be removed, others will need to be completely redrawn. 

New designs, fewer details

To design something simple seems like it would be … well, simple … but it’s deceptively complex.

At first the approach seemed obvious — simply redraw the Noto Emoji Color designs in black and white. They are iconic, they will be legible. Done deal. Easy peasy, lemon squeezy. Not so fast. The removal of color is no trivial task. Take for example: Flags.

Four flags in color — Sweden, Denmark, USA, Brazil — then four flags in black and white: SE, DK, US, BR.

You can't simply convert flags into black and white. You wouldn't be able to tell the difference between Finland and Sweden. You could redraw the flags, but that puts them at risk of being incorrect. Instead, we leveraged ISO country codes. These sequences of letters are unique and represent each country. As a result, black and white flags have a much more contemporary aesthetic — kind of like bumper stickers.
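These two-letter codes are more than a design choice — they are how flag emoji are encoded in the first place: Unicode represents a flag as a pair of Regional Indicator Symbols (U+1F1E6–U+1F1FF) derived from the country’s ISO 3166-1 alpha-2 code. A small Python sketch of that mapping:

```python
def flag_emoji(iso_code):
    """Build a flag emoji from a two-letter ISO 3166-1 alpha-2 code.

    Each ASCII letter A-Z maps to a Regional Indicator Symbol by a
    fixed offset: 'A' -> U+1F1E6, 'B' -> U+1F1E7, and so on.
    """
    return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in iso_code.upper())

print(flag_emoji("SE"))  # 🇸🇪  (the Swedish flag)
```

A font like Noto Emoji sees the two regional indicator characters together and can choose how to render the sequence — as a pictorial flag or, in the monochrome design, as the letters themselves.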

Let's also take a look at the process involved in redesigning the people emoji. For some characters, color is baked into the concept (like skin tone or hair color). It simply didn’t look right to replace color with hash marks or polka dots. And that, my dear, is how the blobs came back. Say hello (again) to our little friends.

Early sketch as we explored black and white designs

Likable. Nostalgic. Relatable without maintaining a distinction between genders. Google’s blob emoji were really something special. Cute, squishy, and remarkably friendly. We were able to bring back a little bit of what made them special while simultaneously discarding the parts that weren’t working. Most notably, the blobs’ facial expressions were wildly inconsistent, but that was very easily fixed in black and white mode. It’s important that emoji work cross-platform. The real world is not black and white, but in emoji land we can finally have our favorite little dancer back.

So here we are today, dancing into the future with our favorite new emoji font. We can't wait to see how you use it. Visit Google Fonts to download it or embed it on your website. Happy emoji-ing!

Ease back into your office routine with Google

As many people start returning to the office, we know there’s a lot to (re)figure out — like what to wear on the first day back, how long your commute will take and how to stay productive. So we’re sharing some tips for getting back into the office groove with a little help from Google products.

Rebuild a routine

Google Assistant Routines can help you automate tasks so you have less to do and think about before you head to work. Just say "Hey Google, good morning" and your Assistant can share news, weather or traffic updates, tell you what’s on your calendar, and even get your smart coffee maker started on your morning brew. You can create a Routine based on a specific schedule or when the sun rises or sets every day.

Commute with confidence

Whether you usually hop on public transit, get behind the wheel or hit the pavement, your commute may have changed since the pandemic — or, like me, you might have just forgotten how long it takes. Check Google Maps to find the ideal time to commute and the greenest route for an eco-friendlier way to get to work.

Trying to get to the office by a certain time? Set the time you’re departing or want to arrive by to see how long it’ll take you to get to your destination (and to avoid getting stuck in traffic). The “Leave on Time” feature in Google Assistant Routines can also remind you when to leave, giving you the extra nudge to head out the door.

Find your new food spot

Once you get there, Google Maps can help you find the best (and most efficient) lunch options near your office.

Use Maps’ popular times and live busyness information to see when restaurants are most crowded and which spots are likely to seat you immediately. To save even more time, you can also scan popular dishes and photos on the restaurant’s Business Profile in advance.

If you’re getting takeout, no need to miss a meeting waiting around for your delivery in your office lobby or at the restaurant. Live takeout and delivery status information lets you see the expected wait time, delivery fee and status of your order right from the Maps app — so you can make the most of your workday.

A phone screen shows the arrival time of a food delivery for a restaurant through Google Maps.

Style comfortably

Heading back to the office but not ready to dust off your work clothes? You’re not alone. In fact, “how to style sweatpants” and “work-appropriate leggings” have both been trending on Google.

Search on Google Shopping and filter by style, like joggers or leggings, to find your own office-ready sweats. Pair that with “comfortable shoes for work,” currently the most-searched shoe query, and you’ll find the perfect blend of your work-from-home and office styles.

Meanwhile, this season’s hottest work accessories are right at your fingertips. Nails are in the top-five fashion searches for back-to-the-office shopping. Check out the manicure options yourself on Google Shopping.

Beta Channel Update for ChromeOS

The Beta channel is being updated to 102.0.5005.22 (Platform version: 14695.25.0) for most ChromeOS devices.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Cole Brown,
Google ChromeOS

Visualizing Google Cloud with 101 illustrated references

Let’s say you make cat posters, and you want to sell them online. You can create a website, but you need to host it online so people can access it. A server hosts the code that lets customers select which cat poster they want, and then buy it. Another server hosts the database of your inventory, which tells you which posters are available to purchase, and it also hosts the image files to show customers pictures of the posters.

Now imagine your posters go viral online and are incredibly popular. Everyone is interested in going to your website — and that means you need more servers to keep your website up and running. And those servers can’t just be your own computer — imagine what happens if you have a power outage, or your computer crashes.

That’s where the cloud comes in — hosting your website on the cloud lets you just focus on the cat posters. But for someone who’s not an engineer, this stuff can get confusing.

I’m a senior developer advocate at Google Cloud, and I’m also an artist. As part of my job at Google, I constantly learn new things and find new ways to share that information with other developers in the community. I’ve learned the power of visual storytelling from my art, and I recently decided to pair up my two skill sets to help explain exactly what the cloud is in a new book, “Visualizing Google Cloud.”

Though my book, which is available for preorder, is aimed at cloud engineers and architects, there are a few lessons that anyone could find useful. For example: What is the cloud? How does it work? Why do you need storage? What is a database, and what are the different types? How do you build apps? How do you analyze data? My goal with this book is to give you a visual learning path to all things cloud. I also want to contribute to a good cause: part of the book’s proceeds goes directly to a charity that fights malnutrition and supports the right to education.

Long Term Support Channel Update

LTS-96 has been updated in the LTS channel to 96.0.4664.207 (Platform Version: 14268.82.0) for most ChromeOS devices. Want to know more about Long-term Support? Click here.



This update contains multiple Security fixes, including:

1311701  High  CVE-2022-1312 Security: UAF in DumpDatabaseHandler

1283050  High  CVE-2022-1308 Heap-use-after-free in RenderViewHostImpl::ActivatePrerenderedPage

1310717  High  CVE-2022-1311 Use-after-Free on crostini::CrostiniExportImport::OpenFileDialog

1292261  High  CVE-2022-1125 Security: Heap-use-after-free in BrowserList::AddBrowser

1268541  Medium  CVE-2022-1139 Security: Another Cross-Origin Response Size Leak Via BackgroundFetch

1315901  High  CVE-2022-1364 Security: [day 0] JIT optimization issue



Giuliana Pritchard

Google Chrome OS