Stable Channel Update for ChromeOS

The Stable channel is being updated to 93.0.4577.95 (Platform version: 14092.66.0) for most Chrome OS devices. Systems will be receiving updates over the next several days.

This build contains bug fixes and security updates.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 

Geo Hsu

Google Chrome OS

YouTube Shorts Fund expands to Australia

Australian creators are now eligible for a Shorts Fund bonus! We started the YouTube Shorts Fund to reward creators who make creative and unique Shorts - and now we’re expanding eligibility to over 30 new countries, including Australia! 
First announced by Robert Kyncl, YouTube’s Chief Business Officer, in August, the US$100M fund will be distributed over 2021-2022. Each month, we’ll invite thousands of eligible creators to claim a payment from the Fund. This is the first step in our journey to build a monetisation model for Shorts on YouTube, and any creator who meets our eligibility criteria can participate. 
We’re excited about what this means for creators in Australia. Not only does the Shorts Fund reward the next generation of mobile creators for their original contributions to Shorts, it also offers them a new way to earn money and build a business on YouTube. 
YouTube has helped a growing community of creators and artists to transform their creativity into viable businesses. We laid the groundwork for this modern-day creator economy over 14 years ago when we launched the YouTube Partner Program (YPP) — a first-of-its-kind business model that shares the majority of revenue generated on the platform with creators. Along the way, we’ve continued investing in new monetisation options for creators beyond advertising, including Merchandise, Channel Memberships, Super Chats and Super Stickers. Over the last three years alone, we’ve paid more than $30 billion to creators, artists, and media companies. And in Q2 2021, we paid more to YouTube creators and partners than in any quarter in our history. 
What do How Ridiculous, Economics Explained and Saksham Magic all have in common? They’re storytellers, directors, editors, marketers, and entrepreneurs — all in one. The incredible range of talents and skills of creators is inspiring. To give creators the opportunities they need to find success, YouTube has evolved from being just a place where people upload and share videos. It’s now a destination where creators can find new audiences, connect with fans in different ways, and build growing businesses. Our shared goal with creators is to help them build robust and diversified business models that work with both their unique content and community of fans. 
Alongside the Shorts Fund, here are more ways creators can make money and build a business on YouTube: 
  • Shorts Fund 
    • The YouTube Shorts Fund, a US$100M fund distributed over 2021-2022, launches today! Each month, we'll reach out to thousands of eligible creators who can claim a payment from the Fund - creators can make anywhere from US$100 to US$10,000 based on viewership and engagement on their Shorts. The Shorts Fund is the first step in our journey to build a monetisation model for Shorts on YouTube and is not limited to just creators in YPP — any creator who meets our eligibility criteria can participate. We're also dedicated to providing funding via our Black Voices Fund.
  • Ads 
    • Ads have been at the core of creators’ revenue streams, and continue to be the main way that creators can earn money on YouTube. Creators receive the majority of the revenue generated from ads on YouTube. 
  • YouTube Premium 
    • YouTube Premium is a paid subscription option which enables members to enjoy ad-free content, background playback, downloads, and premium access to the YouTube Music app. The majority of subscription revenue goes to YouTube partners. 
  • Channel memberships 
    • With channel memberships, creators can offer exclusive perks and content to viewers who join their channel as a monthly paying member at prices set by the creator. 
  • Super Chat 
    • Fans watching livestreams and Premieres can purchase a Super Chat: a highlighted message in the chat stream that stands out from the crowd to get even more of their favorite creator’s attention. 
  • Super Thanks 
    • Now viewers can give thanks and appreciation on uploaded videos as well through Super Thanks. As an added bonus, fans will get a distinct, colourful comment to highlight the purchase, which creators can respond to. 
  • Super Stickers 
    • Another way followers can show support during livestreams and Premieres is with Super Stickers, which allows fans to purchase a fun sticker that stands out. 
  • Merchandise 
    • The merch shelf allows channels to showcase their official branded merchandise right on their watch page on YouTube. Creators can choose from 30 different retailers globally. 
  • Ticketing 
    • Music fans can learn about upcoming concert listings and with a simple click, go directly to our ticketing partners’ sites to purchase tickets. 
Every new fan that subscribes to their favourite creators’ channels, every new member that joins, every like and comment received, and every dollar earned goes into building the business ventures of tomorrow. At YouTube, the passion and ambition of our creators fuels us to continue innovating new ways to help them realise their goals, and we are committed to introducing more revenue opportunities for our creators. As creators become the next generation of media companies, we’ll continue to deliver more ways to help them do just that. 

Improving Generalization in Reinforcement Learning using Policy Similarity Embeddings

Reinforcement learning (RL) is a sequential decision-making paradigm for training intelligent agents to tackle complex tasks such as robotic locomotion, playing video games, flying stratospheric balloons and designing hardware chips. While RL agents have shown promising results in a variety of activities, it is difficult to transfer the capabilities of these agents to new tasks, even when these tasks are semantically equivalent. For example, consider a jumping task, where an agent, learning from image observations, needs to jump over an obstacle. Deep RL agents trained on a few of these tasks with varying obstacle positions struggle to successfully jump with obstacles at previously unseen locations.

Jumping task: The agent (white block), learning from pixels, needs to jump over an obstacle (gray square). The challenge is to generalize to unseen obstacle positions and floor heights in test tasks using a small number of training tasks. In a given task, the agent needs to time the jump precisely, at a specific distance from the obstacle, otherwise it will eventually hit the obstacle.
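The task’s mechanics can be captured in a few lines. The sketch below is a hypothetical, minimal recreation of the environment described above (the actual benchmark is pixel-based); the class name, constants, and reward scheme are illustrative, chosen only to show why precise jump timing matters.

```python
# A minimal, hypothetical sketch of the jumping task described above.
# The agent advances one cell per step and may jump; it fails if it
# occupies the obstacle's cell while on the ground.

class JumpingTask:
    JUMP_LENGTH = 3  # steps the agent stays airborne after jumping (assumed)

    def __init__(self, obstacle_position=10, track_length=20):
        self.obstacle_position = obstacle_position
        self.track_length = track_length
        self.reset()

    def reset(self):
        self.x = 0          # agent position on the track
        self.airborne = 0   # remaining airborne steps
        return self.x

    def step(self, action):  # action: 0 = run, 1 = jump
        if action == 1 and self.airborne == 0:
            self.airborne = self.JUMP_LENGTH
        self.x += 1
        if self.airborne > 0:
            self.airborne -= 1
        done = self.x >= self.track_length
        failed = (self.x == self.obstacle_position and self.airborne == 0)
        reward = 1.0 if (done and not failed) else 0.0
        return self.x, reward, done or failed
```

With these (assumed) dynamics, the only successful policy jumps exactly `JUMP_LENGTH - 1` cells before the obstacle; jumping one cell earlier or later lands the agent on the obstacle, which is the timing sensitivity the caption describes.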

In “Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning”, presented as a spotlight at ICLR 2021, we incorporate the inherent sequential structure in RL into the representation learning process to enhance generalization in unseen tasks. This is orthogonal to the predominant approaches before this work, which were typically adapted from supervised learning, and, as such, largely ignore this sequential aspect. Our approach exploits the fact that an agent, when operating in tasks with similar underlying mechanics, exhibits at least short sequences of behaviors that are similar across these tasks.

Prior work on generalization was typically adapted from supervised learning and revolved around enhancing the learning process itself. Such approaches rarely exploit the sequential structure of RL, such as the similarity of actions across temporally related observations.

Our approach trains the agent to learn a representation in which states are close when the agent’s optimal behavior in these states, and in the states that follow, is similar. This notion of proximity, which we call behavioral similarity, generalizes to observations across different tasks. To measure behavioral similarity between states across various tasks (e.g., distinct obstacle positions in the jumping task), we introduce the policy similarity metric (PSM), a theoretically motivated state-similarity metric inspired by bisimulation. For example, the image below shows that the agent’s future actions in two visually different states are the same, making these states similar according to PSM.
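In the deterministic setting, PSM satisfies the recursion d(x, y) = DIST(π*(x), π*(y)) + γ·d(x′, y′), where DIST compares the optimal actions at the two states and x′, y′ are their successors. The sketch below evaluates this recursion for two tasks represented as aligned sequences of optimal actions; it is a simplified illustration, not the paper’s full formulation (which handles stochastic policies and dynamics via an optimal-transport term), and all names are illustrative.

```python
def psm(actions_x, actions_y, gamma=0.99):
    """Policy similarity between two aligned optimal trajectories (a sketch).
    actions_x, actions_y: sequences of optimal actions; DIST is a 0/1
    action mismatch. Returns the discounted sum of disagreements."""
    # Pad the shorter trajectory so both terminate together.
    n = max(len(actions_x), len(actions_y))
    ax = list(actions_x) + [None] * (n - len(actions_x))
    ay = list(actions_y) + [None] * (n - len(actions_y))
    d = 0.0
    # Accumulate discounted action disagreements from the end backwards,
    # which evaluates d(x, y) = DIST + gamma * d(x', y') in closed form.
    for a, b in zip(reversed(ax), reversed(ay)):
        dist = 0.0 if a == b else 1.0
        d = dist + gamma * d
    return d
```

Identical behavior yields distance 0; disagreements late in the trajectory are discounted relative to disagreements at the current state, matching the intuition that near-term behavior matters most.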

Understanding behavioral similarity. The agent (blue icon) needs to obtain the reward while maintaining distance from danger. Even though the initial states are visually different, they are similar in terms of their optimal behavior at current states as well as future states following the current state. Policy similarity metric (PSM) assigns high similarity to such behaviorally similar states and low similarity to dissimilar states.

To enhance generalization, our approach learns state embeddings — neural-network–based representations of task states — that bring behaviorally similar states together (such as in the figure above) while pushing behaviorally dissimilar states apart. To do so, we present contrastive metric embeddings (CMEs), which harness the benefits of contrastive learning for learning representations based on a state-similarity metric. We instantiate contrastive embeddings with the policy similarity metric (PSM) to learn policy similarity embeddings (PSEs). PSEs assign similar representations to states with similar behavior at both those states and future states, such as the two initial states shown in the image above.
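A simplified version of such a loss can be written down directly. The sketch below is a loose approximation of CMEs, not the paper’s exact objective: it converts PSM distances into soft similarity targets Γ(x, y) = exp(−d(x, y)/β) and uses them to weight a standard softmax contrastive loss over embedding similarities. All parameter names and the specific weighting scheme are illustrative assumptions.

```python
import numpy as np

def cme_loss(embeddings_x, embeddings_y, metric, beta=1.0, temperature=0.1):
    """Contrastive loss weighted by metric-based similarity (a sketch).
    embeddings_x, embeddings_y: [n, dim] L2-normalized state embeddings
    from two tasks. metric: [n, n] pairwise PSM distances d(x_i, y_j)."""
    sim_targets = np.exp(-metric / beta)  # Gamma(x, y) in (0, 1]
    logits = embeddings_x @ embeddings_y.T / temperature
    # Row-wise log-softmax over candidate matches in the other task.
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    # Pull each x_i toward its behaviorally closest y (highest Gamma),
    # treating the remaining states in the row as negatives.
    pos = np.argmax(sim_targets, axis=1)
    rows = np.arange(len(pos))
    return -np.mean(sim_targets[rows, pos] * log_probs[rows, pos])
```

When the embeddings of behaviorally similar state pairs already align, the loss is near zero; when behaviorally similar pairs are embedded far apart, the loss is large, pushing the representation toward the structure PSM prescribes.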

As shown in the results below, PSEs considerably enhance generalization on the jumping task from pixels mentioned earlier, outperforming prior methods.

Method                       “Wide”        “Narrow”      “Random”
Regularization               17.2 (2.2)    10.2 (4.6)     9.3 (5.4)
PSEs                         33.6 (10.0)    9.3 (5.3)    37.7 (10.4)
Data Augmentation            50.7 (24.2)   33.7 (11.8)   71.3 (15.6)
Data Aug. + Bisimulation     41.4 (17.6)   17.4 (6.7)    33.4 (15.6)
Data Aug. + PSEs             87.0 (10.1)   52.4 (5.8)    83.4 (10.1)
Jumping Task Results: Percentage (%) of test tasks solved by different methods without and with data augmentation. The “wide”, “narrow”, and “random” grids are configurations shown in the figure below containing 18 training tasks and 268 test tasks. We report average performance across 100 runs with different random initializations, with standard deviation in parentheses.
Jumping Task Grid Configurations: Visualization of average performance of PSEs with data augmentation across different configurations. For each grid configuration, the floor height varies along the y-axis (11 heights) while the obstacle position varies along the x-axis (26 locations). The red letter T indicates the training tasks. Beige tiles are test tasks solved by PSEs with data augmentation; black tiles are unsolved tasks.

We also visualize the representations learned by PSEs and baseline methods by projecting them to 2D points with UMAP, a popular visualization technique for high dimensional data. As shown by the visualization, PSEs cluster behaviorally-similar states together and dissimilar states apart, unlike prior methods. Furthermore, PSEs partition the states into two sets: (1) all states before the jump and (2) states where actions do not affect the outcome (states after jump).
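For readers who want to reproduce this kind of plot, the workflow is simply: collect the learned embeddings, reduce them to 2D, and scatter-plot them colored by task. The paper uses UMAP (available as the umap-learn package); the dependency-free sketch below substitutes a plain PCA projection to illustrate the same pipeline, which is an assumption for illustration rather than the method used in the paper.

```python
import numpy as np

def project_2d(embeddings):
    """Project high-dimensional state embeddings to 2D for visualization.
    A PCA stand-in for UMAP: center the data and project onto the top-2
    principal directions obtained from its SVD."""
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```

The resulting 2D points can then be scattered with a per-task color, as in the figure below; UMAP would replace `project_2d` with `umap.UMAP(n_components=2).fit_transform(embeddings)`.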

Visualizing learned representations. (a) Optimal trajectories on the jumping task (visualized as coloured blocks) with varying obstacle positions. Points with the same number label correspond to the same distance of the agent from the obstacle, the underlying optimal invariant feature across various jumping tasks. (b-d) We visualize the hidden representations using UMAP, where the colors of the points indicate the tasks of the corresponding observations. (b) PSEs capture the correct invariant feature, as can be seen from points with the same number label being clustered together. That is, after the jump action (numbered block 2), all other actions (non-numbered blocks) are similar, as shown by the overlapping curve. In contrast to PSEs, baselines including (c) l2-loss embeddings (instead of contrastive loss) and (d) reward-based bisimulation metrics do not put behaviorally similar states with similar number labels together. Poor generalization for (c, d) is likely due to states with similar optimal behavior ending up with distant embeddings.

Conclusion
Overall, this work shows the benefits of exploiting the inherent structure in RL for learning effective representations. Specifically, this work advances generalization in RL by two contributions: the policy similarity metric and contrastive metric embeddings. PSEs combine these two ideas to enhance generalization. Exciting avenues for future work include finding better ways for defining behavior similarity and leveraging this structure for representation learning.

Acknowledgements
This is a joint work with Pablo Samuel Castro, Marlos C. Machado and Marc G. Bellemare. We would also like to thank David Ha, Ankit Anand, Alex Irpan, Rico Jonschkowski, Richard Song, Ofir Nachum, Dale Schuurmans, Aleksandra Faust and Dibya Ghosh for their insightful comments on this work.

Source: Google AI Blog




Mark your calendars: Android Dev Summit, Chrome Dev Summit and Firebase Summit are coming your way in a few weeks!

Posted by the Google Developer Team

Developers: it’s time to start marking your calendars. We’re hard at work on a busy slate of summits coming your way in just a few weeks. Here’s a quick rundown of three summits we just announced this week:

  • Android Dev Summit: October 27-28
  • Chrome Dev Summit: November 3
  • Firebase Summit: November 10

Android Dev Summit is back, October 27-28

Directly from the team who builds Android, the Android Dev Summit returns this year on October 27-28. Join us to hear about the latest updates in Android development, centered on this year’s theme: excellent apps, across devices. We have over 30 sessions on a range of technical Android development topics. Plus, we’ve assembled the team that builds Android to get your burning #AskAndroid questions answered live. Interested in learning more? Be sure to sign up for updates through our Android newsletter here.

Discover, Connect, Inspire at Chrome Dev Summit 2021

The countdown to Chrome Dev Summit 2021 is on — and we can’t wait to share what we have in store. We’ll kick things off on November 3 by sharing the latest product updates in our keynote and hosting a live ask me anything (AMA) with Chrome leaders. You’ll also have the chance to chat live with Googlers and developers around the world, participate in workshops with industry experts, attend interactive learning lounges to consult with engineers in a group setting, and receive personalized support during one-on-one office hours. Everyone can tune into the keynote and AMA, but space is limited for the workshops, office hours, and learning lounges. Request an invite to secure your spot — we’ll see you on November 3!

And follow the Firebase Twitter channel for more updates on Firebase Summit, which will be coming to you on November 10!

How AI is making information more useful

Today, there’s more information accessible at people’s fingertips than at any point in human history. And advances in artificial intelligence will radically transform the way we use that information, with the ability to uncover new insights that can help us both in our daily lives and in the ways we are able to tackle complex global challenges.


At our Search On livestream event today, we shared how we’re bringing the latest in AI to Google’s products, giving people new ways to search and explore information in more natural and intuitive ways.


Making multimodal search possible with MUM
Earlier this year at Google I/O, we announced we’ve reached a critical milestone for understanding information with Multitask Unified Model, or MUM for short.


We’ve been experimenting with using MUM’s capabilities to make our products more helpful and enable entirely new ways to search. Today, we’re sharing an early look at what will be possible with MUM.


In the coming months, we’ll introduce a new way to search visually, with the ability to ask questions about what you see. Here are a couple of examples of what will be possible with MUM.
With this new capability, you can tap on the Lens icon when you’re looking at a picture of a shirt, and ask Google to find you the same pattern — but on another article of clothing, like socks. This helps when you’re looking for something that might be difficult to describe accurately with words alone. You could type “white floral Victorian socks,” but you might not find the exact pattern you’re looking for. By combining images and text into a single query, we’re making it easier to search visually and express your questions in more natural ways.
Some questions are even trickier: Your bike has a broken thingamajig, and you need some guidance on how to fix it. Instead of poring over catalogs of parts and then looking for a tutorial, the point-and-ask mode of searching will make it easier to find the exact moment in a video that can help.


Helping you explore with a redesigned Search page
We’re also announcing how we’re applying AI advances like MUM to redesign Google Search. These new features are the latest steps we’re taking to make searching more natural and intuitive.


First, we’re making it easier to explore and understand new topics with “Things to know.” Let’s say you want to decorate your apartment, and you’re interested in learning more about creating acrylic paintings.
If you search for “acrylic painting,” Google understands how people typically explore this topic, and shows the aspects people are likely to look at first. For example, we can identify more than 350 topics related to acrylic painting, and help you find the right path to take.


We’ll be launching this feature in the coming months. In the future, MUM will unlock deeper insights you might not have known to search for — like “how to make acrylic paintings with household items” — and connect you with content on the web that you wouldn’t have otherwise found.
Second, to help you further explore ideas, we’re making it easy to zoom in and out of a topic with new features to refine and broaden searches.


In this case, you can learn more about specific techniques, like puddle pouring, or art classes you can take. You can also broaden your search to see other related topics, like other painting methods and famous painters. These features will launch in the coming months.
Third, we’re making it easier to find visual inspiration with a newly designed, browsable results page. If puddle pouring caught your eye, just search for “pour painting ideas” to see a visually rich page full of ideas from across the web, with articles, images, videos and more that you can easily scroll through.

This new visual results page is designed for searches that are looking for inspiration, like “Halloween decorating ideas” or “indoor vertical garden ideas,” and you can try it today.

Get more from videos
We already use advanced AI systems to identify key moments in videos, like the winning shot in a basketball game, or steps in a recipe. Today, we’re taking this a step further, introducing a new experience that identifies related topics in a video, with links to easily dig deeper and learn more.
Using MUM, we can even show related topics that aren’t explicitly mentioned in the video, based on our advanced understanding of information in the video. In this example, while the video doesn’t say the words “macaroni penguin’s life story,” our systems understand that topics contained in the video relate to this topic, like how macaroni penguins find their family members and navigate predators. The first version of this feature will roll out in the coming weeks, and we’ll add more visual enhancements in the coming months.


Across all these MUM experiences, we look forward to helping people discover more web pages, videos, images and ideas that they may not have come across or otherwise searched for.

A more helpful Google
The updates we’re announcing today don’t end with MUM, though. We’re also making it easier to shop from the widest range of merchants, big and small, no matter what you’re looking for. And we’re helping people better evaluate the credibility of information they find online. Plus, for the moments that matter most, we’re finding new ways to help people get access to information and insights.


All this work helps not only people around the world, but also creators, publishers and businesses. Every day, we send visitors to well over 100 million different websites, and every month, Google connects people with more than 120 million businesses that don’t have websites by enabling phone calls, driving directions and local foot traffic.


As we continue to build more useful products and push the boundaries of what it means to search, we look forward to helping people find the answers they’re looking for, and inspiring more questions along the way.



Posted by Prabhakar Raghavan, Senior Vice President

How 5 cities plan to use Tree Canopy to fight climate change

Planting trees in cities helps provide shade, lower temperatures and contribute to cleaner air — all of which are huge benefits when it comes to adapting to the effects of climate change. That’s why we’re expanding our Environmental Insights Explorer Tree Canopy Insights to more than 100 cities around the world next year, helping local governments fight climate change. We chatted with city officials in Los Angeles, Louisville, Chicago, Austin and Miami to learn more about how they plan to use Tree Canopy Insights to build thriving, sustainable cities in 2021 and beyond.

Los Angeles

An image showing tree canopy coverage in Los Angeles

Tree canopy coverage in Los Angeles

Los Angeles was the first city to pilot Tree Canopy Insights. Since then, it’s become an essential part of the city’s goal to increase tree canopy coverage by 50% by 2028 in the areas of the city with the highest need. The city is working to plant 90,000 trees this year, and Tree Canopy Insights helps them prioritize which neighborhoods need tree shade the most.

Rachel Malarich, Los Angeles’ City Forest Officer, and her team use Tree Canopy Insights alongside their inventory system to look at canopy acreage projections, current canopy cover and temperatures. The land use types within the tool allow them to consider the type of outreach needed and the opportunities that exist in a given neighborhood. Most importantly, it helps Rachel and her team know which program initiatives are working and which aren’t.

“Tree Canopy Insights’ ability to give us timely feedback allows me to have data to make arguments for changes to the City’s policies and procedures, as well as potentially see the impact of different outreach activities going forward.” - Rachel Malarich, Los Angeles City Forest Officer

Louisville


An image showing tree canopy coverage in Louisville

Tree canopy coverage in Louisville

Similar to other cities, Louisville officials found that monitoring tree coverage on their own was hugely expensive and time intensive. Sometimes it took years to get the accurate, up-to-date data needed to make decisions. 

With Tree Canopy Insights, they’ve been able to glean actionable insights about tree cover faster. In just a few weeks, they pinpointed that the west side of town was losing tree shade at an unprecedented rate and jump-started a plan to plant more trees in the area.

“Planting trees is one of the simplest ways we can reduce the impacts and slow the progress of climate change on our city. With support from Google’s Tree Canopy Insights, Louisville can enhance its ongoing surveillance of hot spots and heat islands and understand the impact of land use and development patterns on tree canopy coverage.“ – Louisville Mayor Greg Fischer

Austin

An image showing tree canopy coverage in Austin

Tree canopy coverage in Austin

Austin’s summers are hot with the heat regularly reaching over 90 degrees. Using Tree Canopy Insights, Marc Coudert, an environmental program manager for the city, noticed a troubling trend: ambient temperatures were higher in the eastern part of the city, known as the Eastern Crescent. With these insights, Marc and the City’s forestry team developed Austin’s Community Tree Priority Map and doubled down on planting trees in neighborhoods in the Eastern Crescent to make sure there was equitable tree canopy coverage across the city. 

“At the city of Austin, we’re committed to making data-backed decisions that bring equity to all of our communities. Google’s Tree Canopy Insights empowers us to do exactly that.” - Austin Mayor Steve Adler

Chicago

An image showing tree canopy coverage in Chicago

Tree canopy coverage in Chicago

Chicago’s Department of Public Health understands that planting trees is an essential part of promoting health and racial equity. After all, a lack of trees can be associated with chronic diseases like asthma, heart disease and mental health conditions. With Tree Canopy Insights, the department discovered that their hottest neighborhoods are often also the most disadvantaged — making these communities extremely vulnerable. With the use of this tool, the City of Chicago is committed to focusing their tree planting efforts specifically on these high-risk areas. 

"Trees not only provide our city with shade, green spaces and beauty, but they are also precious resources that produce clean air — making them key to shaping our sustainable future. Through this partnership with Google, our sustainability and public health teams will have access to real-time insights on our tree coverage that will inform how we develop and execute our equitable approach to building a better Chicago landscape. I look forward to seeing how this technology uses our city's natural resources to benefit all of our residents."  - Chicago Mayor Lori E. Lightfoot.

Miami

An image showing tree canopy coverage in Miami

Tree canopy coverage in Miami

Miami gets over 60 inches of rain per year, leading to potentially devastating effects from flooding and infrastructure damage. To address this, the city recently launched their Stormwater Master Plan. The multi-year initiative has already resulted in over 4,000 trees planted, translating to an additional 400,000 gallons of water absorption capacity per day. Moving forward, the city plans to use Tree Canopy Insights to evolve and improve this plan.

“Google’s Tree Canopy Insights is going to help us build on the progress of our Stormwater Master Plan in smarter, more effective ways. We believe that every city needs to be a ‘tech city,’ and leveraging Google’s AI capabilities to improve every Miamian’s quality of life is exactly what I mean by that.” – Miami Mayor Francis Suarez

If you’re part of a local government and think Tree Canopy Insights could help your community, please get in touch with our team by filling out this form.
