Look to Speak launches in Ukraine

Nearly two years ago, Google launched Look to Speak, an Android app that allows people to use their eyes to select pre-written phrases and have them spoken aloud. Since then, the app has launched in 18 additional languages. Most recently, we made the app available in Ukrainian to help refugees and veterans of the war.

As a speech and language therapist working at Google, I’ve seen how technology can help people express their everyday needs, feelings and identity. To hear from someone about how Look to Speak can be particularly helpful in Ukraine where people are dealing with the injuries and side effects of war, I spoke with Oksana Lyalka, the founder and president of the Ukrainian Society for Speech and Language Therapy.

What is the situation like for veterans and refugees of the war in Ukraine who have speech and motor impairments?

Due to direct injuries and conditions caused by the war, the number of people with both motor and speech impairments is likely increasing. In addition, indirect impacts like stress and malnutrition increase the risk of strokes, which can also lead to motor and speech impairments — and access to care remains limited. For many refugees who left Ukraine, it’s also difficult to get the help they need abroad: many already have chronic impairments, and their insurance does not cover therapy for communication disorders in another country. Communication disorders are also language-specific, meaning it’s difficult to get the right help in one’s native language outside of Ukraine.

What are the specific challenges that people are facing?

They are mainly left on their own with their speech and motor impairments, for three reasons: 1) there’s a shortage of speech and language therapists; 2) even fewer of them understand what these patients deal with; and 3) therapy is costly, and not everyone has the resources to afford it, especially in wartime.

How could a tool like Look to Speak be helpful in Ukraine?

When someone has only a speech disorder, they can still write to communicate. But when there are also motor disorders like we’ve discussed, people can end up with no way to communicate. With Look to Speak, even if someone can’t communicate using their mouth, they can communicate with their eyes. This allows caregivers and others in their environment to listen and understand in new ways. Communication is a two-way process, and the Look to Speak app can act as a bridge.

The First Lady of Ukraine Olena Zelenska on the Look to Speak app:

"One of everyone’s fundamental needs is the ability to communicate and interact with those around them. For most people, it is unnoticeable and automatic, similar to breathing. However, due to various factors, a person may lose this ability and be unable to talk or use a computer, tablet, mobile phone, or other devices. Especially now, in times when the war daily multiplies the chances of finding oneself in such conditions, we as a society must unite and help each other as much as we can to overcome these terrible circumstances. One of the examples of Ukraine's responsible cooperation with world technological leaders is the localization of Google’s Look to Speak app. It helps people with motor and speech impairments to communicate using their eye movements. It is good to know that Ukrainian public organizations in health care, medical institutions, and everyone who needs it can use advanced digital solutions now adapted to the needs of Ukrainian users. I am sure that initiatives like Look to Speak will not only provide new opportunities for our citizens but will also serve as a model for other technological companies that are now supporting Ukraine."

To learn more about Look to Speak in Ukraine, watch this video in Ukrainian.

Beyond Tabula Rasa: Reincarnating Reinforcement Learning

Reinforcement learning (RL) is an area of machine learning that focuses on training intelligent agents using related experiences so they can learn to solve decision-making tasks, such as playing video games, flying stratospheric balloons, and designing hardware chips. Due to the generality of RL, the prevalent trend in RL research is to develop agents that can efficiently learn tabula rasa, that is, from scratch without using previously learned knowledge about the problem. However, in practice, tabula rasa RL systems are typically the exception rather than the norm for solving large-scale RL problems. Large-scale RL systems, such as OpenAI Five, which achieves human-level performance on Dota 2, undergo multiple design changes (e.g., algorithmic or architectural changes) during their development cycle. This modification process can last months and necessitates incorporating such changes without re-training from scratch, which would be prohibitively expensive.

Furthermore, the inefficiency of tabula rasa RL research can exclude many researchers from tackling computationally demanding problems. For example, the quintessential benchmark of training a deep RL agent on 50+ Atari 2600 games in ALE for 200M frames (the standard protocol) requires 1,000+ GPU days. As deep RL moves toward more complex and challenging problems, the computational barrier to entry in RL research will likely become even higher.

To address the inefficiencies of tabula rasa RL, we present “Reincarnating Reinforcement Learning: Reusing Prior Computation To Accelerate Progress” at NeurIPS 2022. Here, we propose an alternative approach to RL research, where prior computational work, such as learned models, policies, logged data, etc., is reused or transferred between design iterations of an RL agent or from one agent to another. While some sub-areas of RL leverage prior computation, most RL agents are still largely trained from scratch. Until now, there has been no broader effort to leverage prior computational work for the training workflow in RL research. We have also released our code and trained agents to enable researchers to build on this work.

Tabula rasa RL vs. Reincarnating RL (RRL). While tabula rasa RL focuses on learning from scratch, RRL is based on the premise of reusing prior computational work (e.g., prior learned agents) when training new agents or improving existing agents, even in the same environment. In RRL, new agents need not be trained from scratch, except for initial forays into new problems.

Why Reincarnating RL?

Reincarnating RL (RRL) is a more compute- and sample-efficient workflow than training from scratch. RRL can democratize research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. Furthermore, RRL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents, especially on problems where improving performance has real-world impact, such as balloon navigation or chip design. Finally, real-world RL use cases will likely be in scenarios where prior computational work is available (e.g., existing deployed RL policies).

RRL as an alternative research workflow. Imagine a researcher who has trained an agent A1 for some time, but now wants to experiment with better architectures or algorithms. While the tabula rasa workflow requires retraining another agent from scratch, RRL provides the more viable option of transferring the existing agent A1 to another agent and training this agent further, or simply fine-tuning A1.

While there have been some ad hoc large-scale reincarnation efforts with limited applicability (e.g., model surgery in Dota 2, policy distillation for Rubik’s Cube, PBT in AlphaStar, and RL fine-tuning of a behavior-cloned policy in AlphaGo / Minecraft), RRL has not been studied as a research problem in its own right. To this end, we argue for developing general-purpose RRL approaches as opposed to prior ad hoc solutions.


Case Study: Policy to Value Reincarnating RL

Different RRL problems can be instantiated depending on the kind of prior computational work provided. As a step towards developing broadly applicable RRL approaches, we present a case study on the setting of Policy to Value reincarnating RL (PVRL) for efficiently transferring an existing sub-optimal policy (teacher) to a standalone value-based RL agent (student). While a policy directly maps a given environment state (e.g., a game screen in Atari) to an action, value-based agents estimate the effectiveness of an action at a given state in terms of achievable future rewards, which allows them to learn from previously collected data.
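
To make this distinction concrete, here is a minimal sketch in plain Python (the class names and placeholder logic are ours, not from the paper): a policy exposes only an act() method, while a value-based student estimates a score for every action and acts greedily over those estimates.

    import numpy as np

    class TeacherPolicy:
        """A policy: maps a state directly to an action."""

        def __init__(self, num_actions: int):
            self.num_actions = num_actions

        def act(self, state: np.ndarray) -> int:
            # A trained teacher would run its network here; we pick
            # randomly as a placeholder.
            return int(np.random.randint(self.num_actions))

    class ValueStudent:
        """A value-based agent: estimates Q(s, a), the future reward
        achievable by taking action a in state s, then acts greedily."""

        def __init__(self, num_actions: int):
            self.num_actions = num_actions

        def q_values(self, state: np.ndarray) -> np.ndarray:
            # Placeholder for a learned Q-network: one value per action.
            return np.zeros(self.num_actions)

        def act(self, state: np.ndarray) -> int:
            return int(np.argmax(self.q_values(state)))

    state = np.zeros((84, 84, 4))  # e.g., a stacked Atari game screen
    print(TeacherPolicy(18).act(state), ValueStudent(18).act(state))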

For a PVRL algorithm to be broadly useful, it should satisfy the following requirements:

  • Teacher Agnostic: The student shouldn’t be constrained by the existing teacher policy’s architecture or training algorithm.
  • Weaning off the teacher: It is undesirable to maintain dependency on past suboptimal teachers for successive reincarnations.
  • Compute / Sample Efficient: Reincarnation is only useful if it is cheaper than training from scratch.

Given the PVRL algorithm requirements, we evaluate whether existing approaches, designed with closely related goals, will suffice. We find that such approaches either result in small improvements over tabula rasa RL or degrade in performance when weaning off the teacher.

To address these limitations, we introduce a simple method, QDagger, in which the agent distills knowledge from the suboptimal teacher via an imitation algorithm while simultaneously using its environment interactions for RL. We start with a deep Q-network (DQN) agent trained for 400M environment frames (a week of single-GPU training) and use it as the teacher for reincarnating student agents trained on only 10M frames (a few hours of training), where the teacher is weaned off over the first 6M frames. For benchmark evaluation, we report the interquartile mean (IQM) metric from the RLiable library. As shown below for the PVRL setting on Atari games, we find that the QDagger RRL method outperforms prior approaches.
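
To make the method concrete, here is a rough sketch of a QDagger-style objective in PyTorch. The helper names, the softmax form of the distillation term, and the linear weaning schedule are our assumptions for illustration; see the paper and released code for the exact formulation.

    import torch
    import torch.nn.functional as F

    def qdagger_loss(student_q, target_q, teacher_probs, batch, step,
                     wean_steps=6_000_000, gamma=0.99):
        """Sketch of a QDagger-style objective (hypothetical helper).

        student_q / target_q map a state batch to [B, A] Q-values;
        teacher_probs holds the teacher policy's action probabilities
        for the states in `batch`.
        """
        state, action, reward, next_state, done = batch

        # Standard TD loss, as in DQN.
        q_sa = student_q(state).gather(1, action.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            bootstrap = target_q(next_state).max(1).values
            td_target = reward + gamma * (1.0 - done) * bootstrap
        td_loss = F.smooth_l1_loss(q_sa, td_target)

        # Distillation loss: cross-entropy between the teacher's policy
        # and the student's softmax policy derived from its Q-values.
        student_log_probs = F.log_softmax(student_q(state), dim=1)
        distill_loss = -(teacher_probs * student_log_probs).sum(dim=1).mean()

        # Wean off the teacher: decay the distillation weight to zero over
        # the first `wean_steps` frames (we assume a linear schedule here).
        lam = max(0.0, 1.0 - step / wean_steps)
        return td_loss + lam * distill_loss

Decaying the distillation weight to zero ensures the student is not permanently constrained by a suboptimal teacher, per the "weaning" requirement above.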

Benchmarking PVRL algorithms on Atari, with teacher-normalized scores aggregated across 10 games. Tabula rasa DQN (–·–) obtains a normalized score of 0.4. Standard baseline approaches include kickstarting, JSRL, rehearsal, offline RL pre-training and DQfD. Among all methods, only QDagger surpasses teacher performance within 10 million frames and outperforms the teacher in 75% of the games.
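
As a point of reference, IQM is the mean of the middle 50% of all run-game scores, making it more robust to outliers than the mean and less noisy than the median. Below is a minimal sketch of computing it with the open-source rliable library, using random placeholder scores in place of real evaluation data:

    import numpy as np
    from rliable import library as rly
    from rliable import metrics

    # Placeholder data: teacher-normalized scores with shape (runs, games).
    scores = {"QDagger": np.random.uniform(0.0, 1.5, size=(5, 10))}

    # Aggregate with IQM and get stratified-bootstrap confidence intervals.
    iqm = lambda x: np.array([metrics.aggregate_iqm(x)])
    point_estimates, interval_estimates = rly.get_interval_estimates(
        scores, iqm, reps=2000)
    print(point_estimates["QDagger"], interval_estimates["QDagger"])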

Reincarnating RL in Practice

We further examine the RRL approach on the Arcade Learning Environment, a widely used deep RL benchmark. First, we take a Nature DQN agent that uses the RMSProp optimizer and fine-tune it with the Adam optimizer to create a DQN (Adam) agent. While it is possible to train a DQN (Adam) agent from scratch, we demonstrate that fine-tuning Nature DQN with the Adam optimizer matches the from-scratch performance using 40x less data and compute.
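
A minimal sketch of this kind of reincarnation via fine-tuning in PyTorch (the checkpoint path, key, and learning rate are illustrative assumptions): restore the trained agent's weights, reload its replay data, and continue training under the new optimizer instead of re-initializing.

    import torch
    import torch.nn as nn

    # The 3-layer convolutional Nature DQN architecture (84x84x4 frames).
    def build_nature_dqn(num_actions: int = 18) -> nn.Module:
        return nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    q_network = build_nature_dqn()

    # Reuse prior computation: restore the finished run's weights (the
    # replay buffer is reloaded separately) rather than re-initializing.
    checkpoint = torch.load("nature_dqn_final.pt")  # illustrative path/key
    q_network.load_state_dict(checkpoint["model"])

    # Swap RMSProp for Adam with a reduced learning rate, then keep training.
    optimizer = torch.optim.Adam(q_network.parameters(), lr=1e-5)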

Reincarnating DQN (Adam) via Fine-Tuning. The vertical separator corresponds to loading network weights and replay data for fine-tuning. Left: Tabula rasa Nature DQN nearly converges in performance after 200M environment frames. Right: Fine-tuning this Nature DQN agent using a reduced learning rate with the Adam optimizer for 20 million frames obtains similar results to DQN (Adam) trained from scratch for 400M frames.

Given the DQN (Adam) agent as a starting point, fine-tuning is restricted to the 3-layer convolutional architecture. So, we consider a more general reincarnation approach that leverages recent architectural and algorithmic advances without training from scratch. Specifically, we use QDagger to reincarnate another RL agent that uses a more advanced RL algorithm (Rainbow) and a better neural network architecture (Impala-CNN ResNet) from the fine-tuned DQN (Adam) agent.
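
Note that QDagger only needs the teacher to expose a distribution over actions (plus its replay data), which is what makes it teacher-agnostic: the student's architecture never has to match the teacher's. One plausible way to derive that distribution from a value-based teacher, assumed here for illustration, is a temperature-scaled softmax over its Q-values:

    import torch
    import torch.nn.functional as F

    def teacher_action_probs(teacher_q_network: torch.nn.Module,
                             state: torch.Tensor,
                             temperature: float = 1.0) -> torch.Tensor:
        """Turn a value-based teacher into a policy a student can imitate.

        Any teacher works, regardless of its architecture or training
        algorithm, as long as it yields per-action scores for a batch.
        """
        with torch.no_grad():
            return F.softmax(teacher_q_network(state) / temperature, dim=1)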

Reincarnating a different architecture / algorithm via QDagger. The vertical separator is the point at which we apply offline pre-training using QDagger for reincarnation. Left: Fine-tuning DQN with Adam. Right: Comparison of a tabula rasa Impala-CNN Rainbow agent (sky blue) to an Impala-CNN Rainbow agent (pink) trained using QDagger RRL from the fine-tuned DQN (Adam). The reincarnated Impala-CNN Rainbow agent consistently outperforms its scratch counterpart. Note that further fine-tuning DQN (Adam) results in diminishing returns (yellow).

Overall, these results indicate that past research could have been accelerated by incorporating an RRL approach to designing agents, instead of re-training agents from scratch. Our paper also contains results on the Balloon Learning Environment, where we demonstrate that RRL allows us to make progress on the problem of navigating stratospheric balloons using only a few hours of TPU compute, by reusing a distributed RL agent trained on TPUs for more than a month.


Discussion

Fairly comparing reincarnation approaches requires using the exact same prior computational work and workflow. Furthermore, the RRL research findings that generalize broadly would be about how effective an algorithm is given access to existing computational work; for example, we successfully applied QDagger, developed using Atari, for reincarnation on the Balloon Learning Environment. As such, we speculate that research in reincarnating RL can branch out in two directions:

  • Standardized benchmarks with open-sourced computational work: Akin to NLP and vision, where a small set of pre-trained models is typically shared, research in RRL may also converge to a small set of open-sourced computational work (e.g., pre-trained teacher policies) on a given benchmark.
  • Real-world domains: Since obtaining higher performance has real-world impact in some domains, it incentivizes the community to reuse state-of-the-art agents and try to improve their performance.

See our paper for a broader discussion on scientific comparisons, generalizability and reproducibility in RRL. Overall, we hope that this work motivates researchers to release computational work (e.g., model checkpoints) on which others could directly build. In this regard, we have open-sourced our code and trained agents with their final replay buffers. We believe that reincarnating RL can substantially accelerate research progress by building on prior computational work, as opposed to always starting from scratch.


Acknowledgements

This work was done in collaboration with Pablo Samuel Castro, Aaron Courville and Marc Bellemare. We’d like to thank Tom Small for the animated figure used in this post. We are also grateful for feedback by the anonymous NeurIPS reviewers and several members of the Google Research team, DeepMind and Mila.

Source: Google AI Blog


Expanding opportunities for Indigenous communities

November is Native American Heritage Month in the U.S. and is an opportune time to educate and raise awareness about the achievements and unique challenges that Tribal Nations face — both historically and presently — and how tribal citizens have worked together to overcome those challenges. One such example is the impact the pandemic has had on tribal economies and Native American-owned businesses.

As the President of the National Congress of American Indians (NCAI) — a long-time partner of Google — I’ve met many Native artisans and small-business owners over the past two years who once had thriving businesses but now struggle to transition to online platforms to keep their businesses afloat. According to the U.S. Census, Native American-owned businesses contribute over $35 billion to the economy and employ over 200,000 people, yet one in six businesses (16.7%) has reported complete revenue losses due to the lingering impacts of the pandemic. Now, more than ever, our businesses need adequate resources to thrive, and there is no denying that technology is helping create that pathway forward.

I’m thrilled to share that the Grow with Google Digital Coaches program, which equips businesses with robust digital skills to unlock growth opportunities, is expanding to train Native-led businesses with the help of a dedicated Digital Coach from the community. We’ll be able to further support Native-led businesses thanks to a new grant from Google.org to help NCAI strengthen digital skill training.

Headshot of Jake Foreman, a Grow with Google Indigenous Community Digital Coach

I’m honored to introduce Henry Jake Foreman as Grow with Google’s first-ever Indigenous Community Digital Coach. As a Digital Coach, Jake will empower tribal small businesses with monthly digital skills workshops, local hands-on coaching opportunities, and events for businesses to come together and learn from one another. Jake is an Absentee Shawnee citizen who resides in Albuquerque, New Mexico, and is a Program Director at New Mexico Community Capital. As a trainer for NCAI’s training program, incubated with support from Grow with Google, Jake has helped hundreds of Native American entrepreneurs in New Mexico develop skill sets to support and build their businesses. We’re excited to see Grow with Google build on the success of that collaboration as they expand the Digital Coaches program. Jake will now bring these trainings directly to tribal communities across Indian Country and partner with NCAI to host monthly national webinars beginning in 2023.

In addition, Google.org is providing a $750,000 grant to support NCAI’s institutional capacity and positioning in the Indigenous digital skilling space. NCAI will invest this directly into IT capabilities, fostering a community of learning and distilling IT best practices that contribute to the broader ecosystem. This investment builds on $1.25 million in previous grants used to support Native-owned businesses — all done in service of helping more Indigenous people achieve success and bridging Indian Country’s digital divide.

Because Native-led businesses serve as the backbone for many tribal communities, it was a very special moment to first share this news with tribal leaders, NCAI members and Native youth at our NCAI 79th Annual Convention & Marketplace earlier this week. At the conference, we also had the pleasure of hosting Jake’s first digital skills workshop as a new Digital Coach. Undoubtedly, these tailored workshops and resources will help our businesses thrive online and grow tremendously. To learn more and sign up, visit g.co/grow/digitalcoachIC.

Celebrate Native American artists in Chrome and ChromeOS

It’s Native American Heritage Month in the U.S., a time when we honor the history, traditions and contributions of Native Americans. As a citizen of the Cherokee Nation, I celebrate this month by taking time to reflect and express gratitude for my ancestors, the resilience of my tribe and other Indigenous people, and future generations carrying our tribal traditions forward.

As a product manager at Google, I’m also proud of how we’re celebrating across our products. On Google Assistant, for example, just say “Happy Native American Heritage Month” or “Give me a fact about Native American Heritage” throughout the month of November to hear a collection of historical facts and stories from the Native American community. Meanwhile, a recent Doodle on Google’s homepage celebrated the history of Stickball, a traditional sport created by Indigenous tribes.

An image of a recent Doodle on Google’s homepage with 5 abstract characters playing Stickball, a traditional sport created by Indigenous tribes.

We also commissioned five Native American artists to create a collection of themes for Chromebooks and Chrome browser. This collection has a special meaning to me because it showcases important traditions and reminds me of home. Richard D. York’s piece “ᎤᎧᏖᎾ (Uktena, or Horned Serpent)” in particular brings me back to my childhood listening to the stories of Uktena and other tales from my elders. A more solemn work, “A Lot Meant,” reminded me of growing up in Oklahoma and how historical policies like allotment impacted my family and so many others.

Now available globally, these themes reflect the unique experiences and identities of each artist.

To apply one of these themes (or others from Black, Latino and LGBTQ+ artists) to your Chrome browser, visit the Chrome Web Store collection, select a theme and click “Add to Chrome.” You can also open a new tab and click the “Customize Chrome” button on the bottom right to explore background collections. To apply one of these wallpapers to your Chromebook, right-click your desktop, choose “Set wallpaper and style,” then select “Native American Artists.”

Source: Google Chrome


Telling powerful African stories through color

Editor’s note: Today’s post was contributed by Ravi Naidoo, the creator of Interactive Africa and Design Indaba, an annual three-day design conference held in Cape Town, South Africa. He discusses the Colors of Africa project, which showcases the best of African craft, product and industrial design, fashion, film, animation, graphic design, cuisine, music, jewelry and architecture.

African culture is joyful, expressive and vivid, and intrinsically linked to color: from rallying shades of liberation to evocative hues of optimism, color is embraced as an unspoken language. With a vibrant palette and a gift for storytelling, we as Africans tell powerful stories through color, and it is this unique phenomenon that led to the development of the ‘Colors of Africa’ project. This ambitious initiative shares stories from Africa, by Africans.

Design Indaba collaborated with Google Arts & Culture on this brand-new, cross-continental project. In order to tell the full story of such a diverse continent, we approached 60 African creatives and asked them each to create a unique work that depicts their home country through the symbolism of color.

At the same time, we asked what being African meant to them. The resulting works and thoughts offer personal insights into African lived experience and add to the ever-evolving kaleidoscope that is the African continent.

The stories of each creative have been woven into a colorful tapestry, which is available on Google Arts & Culture. This bespoke online exhibit dives into each artist’s experience of their country, as well as the intricacies of life as an African. In addition to the exhibits, you can spin the kaleidoscope to explore and collect the colors of Africa. Experience the different countries and travel through Africa guided by the eyes of local artists.

Each work is a personal and completely unique experience of a country. Discover some of the colors of Africa below:

I invite you to discover more about each artist and artwork on the dedicated hub on Google Arts & Culture, or travel through the kaleidoscope here and share your colors with the world.

Posted by Ravi Naidoo, Founder & CEO, Design Indaba

3 ways to keep audiences engaged in a content-driven world

Consumers navigate a widening variety of digital content from text to image to video—and they choose to spend their time in the experiences that feel most natural and intuitive for them. Working with Ipsos, we've identified three ways to keep your audience engaged across platforms throughout the holiday season, plus new retail-ready formats and creative tools to help you drive growth into the new year.

1. Create more tailored experiences for deeper engagement

Over half of mobile consumers use Google and YouTube alongside other platforms when researching products or brands to try, and 91% say they took action immediately after discovering new products or brands. Once they do, consumers also expect brands to deliver experiences that are relevant and helpful. In fact, two in five say they enjoy exploring Google feeds for shopping ideas because they are more personalized.

With product feeds for Discovery ads now available in beta, advertisers can show shoppers items based on their interests and intent. Specifically, individual retailers can now use lifestyle images and short text with their Google Merchant Center catalog to deliver more relevant ad experiences. For example, a consumer interested in fitness and fashion might see sneakers in a variety of colors and styles from a new brand in their content feed on YouTube or the Google app.

An example of ads for Puma across YouTube, Discover, and Gmail.

Puma switched from standard Discovery ads to product feeds to promote its catalog during key seasons, and saw a 46% increase in return on ad spend while lowering costs by 19%. “Product feeds for Discovery ads offered more ways to expand our social-style assets across new platforms,” says Ashley Anderson, Senior Director of Digital Marketing at Puma. “Personalization and great performance made it easy to efficiently scale our spend.”

For tips to help you craft engaging Discovery ads with product feeds, see our creative best practices.

2. Take an asset-first approach for bigger creative impact

Keeping today's consumer engaged across platforms requires creative finesse at scale—and a multi-asset approach for campaigns like social and video to enable more visual and authentic storytelling. Nearly half of consumers say they are more likely to purchase a new product or brand they see in a video ad.

You can now scale your assets to Shorts, YouTube's new short-form video experience, to drive visual momentum with consumers watching everything from workout clips to recipe walkthroughs. You can make the most of this mobile canvas by bringing your best vertical video and image assets to the Shorts experience with Video action and now Discovery campaigns for images.

We've seen this asset-first creative approach across campaigns drive better results: more than 60% of advertisers who combine Video action campaigns with Discovery ads see incremental conversions at or below their original cost-per-action.

An image that shows the research insight about consumers’ preference for authentic and diverse creative

Source: Google/Ipsos, Consumer Feed Behavior Research, August 2022

3. Upgrade your storytelling with authentic and diverse representation

And last, but not least: building assets to scale doesn't have to mean relying on stock photos and generic visuals. 43% of consumers say that they are more likely to click on ads featuring people from a variety of backgrounds, and over half say that they are more likely to click on ads that feature people using a product. For the biggest impact with your audience, frame your visuals and calls to action around the wide variety of consumers who reflect your market.

We're committed to helping you build more helpful and relevant ads that help consumers engage with your business. Check out our latest tools and resources to learn more.

We hope these insights and creative tools help you drive growth now and in the new year with more effective, engaging ads.


Beta Channel Update for ChromeOS

The Beta channel is being updated to 108.0.5359.24 (Platform version: 15183.28.0) for most ChromeOS devices. This build contains a number of bug fixes and security updates and will be rolled out over the next couple of days.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.


Google ChromeOS

Manage projects & tasks with a new timeline view on Google Sheets

This announcement was made at Google Cloud Next ’22. Visit the Cloud Blog to learn more about the latest Google Workspace innovations for the ever-changing world of work.


What’s changing

To extend the power of smart canvas, we’re introducing an interactive timeline view that allows you to track projects in Google Sheets. This new visual layer displays project information stored in Sheets, such as the task start and end date, description, and owner.

Who’s impacted 

End users 


Why you’d use it 

The timeline view enables you to easily interact with project information and can help you manage things like marketing campaigns, project milestones, schedules, cross-team collaboration, and more. 


Additional details 

By clicking on a card within the timeline, you can view more information about the project in the sidebar. You can also view your timeline at various time intervals (day, week, month, quarter, year, and multiyear).


Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: To create a timeline, navigate to Insert > Timeline, select a data range, and configure the attributes in the timeline settings sidebar. Once created, you can view the timeline at different time intervals, jump to the current date, change the visual appearance of the timeline by adjusting spacing or using colors, and more. Visit the Help Center to learn more about timeline view.

Rollout pace 


Availability 

  • Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Education Standard, the Teaching and Learning Upgrade, and Nonprofits customers 
  • Not available to legacy G Suite Basic and Business customers 
  • Not available to users with personal Google Accounts 

Resources 

Roadmap 

Beta Channel Update for Desktop

 The Beta channel has been updated to 108.0.5359.30 for Mac and Linux and 108.0.5359.29 for Windows.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find new issues, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Srinivas Sista, Google Chrome