Announcing v202208 of the Google Ad Manager API

We're pleased to announce that v202208 of the Google Ad Manager API is available starting today, August 16th. This release adds support for AMP URLs in ThirdPartyCreatives, InventorySizeTargeting in traffic forecasts, and Real-time video reporting.
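As a rough illustration of the ThirdPartyCreative change, the sketch below uses the googleads Python client library to look up a third-party creative and set an AMP URL on it. The AMP field name (ampRedirectUrl) and the creative ID are assumptions for illustration; check the v202208 reference documentation for the exact schema.

```python
# Minimal sketch using the googleads Python client library against v202208.
# The AMP URL field name on ThirdPartyCreative is an assumption; consult the
# v202208 reference docs for the field actually added in this release.
from googleads import ad_manager

client = ad_manager.AdManagerClient.LoadFromStorage()
creative_service = client.GetService('CreativeService', version='v202208')

# Look up an existing creative by ID (hypothetical ID of a ThirdPartyCreative).
statement = (ad_manager.StatementBuilder(version='v202208')
             .Where('id = :id')
             .WithBindVariable('id', 123456789)
             .Limit(1))
response = creative_service.getCreativesByStatement(statement.ToStatement())

if 'results' in response and len(response['results']):
    creative = response['results'][0]
    # Assumed field name for the new AMP URL support on ThirdPartyCreative.
    creative['ampRedirectUrl'] = 'https://example.com/amp-creative'
    creative_service.updateCreatives([creative])
```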

For the full list of changes, check the release notes. Feel free to reach out to us on the Ad Manager API forum with any API-related questions.

Conveniently connect site visitors with social channels in new Sites

Quick summary 

Site editors can now insert stylized social media links into pages within their site. This update enables you to more conveniently connect site visitors with additional information and content on your social channels. 



Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: In a Site, navigate to Insert > Social Links. Supported links automatically generate the corresponding icon, or you can upload your own icon for each link. You can then drag the tile to the desired location on a page and adjust the styling, size, color, and alignment. Visit the Help Center to learn more about inserting social links in a Site. 

Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google accounts 


See how much noise is being removed during Google Meet video calls

Quick summary 

Google Meet can remove background noises such as typing, construction sounds, or background chatter. Noise cancellation helps make video calls more productive by reducing distractions that can divert attention away from the content of the meeting, so you can stay focused on the conversation. 


The voice indicator now shows how much noise is being removed. If you see the ring expanding, Meet is reducing the background noise. 

  • If you don't see the voice indicator, no audio (voice or noise) is coming from you. 
  • A burst of noise is being filtered out while you speak, and other participants will only hear your voice. This visual cue is triggered at most once per meeting. 
  • Noise is being filtered out on a continuous basis while you speak, and other participants will only hear your voice. The noise level is reflected by the ring size.





Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: The noise indicator is displayed when noise cancellation is enabled. Visit the Help Center to learn how to enable noise cancellation. 

Availability 

  • Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, and Workspace Individual customers. 
  • Not available to Google Workspace Essentials, Business Starter, Education Fundamentals, Frontline, and Nonprofits, as well as G Suite Basic and Business customers. 
  • Not available to users with personal Google Accounts. 


Towards Helpful Robots: Grounding Language in Robotic Affordances

Over the last several years, we have seen significant progress in applying machine learning to robotics. However, robotic systems today are capable of executing only very short, hard-coded commands, such as “Pick up an apple,” because they tend to perform best with clear tasks and rewards. They struggle with learning to perform long-horizon tasks and reasoning about abstract goals, such as a user prompt like “I just worked out, can you get me a healthy snack?”

Meanwhile, recent progress in training language models (LMs) has led to systems that can perform a wide range of language understanding and generation tasks with impressive results. However, these language models are inherently not grounded in the physical world due to the nature of their training process: a language model generally does not interact with its environment nor observe the outcome of its responses. This can result in it generating instructions that may be illogical, impractical or unsafe for a robot to complete in a physical context. For example, when prompted with “I spilled my drink, can you help?” the language model GPT-3 responds with “You could try using a vacuum cleaner,” a suggestion that may be unsafe or impossible for the robot to execute. When asked the same question, the FLAN language model apologizes for the spill with “I’m sorry, I didn’t mean to spill it,” which is not a very useful response. Therefore, we asked ourselves: is there an effective way to combine advanced language models with robot learning algorithms to leverage the benefits of both?

In “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”, we present a novel approach, developed in partnership with Everyday Robots, that leverages advanced language model knowledge to enable a physical agent, such as a robot, to follow high-level textual instructions for physically-grounded tasks, while grounding the language model in tasks that are feasible within a specific real-world context. We evaluate our method, which we call PaLM-SayCan, by placing robots in a real kitchen setting and giving them tasks expressed in natural language. We observe highly interpretable results for temporally-extended complex and abstract tasks, like “I just worked out, please bring me a snack and a drink to recover.” Specifically, we demonstrate that grounding the language model in the real world nearly halves errors over non-grounded baselines. We are also excited to release a robot simulation setup where the research community can test this approach.

With PaLM-SayCan, the robot acts as the language model’s “hands and eyes,” while the language model supplies high-level semantic knowledge about the task.

A Dialog Between User and Robot, Facilitated by the Language Model
Our approach uses the knowledge contained in language models (Say) to determine and score actions that are useful towards high-level instructions. It also uses an affordance function (Can) that enables real-world grounding and determines which actions are possible to execute in a given environment. Using the PaLM language model, we call this approach PaLM-SayCan.

Our approach selects skills based on what the language model scores as useful to the high-level instruction and what the affordance model scores as possible.

Our system can be seen as a dialog between the user and robot, facilitated by the language model. The user starts by giving an instruction that the language model turns into a sequence of steps for the robot to execute. This sequence is filtered using the robot’s skillset to determine the most feasible plan given its current state and environment. The model determines the probability of a specific skill successfully making progress toward completing the instruction by multiplying two probabilities: (1) task-grounding, i.e., how useful the skill’s language description is for the instruction, and (2) world-grounding, i.e., how feasible the skill is in the current state.
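As a rough sketch of this scoring rule (not the released implementation), the combined score for a candidate skill is simply the product of the two terms; the function names and signatures below are illustrative placeholders.

```python
# Illustrative sketch of the per-skill combined score; the callables are placeholders.
def combined_score(instruction, history, state, skill, lm_score, affordance_score):
    """Task-grounding (how useful the LM finds the skill, given the instruction and
    the skills executed so far) times world-grounding (feasibility in the current state)."""
    return lm_score(instruction, history, skill) * affordance_score(state, skill)
```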

There are additional benefits of our approach in terms of its safety and interpretability. First, by allowing the LM to score different options rather than generate the most likely output, we effectively constrain the LM to only output one of the pre-selected responses. In addition, the user can easily understand the decision making process by looking at the separate language and affordance scores, rather than a single output.

PaLM-SayCan is also interpretable: at each step, we can see the top options it considers based on their language score (blue), affordance score (red), and combined score (green).

Training Policies and Value Functions
Each skill in the agent’s skillset is defined as a policy with a short language description (e.g., “pick up the can”), represented as embeddings, and an affordance function that indicates the probability of completing the skill from the robot’s current state. To learn the affordance functions, we use sparse reward functions set to 1.0 for a successful execution, and 0.0 otherwise.
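Concretely, the training target for each affordance function is driven by a sparse reward of the following form; this is a minimal sketch, and the success signal is whatever detector each skill uses to judge completion.

```python
# Sparse reward for affordance/value-function training (sketch).
def sparse_reward(episode_succeeded: bool) -> float:
    """Return 1.0 for a successful execution of the skill, 0.0 otherwise."""
    return 1.0 if episode_succeeded else 0.0
```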

We use image-based behavioral cloning (BC) to train the language-conditioned policies and temporal-difference-based (TD) reinforcement learning (RL) to train the value functions. To train the policies, we collected 68,000 demonstrations performed by 10 robots over 11 months and added 12,000 successful episodes filtered from autonomous runs of the learned policies. We then learned the language-conditioned value functions using MT-Opt in the Everyday Robots simulator. The simulator complements our real robot fleet with a simulated version of the skills and environment, which is transformed using RetinaGAN to reduce the simulation-to-real gap. We bootstrapped the simulation policies’ performance by using demonstrations to provide initial successes, and then continuously improved RL performance with online data collection in simulation.

Given a high-level instruction, our approach combines the probabilities from the language model with the probabilities from the value function (VF) to select the next skill to perform. This process is repeated until the high-level instruction is successfully completed.
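Putting the pieces together, the overall decision procedure can be sketched as a simple loop that repeatedly scores the skill library and executes the winner; the "done" terminating skill and the helper callables are assumptions for illustration, not the released code.

```python
# Illustrative SayCan-style control loop; skill names and helpers are placeholders.
def run_instruction(instruction, state, skills, lm_score, affordance_score,
                    execute, max_steps=20):
    """Greedily pick and execute skills until the assumed 'done' skill is selected."""
    history = []
    for _ in range(max_steps):
        skill = max(
            skills,
            key=lambda s: lm_score(instruction, history, s) * affordance_score(state, s),
        )
        if skill == 'done':            # assumed terminating skill
            break
        state = execute(state, skill)  # the robot runs the skill's low-level policy
        history.append(skill)
    return history
```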

Performance on Temporally-Extended, Complex, and Abstract Instructions
To test our approach, we use robots from Everyday Robots paired with PaLM. We place the robots in a kitchen environment containing common objects and evaluate them on 101 instructions to test their performance across various robot and environment states, instruction language complexity and time horizon. Specifically, these instructions were designed to showcase the ambiguity and complexity of language rather than to provide simple, imperative queries, enabling queries such as “I just worked out, how would you bring me a snack and a drink to recover?” instead of “Can you bring me water and an apple?”

We use two metrics to evaluate the system’s performance: (1) the plan success rate, indicating whether the robot chose the right skills for the instruction, and (2) the execution success rate, indicating whether it performed the instruction successfully. We compare two language models, PaLM and FLAN (a smaller language model fine-tuned on instruction answering), with and without affordance grounding, as well as the underlying policies running directly on natural language (Behavioral Cloning in the table below). The results show that the system using PaLM with affordance grounding (PaLM-SayCan) chooses the correct sequence of skills 84% of the time and executes them successfully 74% of the time, reducing errors by 50% compared to FLAN and compared to PaLM without robotic grounding. This is particularly exciting because it represents the first time we can see how an improvement in language models translates to a similar improvement in robotics. This result indicates a potential future where robotics is able to ride the wave of progress that we have been observing in language models, bringing these subfields of research closer together.

Algorithm              Plan success     Execution success
PaLM-SayCan            84%              74%
PaLM                   67%              -
FLAN-SayCan            70%              61%
FLAN                   38%              -
Behavioral Cloning     0%               0%
PaLM-SayCan halves errors compared to PaLM without affordances and compared to FLAN over 101 tasks.
SayCan demonstrated successful planning for 84% of the 101 test instructions when combined with PaLM.

If you're interested in learning more about this project from the researchers themselves, please check out the video below:

Conclusion and Future Work
We’re excited about the progress that we’ve seen with PaLM-SayCan, an interpretable and general approach to leveraging knowledge from language models that enables a robot to follow high-level textual instructions to perform physically-grounded tasks. Our experiments on a number of real-world robotic tasks demonstrate the ability to plan and complete long-horizon, abstract, natural language instructions at a high success rate. We believe that PaLM-SayCan’s interpretability allows for safe real-world user interaction with robots. As we explore future directions for this work, we hope to better understand how information gained via the robot’s real-world experience could be leveraged to improve the language model and to what extent natural language is the right ontology for programming robots. We have open-sourced a robot simulation setup, which we hope will provide researchers with a valuable resource for future research that combines robotic learning with advanced language models. The research community can visit the project’s GitHub page and website to learn more.

Acknowledgements
We’d like to thank our coauthors Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Kelly Fu, Keerthana Gopalakrishnan, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. We’d also like to thank Yunfei Bai, Matt Bennice, Maarten Bosma, Justin Boyd, Bill Byrne, Kendra Byrne, Noah Constant, Pete Florence, Laura Graesser, Rico Jonschkowski, Daniel Kappler, Hugo Larochelle, Benjamin Lee, Adrian Li, Suraj Nair, Krista Reymann, Jeff Seto, Dhruv Shah, Ian Storz, Razvan Surdulescu, and Vincent Zhao for their help and support in various aspects of the project. And we’d like to thank Tom Small for creating many of the animations in this post.

Source: Google AI Blog


Making robots more helpful with language

Even the simplest human tasks are unbelievably complex. The way we perceive and interact with the world requires a lifetime of accumulated experience and context. For example, if a person tells you, “I am running out of time,” you don’t immediately worry they are jogging on a street where the space-time continuum ceases to exist. You understand that they’re probably coming up against a deadline. And if they hurriedly walk toward a closed door, you don’t brace for a collision, because you trust this person can open the door, whether by turning a knob or pulling a handle.

A robot doesn’t innately have that understanding. And that’s the inherent challenge of programming helpful robots that can interact with humans. We know it as “Moravec's paradox” — the idea that in robotics, it’s the easiest things that are the most difficult to program a robot to do. This is because we’ve had all of human evolution to master our basic motor skills, but relatively speaking, humans have only just learned algebra.

In other words, there’s a genius to human beings — from understanding idioms to manipulating our physical environments — where it seems like we just “get it.” The same can’t be said for robots.

Today, robots by and large exist in industrial environments, and are painstakingly coded for narrow tasks. This makes it impossible for them to adapt to the unpredictability of the real world. That’s why Google Research and Everyday Robots are working together to combine the best of language models with robot learning.

Called PaLM-SayCan, this joint research uses PaLM — or Pathways Language Model — in a robot learning model running on an Everyday Robots helper robot. This effort is the first implementation that uses a large-scale language model to plan for a real robot. It not only makes it possible for people to communicate with helper robots via text or speech, but also improves the robot’s overall performance and ability to execute more complex and abstract tasks by tapping into the world knowledge encoded in the language model.

Using language to improve robots

PaLM-SayCan enables the robot to understand the way we communicate, facilitating more natural interaction. Language is a reflection of the human mind’s ability to assemble tasks, put them in context and even reason through problems. Language models also contain enormous amounts of information about the world, and it turns out that can be pretty helpful to the robot. PaLM can help the robotic system process more complex, open-ended prompts and respond to them in ways that are reasonable and sensible.

PaLM-SayCan shows that a robot’s performance can be improved simply by enhancing the underlying language model. When the system was integrated with PaLM, compared to a less powerful baseline model, we saw a 14% improvement in the planning success rate, or the ability to map a viable approach to a task. We also saw a 13% improvement on the execution success rate, or ability to successfully carry out a task. This is half the number of planning mistakes made by the baseline method. The biggest improvement, at 26%, is in planning long horizon tasks, or those in which eight or more steps are involved. Here’s an example: “I left out a soda, an apple and water. Can you throw them away and then bring me a sponge to wipe the table?” Pretty demanding, if you ask me.

Making sense of the world through language

With PaLM, we’re seeing new capabilities emerge in the language domain such as reasoning via chain of thought prompting. This allows us to see and improve how the model interprets the task. For example, if you show the model a handful of examples with the thought process behind how to respond to a query, it learns to reason through those prompts. This is similar to how we learn by showing our work on our algebra homework.

PaLM-SayCan uses chain of thought prompting, which interprets the instruction in order to score the likelihood of completing the task

So if you ask PaLM-SayCan, “Bring me a snack and something to wash it down with,” it uses chain of thought prompting to recognize that a bag of chips may be a good snack, and that “wash it down” means bring a drink. Then PaLM-SayCan can respond with a series of steps to accomplish this. While we’re early in our research, this is promising for a future where robots can handle complex requests.
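To make the idea concrete, a chain-of-thought style prompt for this kind of request might look like the hypothetical few-shot example below; the wording and format are illustrative, not the prompt actually used in PaLM-SayCan.

```python
# Hypothetical few-shot, chain-of-thought style prompt (illustrative only).
COT_PROMPT = """\
Human: I spilled my drink, can you help?
Robot thought: A spill should be wiped up, and a sponge can wipe up liquid.
Robot plan: 1. find a sponge, 2. pick up the sponge, 3. bring it to the person, 4. done.

Human: Bring me a snack and something to wash it down with.
Robot thought: A bag of chips is a snack, and "wash it down" means a drink such as water.
Robot plan: 1."""

# A language model asked to continue this text would complete the numbered plan.
```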

Grounding language through experience

Complexity exists in both language and the environments around us. That’s why grounding artificial intelligence in the real world is a critical part of what we do in Google Research. A language model may suggest something that appears reasonable and helpful, but may not be safe or realistic in a given setting. Robots, on the other hand, have been trained to know what is possible given the environment. By fusing language and robotic knowledge, we’re able to improve the overall performance of a robotic system.

Here’s how this works in PaLM-SayCan: PaLM suggests possible approaches to the task based on language understanding, and the robot models do the same based on the feasible skill set. The combined system then cross-references the two to help identify more helpful and achievable approaches for the robot.

By combining language and robotic affordances, PaLM-SayCan breaks down the requested task to perform it successfully

For example, if you ask the language model, “I spilled my drink, can you help?,” it may suggest you try using a vacuum. This seems like a perfectly reasonable way to clean up a mess, but generally, it’s probably not a good idea to use a vacuum on a liquid spill. And if the robot can’t pick up a vacuum or operate it, it’s not a particularly helpful way to approach the task. Together, the two may instead be able to realize “bring a sponge” is both possible and more helpful.

Experimenting responsibly

We take a responsible approach to this research and follow Google’s AI Principles in the development of our robots. Safety is our number-one priority and especially important for a learning robot: It may act clumsily while exploring, but it should always be safe. We follow all the tried-and-true principles of robot safety, including risk assessments, physical controls, safety protocols and emergency stops. We also always implement multiple levels of safety, such as force limitations and algorithmic protections, to mitigate risky scenarios. PaLM-SayCan is constrained to commands that are safe for a robot to perform and was also developed to be highly interpretable, so we can clearly examine and learn from every decision the system makes.

Making sense of our worlds

Whether it’s moving about busy offices — or understanding common sayings — we still have many mechanical and intelligence challenges to solve in robotics. So, for now, these robots are just getting better at grabbing snacks for Googlers in our micro-kitchens.

But as we continue to uncover ways for robots to interact with our ever-changing world, we’ve found that language and robotics show enormous potential for the helpful, human-centered robots of tomorrow.

Get to know Sophie, the 2022 Doodle for Google contest winner

For this year’s Doodle for Google contest, we asked students across the country to illustrate a Doodle around the prompt, “I care for myself by…” In July, we announced the national finalists, and the thoughtfulness, heart and artistry of one artist stood out in particular. Today, we’re announcing Sophie Araque-Liu of Florida is our 2022 contest winner!

Sophie’s Doodle, titled “Not Alone,” speaks to the importance of leaning on your support system and asking for help in tough times. I chatted with Sophie to learn more about her and the meaning behind her Doodle, which is on the Google.com homepage today.

How did you start making art?

I started making art by doodling in my notebooks in class. Soon it shifted from something I did to pass the time when I was bored to something I looked forward to and loved to do.

Why did you enter the Doodle for Google contest?

I entered the Doodle for Google contest this year, because I really wanted to give back to my parents. I feel like it’s very hard for me to show them just how much I appreciate them, so I’m grateful for the chance to be able to show them just how much I love them and give back to them in any way I can.


Can you share why you chose to focus on the theme of asking for help?

I chose to focus on the theme of asking for help based on my own experiences. A couple years ago, I was struggling a lot mentally and I was honestly pretty embarrassed and scared to tell my friends and family. But when I did open up to them, I was met with so much love and support. So I really wanted to encourage others to not be afraid to look for help if they need it!

Why is self-care important to you?

Self care is important to me because I believe that mental health is just as important as physical health. For me and for so many other people, it can be easy to sacrifice too much of yourself and to push yourself too hard. I want other people to know that you are also valuable, and you are worth something too, just like anyone else.

How does it feel to be the winner of this year’s Doodle for Google contest?

It feels incredible! I truly did not think that I would win, so I am so surprised and happy! I’m really, really proud of myself for making it so far, and I know the competition was not easy at all. I think I’m honestly in shock and I still haven’t processed it yet. It’s just so amazing and every time I think about it I can’t help but smile hard!

Congratulations, Sophie! Be sure to bookmark the Doodle for Google website for updates around the 2023 contest, set to open submissions again this winter.

Updated user interface for managing email quarantines

Quick summary 

In the coming weeks, you will see a new user interface when using the email quarantine tool. This update will bring the email quarantine experience in line with other tools in the Admin console, making it more intuitive to navigate and use. Quarantines help minimize data loss, protect confidential information, and manage message attachments. 


Some improvements you’ll notice are: 
  • A collapsible side panel for filtering quarantines 
  • A paginated table view displaying quarantines with custom names row by row 
  • The option to view the original, raw content of a selected message for easier referencing. 

Quarantines with custom names are displayed row by row


Original, raw content can be viewed for each quarantine



Getting started 

  • Admins: The admin quarantine can be found in the Admin console at Apps > Google Workspace > Gmail > Manage Quarantines. Visit the Help Center to learn more. 
  • End users: There is no end user impact. 

Availability 

  • Available to all Google Workspace editions, as well as legacy G Suite Basic and Business customers 




Bringing readers even more local news

Local news is local knowledge. It’s shared understanding. It’s a chronicle of the places we live and the culture that defines them. Local news is essential to people and their communities. But at the same time, we also recognize the job of gathering and monetizing news is increasingly challenging for local news publishers.

Today, we’re hosting more than 100 American and Canadian local news leaders at our annual Community News Summit in Chicago. Journalists and business leaders are sharing their successes and challenges in running small, community-oriented news organizations. The program features hands-on workshops on specific Google products and tools, best practices on topics such as search and sustainability, and discussion about local news consumer behavior.

Through our products, partnerships and programs, like the Google News Initiative, Google has long worked to help people cut through the noise and connect to the stories that matter most in their local communities. In June, we announced a redesigned, more customizable Google News experience for desktop to help people dive deeper into important stories and more easily find local news from around the world.

The newly redesigned Google News on desktop, with local news now easier to find.


We’ve also improved our systems so authoritative local news sources appear more often alongside national publications, when relevant, in our general news features such as Top Stories. This improvement ensures people will see authoritative local stories when they’re searching for news, helping both the brand and the content of news publishers reach more people.

We also recently introduced a new way to help people identify stories that have been frequently cited by other news organizations, giving them a simple way to find the most helpful or relevant information for a news story. This label appears on Top Stories, and you can find it on anything from an investigative article, to an interview, an announcement, a press release or a local news story, as long as other publishers indicate its relevance by linking to it. The highly cited label is currently available in English in the U.S. with plans to expand globally over the coming weeks.


An example of new information literacy tips on notices for rapidly evolving situations.

We work closely with publishers and news industry associations to build a sustainable digital future for local news media. Having a digital news revenue strategy through subscribers and advertising is a key component for local news publishers to be sustainable. That’s why we're partnering with six different news associations in the U.S., each serving a unique constituency of publishers, to develop custom programs that support their members’ digital capabilities.

In addition to publishers, we’re also working with local broadcasters. The National Association of Broadcasters’ PILOT innovation division recently launched a Google News Initiative-supported program designed to improve online audience engagement and monetization for local broadcasters. The program helps stations implement their first-party data and direct-to-consumer business models.

We’ve also launched a $15 million digital and print ad campaign placed exclusively with U.S. local news media. The campaign directly supports publishers through the purchase of ad space in their papers and on their websites, and highlights our work with local publishers across the country. We’re encouraging readers everywhere to support their local news publishers, and are showcasing publishers who have made significant contributions to their communities through innovative reporting.

An example of a local ad campaign that says 'we're supporting the local news our communities need.'

Local news publishers are the heart of the communities they serve. They are one of our most trusted sources of information that impacts our daily lives. Their stories connect us to our neighbors, hold power to account, drive civic engagement and more. We hope you’ll join us and support local publishers in your area by subscribing, donating or advertising today. Together, we can help ensure a sustainable future for local news and all who depend on it.

Supporting Natural Language Processing (NLP) in Africa

Language is what connects us to each other and the world around us. While Africa is home to a third of the world's languages, technology is not yet available for many of its languages. This is an important challenge to tackle because language is more than a vehicle for communication. It is also a marker of identity, belonging, and opportunity. This is why we want to make sure you can understand and be understood, in any language of your choosing. It's a significant technical challenge to make this dream a reality, but we’re committed to and working towards this goal.


One of the challenges everyone faces in this space is the scarcity of machine readable language data which can be used to build technology. For many languages, it is difficult to find or it simply does not exist. Diversity gaps in Natural Language Processing (NLP) education and academia also narrow representation among language technologists working on lesser-resourced languages. Democratizing access to underrepresented languages data and increasing NLP education helps drive NLP research and advance language technology.


As part of our continued commitment and investment in digital transformation in Africa, Google teams have been working on programs to advance language technologies that serve the region, such as: adding 24 new languages to Google Translate, announced earlier this year at I/O (including Bambara, Ewe, Krio, Lingala, Luganda, Tsonga and Twi), researching how to build speech recognition in African languages, and supporting local researchers through initiatives like Lacuna Fund. Community initiatives launched in India have expanded to Africa, resulting in open-sourced crowdsourced datasets for speech applications in Nigerian English and Yoruba, and new community initiatives and workshops like Explore ML with Crowdsource are gaining momentum in multiple African countries. We also hosted our first community workshop in the field of NLP and African languages at our growing AI research center in Ghana, which is also looking into how to advance NLP for African languages.


One more recent example of our language initiatives on the continent comes from a partnership with Africans to invest in African languages and NLP technology: in collaboration with Zindi, a social enterprise and professional network for data science, we organized a series of Natural Language Processing (NLP) hackathons in Africa. The series included an Africa Automatic Speech Recognition (ASR) workshop and three hackathon challenges centered on model training for speech recognition, sentiment analysis, and speech data collection.


The interactive workshop aimed to increase awareness and skills for NLP in Africa, especially among researchers, students, and data scientists new to NLP. The workshop provided a beginner-friendly introduction to NLP and ASR, including a step by step guide on how to train a speech model for a new language. Participants also learned about the challenges and progress of work in the Africa NLP space and opportunities to get involved with data science and grow their careers.

 




In the Intro to Speech Recognition Africa Challenge, participants collected speech data for African languages and trained their own speech recognition models with it. This challenge generated new datasets in African languages, including the open-source datasets released by the challenge winners in Fongbe, Wolof, Swahili, Baule, Dendi, Chichewa and Khartoum Arabic, which enables further research, collaboration, and development of technology for these languages.


We partnered with Data Scientists Network (DSN) to organize the West Africa Speech Recognition Challenge, which, according to Toyin Adekanmbi, the Executive Director of DSN, gave participants an “immersive experience to sharpen their skills as they learned to solve local problems”. Participants worked to train their own speech-recognition model for Hausa, spoken by an estimated 72 million people, using open source data from the Mozilla Common Voice platform.
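As a hedged sketch of getting started with that data, the snippet below loads the Hausa subset of Mozilla Common Voice through the Hugging Face datasets library; the dataset identifier and version are assumptions, and access requires accepting Mozilla's terms and authenticating with Hugging Face.

```python
# Sketch: load Hausa ("ha") speech data from Mozilla Common Voice via Hugging Face.
# The dataset ID/version below is an assumption; access requires accepting the
# dataset's terms and logging in with a Hugging Face token.
from datasets import load_dataset

common_voice_ha = load_dataset(
    "mozilla-foundation/common_voice_11_0", "ha", split="train"
)
print(common_voice_ha[0]["sentence"])  # transcript paired with the audio clip
```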


In the Swahili Social Media Sentiment Analysis Challenge, held across Tanzania, Malawi, Kenya, Rwanda and Uganda, participants open sourced solutions of models that classified if the sentiment of a tweet was positive, negative, or neutral. These challenges allowed participants with similar interests to connect with each other in a supported environment and improve their machine learning and NLP skills.


Our focus to empower people to use technology in the language of their choice continues and, across many teams, we are on a mission to advance language technologies for African languages and increase NLP skills and education in the region, so that we can collectively build a world that is truly accessible for everyone, irrespective of the language they speak.




Posted by Connie Tao & Clara Rivera, Program Managers for Google AI



More control over accessibility preferences in Docs, Sheets, Slides, and Drawings

Quick summary

Over the years, we’ve launched features that support our ongoing accessibility efforts and help ensure our products work well for everyone. For users of screen readers, braille devices, screen magnifiers, and more, we're improving the ability to adjust your accessibility preferences for Docs, Sheets, Slides, and Drawings separately. 

Rather than having the same accessibility settings apply across these products, you’re now able to set preferences for each product individually. We expect this change to make it easier to ensure accessibility settings are personalized to best meet each user’s needs. 

Accessibility settings can now be personalized for Docs, Sheets, Slides, and Drawings

Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: In your document, spreadsheet, slide deck, or drawing, navigate to Tools > Accessibility > select your preferred settings. Visit the Help Center to learn more about accessibility. 

 Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 
