Google’s new Viger office is an homage to Montréal

Google first laid down roots in Québec in 2004, when three engineers worked out of shared coworking spaces no larger than vestibules — including me; I was employee number five in Montréal! Over the years, Google expanded. For many years we called McGill College home, and it became a place where team members shared milestones, forged community and built software that touches the lives of so many in Québec and around the globe.

Today, we’re proud to continue that commitment as we enter a new frontier for Google in Québec by celebrating the official opening of Google Montréal Viger, a sustainability-focused office in downtown Montréal on the periphery of the city’s beloved Old Port. We’re also announcing $2.75 million towards Québec’s tech ecosystem and digital skills training.

The new office is home to a variety of teams that work on some of the most crucial products and services Google offers worldwide, including cybersecurity, AI research, Chrome and Cloud.

Explore Google Montréal’s Viger space

Montréal is often referred to as a medley of neighbourhoods, each with its own distinct identity. The retrofitted historical building pays homage to Montréal by reflecting the essence of five of the city’s most beloved neighbourhoods — Little Italy, Le Village, Le Plateau, Chinatown and Old Port. Every inch of the office celebrates the spirit of this vibrant city, from the Farine Five Roses-style Google Montréal sign that greets you in the lobby to the playful nods, traditional elements and architectural detailing drawn from these historic neighbourhoods: Chinatown’s flock and damask, Little Italy’s artisan markets, Old Port’s industrial roots and Le Village’s festive spirit.

interior image of cafeteria

The building is designed to the latest LEED Gold sustainability standards, with enhanced ventilation that helps conserve energy and with sustainably sourced furniture and materials. Throughout the space you can discover carefully curated art from Québec artists, which aims to inspire employees and strengthen the connections Google maintains with local makers. Featured artists include Nadia Myre, a Montréal-based member of the Kitigan Zibi Anishinabeg First Nation, and Bryan Beyung, a street artist and painter born in Montréal into a Chinese-Cambodian refugee family.

To acknowledge Indigenous peoples’ connection to and stewardship of the land where Google Montréal stands, and to help remind Google team members of this history, we named many of our meeting rooms after local tree species in the Kanienʼkéha (Mohawk) language. The name selection process was conducted in consultation with the nearby Mohawk community of Kahnawake, and a native Kanienʼkéha speaker helped ensure spellings were correct and that traditions and cultural practices were respected.

interior area with couches and chairs

Supporting Québec’s tech ecosystem

Today, we are also excited to announce a commitment to Québec’s tech ecosystem of over $2.75 million. This funding will support curiosity-driven research that tackles some of the most important 21st century challenges and catalyze Québec’s future digital builders and innovators:

  • Google Canada is committed to pushing the boundaries of deep learning research and is renewing its collaboration with Mila by providing a $1.5M grant for 2023. The funding will help support fundamental AI research projects in areas like AI for Humanity, climate change and sustainable agriculture. Support will also be provided to increase successful participation of students and faculty from underrepresented groups in computing research careers.

Google.org Support:

  • To create new opportunities for students across Québec looking to build digital skills, Google.org is providing a grant to Digital Moment (formerly Kids Code Jeunesse) to help the organization launch the Quebec Digital Literacy Project, a program aimed at equipping teachers and students in grades 3-12 with digital skills.
  • To help job seekers in Québec gain the technical and digital skills required in the current job market, Google is now offering the Project Management and IT Support Google Career Certificates in French. The courses will be available to the public on Coursera, a global online learning platform. Google.org will also provide a grant to NPower Canada to deliver the Google Career Certificates in Québec. NPower Canada will offer need-based scholarships for the programs, distributed through the local workforce development nonprofits La Maison de l’Amitié and AIM CROIT.
  • Google and Google.org are providing additional support to local Québec organizations, like Startup Montreal, Pathways to Education, E2 Adventures, UpstartED, AI4Good Lab and Resilience Montréal. These organizations tackle digital skilling, startup community building, STEM education and job training, and often support some of Québec’s most underrepresented communities.

For over 16 years, I’ve watched our team in Québec grow and work on some of Google’s most beloved products. Today, I’m proud to continue this journey and expand our commitment to Québec’s tech ecosystem. I look forward to what the next 20 years bring.

How we’re using AI to help address the climate crisis


Communities around the world are facing the effects of climate change — from devastating floods and wildfires to challenges around food security. As global leaders meet in Egypt for COP27, a key area of focus will be how we can work together to address the climate crisis and implement sustainable solutions. At Google, we’re investing in technologies that can help communities prepare for and respond to climate-related disasters and threats.

Tools to alert people and governments about immediate risks

Natural disasters are increasing in frequency and intensity due to climate change. As part of our Crisis Response efforts, we're working to bring trusted information to people in critical moments to keep them safe and informed. To do so, we rely on the research and development of our AI-powered technologies and longstanding partnerships with frontline emergency workers and organizations. Here’s a look at some of our crisis response efforts and new ways we’re expanding these tools.

  • Floods: Catastrophic damage from flooding affects more than 250 million people every year. In 2018, we launched our flood forecasting initiative that uses machine learning models to provide people with detailed alerts. In 2021, we sent 115 million flood alert notifications to 23 million people over Search and Maps, helping save countless lives. Today, we’re expanding our flood forecasts to river basins in 18 additional countries across Africa, Latin America and Southeast Asia. We’re also announcing the global launch of the new FloodHub, a platform that displays flood forecasts and shows when and where floods may occur to help people directly at risk and provide critical information to aid organizations and governments. This expansion in geographic coverage is possible thanks to our recent breakthroughs in AI-based flood forecasting models, and we’re committed to expanding to more countries.
An image of a FloodHub map showing areas where riverine floods may occur

The new Google FloodHub at g.co/floodhub shows forecasts for riverine floods. Forecasts are now available in 18 additional countries: Brazil, Colombia, Sri Lanka, Burkina Faso, Cameroon, Chad, Democratic Republic of Congo, Ivory Coast, Ghana, Guinea, Malawi, Nigeria, Sierra Leone, Angola, South Sudan, Namibia, Liberia, South Africa.

  • Wildfires: Wildfires affect hundreds of thousands of people each year, and are increasing in frequency and size. I experienced firsthand the need for accurate information when wildfires occur, and this inspired our crisis response work. We detect wildfire boundaries using new AI models based on satellite imagery and show their real-time location in Search and Maps. Since July, we’ve covered more than 30 big wildfire events in the U.S. and Canada, helping inform people and firefighting teams with over 7 million views in Search and Maps. Wildfire detection is now available in the U.S., Canada, Mexico and parts of Australia.

The location of the Pukatawagan fire in Manitoba, Canada.

  • Hurricanes: Access to authoritative forecasts and safety information about hurricanes can be life-saving. In the days before a hurricane in North America or a typhoon in Japan, detailed forecasts from authoritative sources appear on SOS Alerts in Search and Maps to show a storm’s predicted trajectory. We're also using machine learning to analyze satellite imagery after disasters and identify which areas need help. When Hurricane Ian hit Florida in September, this technology was deployed in partnership with Google.org grantee GiveDirectly to quickly allocate aid to those most affected.

Managing current and future climate impacts

Climate change poses a threat to our world's natural resources and food security. We’re working with governments, organizations and communities to provide information and technologies to help adapt to these changes.

  • Keeping cities greener and healthier: Extreme temperatures and poor air quality are increasingly common in cities and can impact public health. To mitigate this, our Project Green Light uses AI to optimize traffic lights at intersections around the world, with the aim of helping to minimize congestion and related pollution. Project Air View also brings detailed air quality maps to scientists, policymakers and communities. And we’re working to expand our Environmental Insights Explorer’s Tree Canopy Insights tool to hundreds of cities by the end of this year so they can use trees to lower street-level temperatures and improve quality of life.
  • Meeting the world’s growing demand for food: Mineral — a project from X, Alphabet’s moonshot factory — is working to build a more sustainable and productive food system. The team is joining diverse data sets in radically new ways — from soil and weather data to drone and satellite images — and using AI to reveal insights never before possible about what’s happening with crops. As part of our Startups For Sustainable Development program, we’re also supporting startups addressing food security. These include startups like OKO, which provides crop insurance to keep farmers in business in case of adverse weather events and has reached tens of thousands of farmers in Mali and Uganda.
  • Helping farmers protect their crops: Pest infestations can threaten entire crops and impact the livelihoods of millions. In collaboration with InstaDeep and the Food and Agriculture Organization of the United Nations, our team at the Google AI Center in Ghana is using AI to better detect locust outbreaks so that it's possible to implement control measures. In India, Google.org Fellows worked with Wadhwani AI to create an AI-powered app that helps identify and treat infestations of pests, resulting in a 20% reduction in pesticide sprays and a 26% increase in profit margins for farmers. Google Cloud is also working with agricultural technology companies to use machine learning and cloud services to improve crop yields.
  • Analyzing a changing planet: Using Google Cloud and Google Earth Engine, organizations and businesses can better assess and manage climate risks. For example, the U.S. Forest Service uses these tools to analyze land-cover changes to better respond to new wildfire threats and monitor the impacts of invasive insects, diseases and droughts. Similarly, the Bank of Montreal is integrating climate data — like precipitation trends — into its business strategy and risk management for clients.

AI already plays a critical role in addressing many urgent, climate-related challenges. It is important that we continue to invest in research and raise awareness about why we are doing this work. Google Arts and Culture has collaborated with artists on the Culture meets Climate collection so everyone can explore more perspectives on climate change. And at COP27 we hope to generate more awareness and engage in productive discussions about how to use AI, innovations, and shared data to help global communities address the changing climate.

3 ways AI is scaling helpful technologies worldwide

I was first introduced to neural networks as an undergraduate in 1990. Back then, many people in the AI community were excited about the potential of neural networks, which were impressive, but couldn’t yet accomplish important, real-world tasks. I was excited, too! I did my senior thesis on using parallel computation to train neural networks, thinking we only needed 32X more compute power to get there. I was way off. At that time, we needed 1 million times as much computational power.

A short 21 years later, with exponentially more computational power, it was time to take another crack at neural networks. In 2011, I and a few others at Google started training very large neural networks using millions of randomly selected frames from videos online. The results were remarkable. Without explicit training, the system automatically learned to recognize different objects (especially cats; the Internet is full of cats). This was one transformational discovery among a long string of AI successes that continues today — at Google and elsewhere.

I share my own history of neural networks to illustrate that, while progress in AI might feel especially fast right now, it’s come from a long arc of progress. In fact, prior to 2012, computers had a really difficult time seeing, hearing, or understanding spoken or written language. Over the past 10 years we’ve made especially rapid progress in AI.

Today, we’re excited about many recent advances in AI that Google is leading — not just on the technical side, but in responsibly deploying it in ways that help people around the world. That means deploying AI in Google Cloud, in our products from Pixel phones to Google Search, and in many fields of science and other human endeavors.

We’re aware of the challenges and risks that AI poses as an emerging technology. We were the first major company to release and operationalize a set of AI Principles, and following them has actually (and some might think counterintuitively) allowed us to focus on making rapid progress on technologies that can be helpful to everyone. Getting AI right needs to be a collective effort — involving not just researchers, but domain experts, developers, community members, businesses, governments and citizens.

I’m happy to make announcements in three transformative areas of AI today: first, using AI to make technology accessible in many more languages; second, exploring how AI might bolster creativity; and third, applying AI for social good, including climate adaptation.

1. Supporting 1,000 languages with AI

Language is fundamental to how people communicate and make sense of the world. So it’s no surprise it’s also the most natural way people engage with technology. But more than 7,000 languages are spoken around the world, and only a few are well represented online today. That means traditional approaches to training language models on text from the web fail to capture the diversity of how we communicate globally. This has historically been an obstacle in the pursuit of our mission to make the world’s information universally accessible and useful.

That’s why today we’re announcing the 1,000 Languages Initiative, an ambitious commitment to build an AI model that will support the 1,000 most spoken languages, bringing greater inclusion to billions of people in marginalized communities all around the world. This will be a multi-year undertaking – some may even call it a moonshot – but we are already making meaningful strides here and see the path clearly. Technology has been changing at a rapid clip – from the way people use it to what it’s capable of. Increasingly, we see people finding and sharing information via new modalities like images, videos, and speech. And our most advanced language models are multimodal – meaning they’re capable of unlocking information across these many different formats. With these seismic shifts come new opportunities.

spinning globe with languages

As part of this initiative and our focus on multimodality, we’ve developed a Universal Speech Model — or USM — that’s trained on over 400 languages, the largest language coverage of any speech model to date. As we expand on this work, we’re partnering with communities across the world to source representative speech data. We recently announced voice typing for 9 more African languages on Gboard, the result of working closely with researchers and organizations in Africa to create and publish data. And in South Asia, we are actively working with local governments, NGOs, and academic institutions to eventually collect representative audio samples from across all of the region’s dialects and languages.

2. Empowering creators and artists with AI

AI-powered generative models have the potential to unlock creativity, helping people across cultures express themselves using video, imagery, and design in ways that they previously could not.

Our researchers have been hard at work developing models that lead the field in terms of quality, generating images that human raters prefer over those from other models. We recently shared important breakthroughs, applying our diffusion model to video sequences and generating long, coherent videos from a sequence of text prompts. We can combine these techniques to produce video — and for the first time, today we’re sharing AI-generated super-resolution video:

We’ll soon be bringing our text-to-image generation technologies to AI Test Kitchen, which provides a way for people to learn about, experience, and give feedback on emerging AI technology. We look forward to hearing feedback from users on these demos in AI Test Kitchen Season 2. You’ll be able to build themed cities with “City Dreamer” and design friendly monster characters that can move, dance, and jump with “Wobble” — all by using text prompts.

In addition to 2D images, text-to-3D is now a reality with DreamFusion, which produces a three-dimensional model that can be viewed from any angle and can be composited into any 3D environment. Researchers are also making significant progress in the audio generation space with AudioLM, a model that learns to generate realistic speech and piano music by listening to audio only. In the same way a language model might predict the words and sentences that follow a text prompt, AudioLM can predict which sounds should follow after a few seconds of an audio prompt.

We're collaborating with creative communities globally as we develop these tools. For example, we're working with writers using Wordcraft, which is built on our state-of-the-art dialog system LaMDA, to experiment with AI-powered text generation. You can read the first volume of these stories at the Wordcraft Writers Workshop.

3. Addressing climate change and health challenges with AI

AI also has great potential to address the effects of climate change, including helping people adapt to new challenges. One of the worst is wildfires, which affect hundreds of thousands of people today, and are increasing in frequency and scale.

Today, I’m excited to share that we’ve advanced our use of satellite imagery to train AI models to identify and track wildfires in real time, helping predict how they will evolve and spread. We’ve launched this wildfire tracking system in the U.S., Canada and Mexico, and are rolling it out in parts of Australia. Since July, we’ve covered more than 30 big wildfire events in the U.S. and Canada, helping inform our users and firefighting teams with over 7 million views in Google Search and Maps.

wildfire alert on phone

We’re also using AI to forecast floods, another extreme weather pattern exacerbated by climate change. We’ve already helped communities to predict when floods will hit and how deep the waters will get — in 2021, we sent 115 million flood alert notifications to 23 million people over Google Search and Maps, helping save countless lives. Today, we’re sharing that we’re now expanding our coverage to more countries in South America (Brazil and Colombia), Sub-Saharan Africa (Burkina Faso, Cameroon, Chad, Democratic Republic of Congo, Ivory Coast, Ghana, Guinea, Malawi, Nigeria, Sierra Leone, Angola, South Sudan, Namibia, Liberia, and South Africa), and South Asia (Sri Lanka). We’ve used an AI technique called transfer learning to make it work in areas where there’s less data available. We’re also announcing the global launch of Google FloodHub, a new platform that displays when and where floods may occur. We’ll also be bringing this information to Google Search and Maps in the future to help more people to reach safety in flooding situations.

flood alert on a phone

Finally, AI is helping provide ever more access to healthcare in under-resourced regions. For example, we’re researching ways AI can help read and analyze outputs from low-cost ultrasound devices, giving parents the information they need to identify issues earlier in a pregnancy. We also plan to continue to partner with caregivers and public health agencies to expand access to diabetic retinopathy screening through our Automated Retinal Disease Assessment tool (ARDA). Through ARDA, we’ve successfully screened more than 150,000 patients in countries like India, Thailand, Germany, the United States, and the United Kingdom across deployed use and prospective studies — more than half of those in 2022 alone. Further, we’re exploring how AI can help your phone detect respiratory and heart rates. This work is part of Google Health’s broader vision, which includes making healthcare more accessible for anyone with a smartphone.

AI in the years ahead

Our advancements in neural network architectures, machine learning algorithms and new approaches to hardware for machine learning have helped AI solve important, real-world problems for billions of people. Much more is to come. What we’re sharing today is a hopeful vision for the future — AI is letting us reimagine how technology can be helpful. We hope you’ll join us as we explore these new capabilities and use this technology to improve people’s lives around the world.

Robots That Write Their Own Code

A common approach to controlling robots is to program them with code: routines to detect objects, sequenced commands to move actuators, and feedback loops to specify how the robot should perform a task. While these programs can be expressive, re-programming policies for each new task can be time-consuming and requires domain expertise.

What if when given instructions from people, robots could autonomously write their own code to interact with the world? It turns out that the latest generation of language models, such as PaLM, are capable of complex reasoning and have also been trained on millions of lines of code. Given natural language instructions, current language models are highly proficient at writing not only generic code but, as we’ve discovered, code that can control robot actions as well. When provided with several example instructions (formatted as comments) paired with corresponding code (via in-context learning), language models can take in new instructions and autonomously generate new code that re-composes API calls, synthesizes new functions, and expresses feedback loops to assemble new behaviors at runtime. More broadly, this suggests an alternative approach to using machine learning for robots that (i) pursues generalization through modularity and (ii) leverages the abundance of open-source code and data available on the Internet.

Given code for an example task (left), language models can re-compose API calls to assemble new robot behaviors for new tasks (right) that use the same functions but in different ways.
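To make this format concrete, here is a minimal, hypothetical sketch of such a few-shot prompt in Python. The robot API names (get_obj_names, put_first_on_second) and the prompt text are illustrative placeholders rather than the actual interface used in this work.

```python
# A minimal sketch of the few-shot prompting format described above.
# The robot API names and the code-writing-model call are hypothetical.

FEW_SHOT_PROMPT = '''
# Python robot control script.
from robot_api import get_obj_names, put_first_on_second

# move the red block onto the blue bowl.
put_first_on_second("red block", "blue bowl")

# put all the blocks into the green bowl.
for name in get_obj_names():
    if "block" in name:
        put_first_on_second(name, "green bowl")
'''

def build_prompt(instruction: str) -> str:
    """Append a new instruction as a comment; a code-writing language model
    would then complete the prompt with policy code that re-composes the
    same API calls (e.g., new loops or conditions) for the unseen task."""
    return FEW_SHOT_PROMPT + f"\n# {instruction}\n"

print(build_prompt("stack the blocks in the empty bowl"))
# The model's completion would then be executed on the robot as the new policy.
```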

To explore this possibility, we developed Code as Policies (CaP), a robot-centric formulation of language model-generated programs executed on physical systems. CaP extends our prior work, PaLM-SayCan, by enabling language models to complete even more complex robotic tasks with the full expression of general-purpose Python code. With CaP, we propose using language models to directly write robot code through few-shot prompting. Our experiments demonstrate that outputting code led to improved generalization and task performance over directly learning robot tasks and outputting natural language actions. CaP allows a single system to perform a variety of complex and varied robotic tasks without task-specific training.




We demonstrate, across several robot systems, including a robot from Everyday Robots, that language models can autonomously interpret language instructions to generate and execute CaPs that represent reactive low-level policies (e.g., proportional-derivative or impedance controllers) and waypoint-based policies (e.g., vision-based pick and place, trajectory-based control).

A Different Way to Think about Robot Generalization

To generate code for a new task given natural language instructions, CaP uses a code-writing language model that, when prompted with hints (i.e., import statements that inform which APIs are available) and examples (instruction-to-code pairs that present few-shot "demonstrations" of how instructions should be converted into code), writes new code for new instructions. Central to this approach is hierarchical code generation, which prompts language models to recursively define new functions, accumulate their own libraries over time, and self-architect a dynamic codebase. Hierarchical code generation improves the state of the art on both robotics and standard code-generation benchmarks in natural language processing (NLP), reaching 39.8% pass@1 on HumanEval, a benchmark of hand-written coding problems used to measure the functional correctness of synthesized programs.
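As a rough illustration of the idea (not the paper’s implementation), the sketch below shows one way hierarchical code generation could be wired up: parse the generated code, find calls to functions that do not yet exist, ask the model to define each of them, and accumulate the definitions into a shared scope that acts as the growing library. The complete_code callable is a hypothetical stand-in for a call to a code-writing model.

```python
# Rough sketch of hierarchical code generation (illustrative only).
import ast
import builtins

def undefined_calls(code: str, scope: dict) -> set:
    """Names that are called in `code` but not defined in `scope` or builtins."""
    tree = ast.parse(code)
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    return {name for name in called
            if name not in scope and not hasattr(builtins, name)}

def hierarchical_generate(instruction: str, scope: dict, complete_code) -> str:
    """Write code for `instruction`, recursively asking the model (via
    `complete_code`, a hypothetical placeholder) to define any functions the
    code calls that do not exist yet, accumulating them into `scope`."""
    code = complete_code(f"# {instruction}")
    for name in undefined_calls(code, scope):
        helper = hierarchical_generate(f"define function: {name}", scope, complete_code)
        exec(helper, scope)  # add the newly defined helper to the library
    return code
```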

Code-writing language models can express a variety of arithmetic operations and feedback loops grounded in language. Pythonic language model programs can use classic logic structures, e.g., sequences, selection (if/else), and loops (for/while), to assemble new behaviors at runtime. They can also use third-party libraries to interpolate points (NumPy), analyze and generate shapes for spatial-geometric reasoning (Shapely), and so on. These models not only generalize to new instructions, but can also prescribe precise values (e.g., velocities) for ambiguous descriptions ("faster" and "to the left") depending on the context, eliciting behavioral commonsense.
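For example, a generated program for an instruction like "move a bit to the left, but faster" might look something like the sketch below. The control helpers (get_pos, goto) and the specific numbers are hypothetical; the point is the mix of control flow, NumPy interpolation and Shapely geometry described above.

```python
# Illustrative example of generated policy code (hypothetical helpers).
import numpy as np
from shapely.geometry import Point, Polygon

table = Polygon([(0, 0), (0.6, 0), (0.6, 0.4), (0, 0.4)])  # workspace bounds (meters)

def move_a_bit_left_faster(get_pos, goto, default_speed=0.1):
    """'move a bit to the left, faster': ambiguous words are grounded into
    concrete values (a 5 cm offset, 1.5x the default speed)."""
    x, y = get_pos()
    target = np.array([x - 0.05, y])                # "a bit to the left"
    if not table.contains(Point(*target)):          # stay on the table
        target = np.clip(target, [0.0, 0.0], [0.6, 0.4])
    # interpolate intermediate waypoints toward the target
    for waypoint in np.linspace([x, y], target, num=10):
        goto(waypoint, speed=1.5 * default_speed)   # "faster"
```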

Code as Policies uses code-writing language models to map natural language instructions to robot code that completes tasks. Generated code can call existing perception and action APIs and third-party libraries, or define new functions at runtime.

CaP generalizes at a specific layer in the robot: interpreting natural language instructions, processing perception outputs (e.g., from off-the-shelf object detectors), and then parameterizing control primitives. This fits into systems with factorized perception and control, and imparts a degree of generalization (acquired from pre-trained language models) without the magnitude of data collection needed for end-to-end robot learning. CaP also inherits language model capabilities that are unrelated to code writing, such as supporting instructions with non-English languages and emojis.

CaP inherits the capabilities of language models, such as multilingual and emoji support.

By characterizing the types of generalization encountered in code generation problems, we can also study how hierarchical code generation improves generalization. For example, "systematicity" evaluates the ability to recombine known parts to form new sequences, "substitutivity" evaluates robustness to synonymous code snippets, while "productivity" evaluates the ability to write policy code longer than those seen in the examples (e.g., for new long horizon tasks that may require defining and nesting new functions). Our paper presents a new open-source benchmark to evaluate language models on a set of robotics-related code generation problems. Using this benchmark, we find that, in general, bigger models perform better across most metrics, and that hierarchical code generation improves "productivity" generalization the most.
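As an illustration of how the functional correctness of synthesized programs can be measured (the idea behind metrics like pass@1), the sketch below runs one generated candidate per problem against hand-written test cases. It is a generic harness under assumed inputs, not the RoboCodeGen benchmark itself.

```python
# Generic sketch of a pass@1-style check: execute one generated candidate
# per problem and count how many pass their tests. `problems` and
# `generate_code` are hypothetical placeholders.

def pass_at_1(problems, generate_code) -> float:
    """problems: list of (prompt, test_fn) pairs, where test_fn raises
    AssertionError if the synthesized code is functionally incorrect."""
    passed = 0
    for prompt, test_fn in problems:
        candidate = generate_code(prompt)   # one sample per problem
        scope = {}
        try:
            exec(candidate, scope)          # define the candidate function(s)
            test_fn(scope)                  # run hand-written assertions
            passed += 1
        except Exception:
            pass                            # failed to run or failed a test
    return passed / len(problems)
```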

Performance on our RoboCodeGen Benchmark across different generalization types. The larger model (Davinci) performs better than the smaller model (Cushman), with hierarchical code generation improving productivity the most.

We're also excited about the potential for code-writing models to express cross-embodied plans for robots with different morphologies that perform the same task differently depending on the available APIs (perception action spaces), which is an important aspect of any robotics foundation model.

Language model code-generation exhibits cross-embodiment capabilities, completing the same task in different ways depending on the available APIs (that define perception action spaces).

Limitations

Code as policies today are restricted by the scope of (i) what the perception APIs can describe (e.g., few visual-language models to date can describe whether a trajectory is "bumpy" or "more C-shaped"), and (ii) which control primitives are available. Only a handful of named primitive parameters can be adjusted without over-saturating the prompts. Our approach also assumes all given instructions are feasible, and we cannot tell if generated code will be useful a priori. CaPs also struggle to interpret instructions that are significantly more complex or operate at a different abstraction level than the few-shot examples provided to the language model prompts. Thus, for example, in the tabletop domain, it would be difficult for our specific instantiation of CaPs to "build a house with the blocks" since there are no examples of building complex 3D structures. These limitations point to avenues for future work, including extending visual language models to describe low-level robot behaviors (e.g., trajectories) or combining CaPs with exploration algorithms that can autonomously add to the set of control primitives.


Open-Source Release

We have released the code needed to reproduce our experiments and an interactive simulated robot demo on the project website, which also contains additional real-world demos with videos and generated code.


Conclusion

Code as policies is a step towards robots that can modify their behaviors and expand their capabilities accordingly. This can be enabling, but the flexibility also raises potential risks since synthesized programs (unless manually checked per runtime) may result in unintended behaviors with physical hardware. We can mitigate these risks with built-in safety checks that bound the control primitives that the system can access, but more work is needed to ensure new combinations of known primitives are equally safe. We welcome broad discussion on how to minimize these risks while maximizing the potential positive impacts towards more general-purpose robots.


Acknowledgements

This research was done by Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence and Andy Zeng. Special thanks to Vikas Sindhwani and Vincent Vanhoucke for helpful feedback on writing, and to Chad Boodoo for operations and hardware support. An early preprint is available on arXiv.

Source: Google AI Blog


Google for Education transformation reports window open for customers worldwide

What’s changing

Google for Education transformation reports are available for K-12 Google Workspace for Education customers worldwide, at no cost. Note: transformation reports are only available in English at this time. 

The reporting window is open from Nov 2, 2022 through Dec 31, 2022. Workspace admins will benefit from: 

  • Immediate access: Google Workspace for Education super admins can log in today and immediately view their custom report. 
  • Real-time customization: You can adjust settings in real time, including your two 12-week product data reporting windows and the number of teachers and students in your organization. (The report will be locked for editing after Dec 31, 2022.) 

See below for more information on generating your custom transformation report. 


Who’s impacted 

Admins 


Why you’d use it 

The transformation report is a free tool designed to help quantify your organization’s Google for Education implementation across our products and programs. Semester-based reports track usage trends over time and make it easy to understand how your organization is using Google Workspace for Education and Chromebooks, and how it is progressing through educator certification programs. Additionally, each report includes links to free resources aimed at supporting your Google implementation. 




Getting started 

  • Admins: Google Workspace for Education super admins can log in to the transformation report tool starting Nov 2, 2022 to view their custom report. Upon login, Admins should: 
    • Update their student enrollment and faculty count.
    • Customize the product reporting windows — choose the two 12-week periods (current and previous) that make the most sense for your organization. Graphs in the report will display data comparing usage across these two windows. 
    • Click “View Report” 
  • End users: No action required. 

Rollout pace 

  • Transformation reports are available now for all users 

Availability 

  • Available to K-12 Google Workspace for Education Fundamentals, Education Standard, Education Plus, and the Teaching and Learning Upgrade customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers 

Resources 

New features for parents and kids on Google Assistant

Earlier this week, I was in the kitchen watching my kids — at the (very fun) ages of seven and 11 — engaged in a conversation with our Google Assistant. My son, who has recently discovered a love of karaoke, asked it to play music so he could practice singing along to his favorite band, BTS. He and his sister ask it all kinds of questions: “How tall is the Eiffel Tower?” “How much do elephants weigh?” “Where was the Declaration of Independence signed?”

Whether we’re dictating a text message in the car or starting a timer while cooking at home, one thing is true: Voice plays an increasingly important role in the way we get things done — not just for us, but for our kids, too. It allows them to indulge their curiosities, learn new things and tap into their creative, inquisitive minds — all without having to look at a screen. As a mom, I see firsthand how kids’ relationship with technology starts by discovering the power of their own voice. And as today’s kids grow up in a world surrounded by technology, we want to help them have safer, educational and natural conversational experiences with Assistant. Here’s how we’re doing it.

Parental controls for safer, age-appropriate content

Since we know kids — like my own — tend to use their families’ shared devices, we take very seriously our responsibility to help parents protect them from harmful and inappropriate content. Building on that long-standing commitment, we’re rolling out a number of new features that will make it safer for your kids to interact with Assistant.

To give parents more control and peace of mind over the interactions their children have on Google speakers and smart displays, we’re introducing parental controls for Google Assistant. In the coming weeks, through the Google Home, Family Link and Google Assistant apps on Android and iOS, you’ll be able to modify media settings, enable or disable certain Assistant functionality and set up downtime for your kids.

The home screen of Google Assistant parental controls displaying different options, including Media, Assistant features, Downtime and Assistant devices.

After selecting your child’s account, you can choose the music and video providers they can access — such as YouTube Kids, YouTube and YouTube Music — and your kids will only be able to explore content from those pre-selected providers. You can also decide whether you want your children to listen to news and podcasts on their devices.

Through parental controls, you can also control the specific Assistant features your kids can use — like restricting them from making phone calls or choosing what kind of answers they get from Assistant. And to encourage a healthy relationship between kids and technology, just say, “Hey Google, open Assistant settings.” From there, navigate to parental controls, and you can block off time when they shouldn’t use their devices, just like you can do on personal Android devices and Chromebooks. Whether you have parental controls turned on or not, we always make sure you’re in control of your privacy settings.

Educational and fun conversations with Kids Dictionary

“What does telescope mean?” “What is the definition of ‘fluorescent’?”

Kids are naturally inquisitive and often turn to their Assistant to define words like these when they’re not sure what they mean. To help make those interactions even more engaging, we're introducing Kids Dictionary, which gives simplified and age-appropriate answers across speakers, smart displays and mobile devices.

With Kids Dictionary, children’s interactions with Assistant can be both educational and fun, allowing them to fuel their interests and learn new things. When your child is voice matched and Assistant detects their voice asking about a definition, it will automatically respond using this experience in Kids Dictionary.

A text bubble asks “Hey Google, what does telescope mean?” A Nest Hub Max is shown next to the text bubble, displaying a picture of a telescope and its definition.

Whether they’re doing their homework or simply curious about a new word they saw in a book, they’re only a “Hey Google” away from a little more help.

Kid-friendly voices for more engaging interactions

Kids today are growing up with technology, so it’s important that their experiences are developmentally appropriate. In addition to our increased efforts around safety and education, we’re also introducing four new kid-friendly voices. These new voices, which we designed alongside kids and parents, were developed with a diverse range of accents to reflect different communities and ways of speaking. And like a favorite teacher, these voices speak in slower and more expressive styles to help with storytelling and aid comprehension.

To activate one of Assistant’s new kid-friendly voices, kids can simply say, “Hey Google, change your voice!” Parents can also help their child navigate to Assistant settings, where they can select a new voice from the options available.

Like all parents, I’m always amazed by my kids’ insatiable curiosity. And every day, I see that curiosity come to life in the many questions they ask our Assistant. We’re excited to not only provide a safer experience, but an educational and engaging one, too — and to continue our work to truly build an Assistant for everyone.

Long Term Support Channel Update for ChromeOS

LTS-102 is being updated in the LTS channel to 102.0.5005.184 (Platform Version: 14695.142.0) for most ChromeOS devices. Want to know more about Long-term Support? Click here.


This update contains Security fixes, including:

1051198 High CVE-2022-3044 Inappropriate implementation in Site Isolation
1320139 High CVE-2022-3306 Use-after-free in Ash
1319229 High CVE-2022-3305 Use-after-free in Ash
1368076 High CVE-2022-3446 Heap buffer overflow in WebSQL



Giuliana Pritchard

Google Chrome OS

Upcoming changes to targeting expansion in Display & Video 360 API

On November 7, 2022, optimized targeting will gradually begin replacing targeting expansion for display, video, and audio line items under a Display & Video 360 partner, with the new feature launched for all partners by November 9, 2022. We will be making changes to the Display & Video 360 API to reflect this. These changes may impact the existing configurations of your resources and the behavior of your currently successful requests.

Read the optimized targeting and targeting expansion guides to understand the differences between optimized targeting and targeting expansion. Optimized targeting is not available for over-the-top line items, or line items that use fixed bidding.

There will be no structural changes to the existing targetingExpansion field or TargetingExpansionConfig object in Display & Video 360 API LineItem resources. Once optimized targeting replaces targeting expansion for your partner, these fields will be used to manage optimized targeting in the following manner:
  • The targetingExpansionLevel field will only support two possible values:
    • NO_EXPANSION: optimized targeting is off
    • LEAST_EXPANSION: optimized targeting is on
  • NO_EXPANSION will be the default value for the targetingExpansionLevel field and will be automatically assigned if you do not set the field
  • If you set targetingExpansionLevel to one of the following values, it will automatically be reset to LEAST_EXPANSION:
    • SOME_EXPANSION
    • BALANCED_EXPANSION
    • MORE_EXPANSION
    • MOST_EXPANSION
  • excludeFirstPartyAudience will continue to set whether first-party audiences should be excluded from targeting. This will now apply to optimized targeting instead of targeting expansion.
  • If you turn on optimized targeting for an ineligible line item, the request will not return an error and the change will persist. However, you must update the line item to be eligible before it will use optimized targeting when serving.
  • Optimized targeting will not automatically be turned on for eligible line items created or updated by the Display & Video 360 API.
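For illustration, once optimized targeting is live for your partner, a line item could be switched on through the API roughly as sketched below. This is a hedged sketch assuming the Google API Python client and already-configured credentials; the advertiser and line item IDs are placeholders, and it is not official sample code.

```python
# Hedged sketch: patch a line item's targetingExpansion config via the
# Display & Video 360 API using the Google API Python client.
# Credential setup is omitted; IDs are placeholders.
from googleapiclient.discovery import build

def enable_optimized_targeting(credentials, advertiser_id: str, line_item_id: str):
    """Turn optimized targeting on for one line item (per the mapping above,
    LEAST_EXPANSION = on, NO_EXPANSION = off)."""
    service = build("displayvideo", "v1", credentials=credentials)
    return (
        service.advertisers()
        .lineItems()
        .patch(
            advertiserId=advertiser_id,
            lineItemId=line_item_id,
            updateMask="targetingExpansion",
            body={
                "targetingExpansion": {
                    "targetingExpansionLevel": "LEAST_EXPANSION",
                    "excludeFirstPartyAudience": False,
                }
            },
        )
        .execute()
    )
```

Keep in mind that, as noted above, turning optimized targeting on for an ineligible line item will persist without an error but will not take effect until the line item becomes eligible.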
We will also be updating the configurations of existing line items. To prepare for this change, we recommend that you turn on automated bidding for line items currently using fixed bidding with targeting expansion before November 7, 2022, so that they can continue using audience expansion to improve campaign performance.

If you have questions regarding these changes or need help with these new features, please contact us using our support contact form.

Rocky Mountain High

Next up, Lakewood, Colorado! 


It’s official. Google Fiber is coming to Lakewood, Colorado. It’s not our first foray into the Rocky Mountain State; in fact, we’ve served customers with Google Fiber Webpass in the Denver area since 2017. But this will be our first service in Lakewood, delivered through our first fiber-to-the-home network in the state. 




Residents in Lakewood have been asking for more competition and options for internet service. We are grateful to the City for working with us on a non-exclusive right-of-way use agreement that enables us to deploy the network efficiently.  


Right now, we’re getting to work on detailed engineering designs, with construction beginning in 2023.  Google Fiber is fast, but network construction is … well, we’ll go as fast as we can, while prioritizing safety and minimizing disruption to traffic and to neighborhoods. Local residents who want more information on service availability or the construction process can sign up for updates.


Lakewood, Colorado, here we come!


Posted by Sasha Petrovic, Southwest Region General Manager


Introducing Developer Journey: November 2022

Posted by Lyanne Alfaro, DevRel Program Manager, Google Developer Studio

Developer Journey is a new monthly series to spotlight diverse and global developers sharing relatable challenges, opportunities, and wins in their journey. Every month, we will spotlight developers around the world, the Google tools they leverage, and the kind of products they are building.

We are kicking off #DevJourney in November to give members of our community the chance to share their stories through our social platforms. This month, it’s our pleasure to feature four members whose work spans areas including the Google Developer Experts program, Android, and Cloud. Enjoy reading through their entries below and be on the lookout on social media platforms, where we will also showcase their work.

Headshot of Sierra OBryan smiling
Sierra OBryan

Google Developer Expert, Android
Cincinnati, OH
Twitter and Instagram: @_sierraOBryan

What Google tools have you used?

As an Android developer, I use many Google tools every day like Jetpack Compose and other Android libraries, Android Studio, and Material Design. I also like to explore some of the other Google tools in personal projects. I’ve built a Flutter app, poked around in Firebase, and trained my own ML model using the model maker.

Which tool has been your favorite to use? Why?

It’s hard to choose one but I’m really excited about Jetpack Compose! It’s really exciting to be able to work with a new and evolving framework with so much energy and input coming from the developer community. Compose makes it easier to quickly build things that previously could be quite complex like animations and custom layouts, and has some very cool tooling in Android Studio like Live Edit and recomposition counts; all of which improve developer efficiency and app quality. One of my favorite things about Compose in general is that I think it will make Android development more accessible to more people because it is more intuitive and easier to get started and so we’ll see the Android community continue to grow with new perspectives and backgrounds bringing in new ideas.

Google also provides a lot of really helpful tools for building more accessible mobile apps and I’m really glad these important tools also exist! The Accessibility Scanner is available on Google Play and can identify some common accessibility pitfalls in your app with tips about how to fix them and why it’s important. The “Accessibility in Jetpack Compose” code lab is a great starting place for learning more about these concepts.

Please share with us about something you’ve built in the past using Google tools.

A favorite personal project is a (very) simple flower identifying app built using ML Kit’s Image Labeling API and Android. After the 2020 ML-focused Android Developer Challenge, I was very curious about ML Kit but also still quite intimidated by the idea of machine learning. It was surprisingly easy to follow the documentation to build and tinker with a custom model and then add it to an Android app. I just recently migrated the app to Jetpack Compose.

What advice would you give someone starting in their developer journey?

Find a community! Like most things, developing is more fun with friends.


Photo of Harun Wangereka smiling
Harun Wangereka

Google Developer Expert, Android

What Google tools have you used?

I'm an Android Engineer by profession. The tools I use on a day-to-day basis are Android as the framework, Android Studio as the IDE, and some of the Jetpack Libraries from the Android Team at Google.

Which tool has been your favorite to use? Why?

Jetpack libraries. I love these libraries because they solve most of the common pain points we, as Android developers, faced before they came along. They also concisely solve them and provide best practices for Android developers to follow.

Please share with us about something you've built in the past using Google tools.

At my workplace, Apollo Agriculture, I collaborate with cross-functional teams to define, design and ship new features for the agent's and agro-dealer’s Android apps, which are entirely written in Kotlin. We have Apollo for Agents, an app for agents to perform farmer-related tasks and Apollo Checkout, which helps farmers check out various Apollo products. With these two apps, I'm assisting Apollo Agriculture to make financing for small-scale farmers accessible to everyone.

What advice would you give someone starting in their developer journey?

Be nice to yourself as you learn. The journey can be quite hard at times but remember to give yourself time. You can never know all the things at once, so try to learn one thing at a time. Do it consistently and it will pay off in the very end. Remember also to join existing developer communities in your area. They help a lot!


Selfie of Richard Knowles at the beach
Richard Knowles

Android Developer
Los Angeles, CA

What Google tools have you used?

I’ve been building Android apps since 2011, when I was in graduate school studying for my Master’s Degree in Computer Engineering. I built my first Android app using Eclipse which seemed to be a great tool at the time, at least until Google’s Android Studio was released for the first time in 2014. Android Studio is such a powerful and phenomenal IDE! I’ve been using it to build apps for Android phones, tablets, smartwatches, and TV. It is amazing how the Android Accessibility Test Framework integrates with Android Studio to help us catch accessibility issues in our layouts early on.

Which tool has been your favorite to use? Why?

My favorite tool by far is the Accessibility Scanner. As a developer with a hearing disability, accessibility is very important to me. I was born with a sensorineural hearing loss, and wore hearing aids up until I was 18 when I decided to get a cochlear implant. I am a heavy closed-captioning user and I rely on accessibility every single day. When I was younger, before the smartphone era, even through the beginning of the smartphone era, it was challenging for me to fully enjoy TV or videos that didn’t have captions. I’m so glad that the world is starting to adapt to those with disabilities and the awareness of accessibility has increased. In fact, I chose the software engineering field because I wanted to create software or apps that would improve other people’s lives, the same way that technology has made my life easier. Making sure the apps I build are accessible has always been my top priority. This is why the Accessibility Scanner is one of my favorite tools: It allows me to efficiently test how accessible my user-facing changes are, especially for those with visual disabilities.

Please share with us about something you’ve built in the past using Google tools.

As an Android engineer on Twitter’s Accessibility Experience Team, one of our initiatives is to improve the experience of image descriptions and the use of alt text. Did you know that when you put images in your Tweets on Twitter, you can add descriptions to make them accessible to people who can’t see images? If yes, that is great! But do you always remember to do it? Don’t worry if not - you’re not alone. Many people including myself forget to add image descriptions. So, we implemented Alt Text reminders which allow users to opt in to be notified when they tweet images without descriptions. We also have been working to expose alt text for all images and GIFs. What that means is, we are now displaying an “ALT” badge on images that have associated alternative text or image descriptions. In general, alt text is primarily used for Talkback users but we wanted to allow users not using a screen reader to know which images have alternative text, and of course allow them to view the image description by selecting the “ALT” badge. This feature helped achieve two things: 1) Users that may have low-vision or other disabilities that would benefit from available alternative text can now access that text; 2) Users can know which images have alternative text before retweeting those images. I personally love this feature because it increases the awareness of Alt text.

What advice would you give someone starting in their developer journey?

What an exciting time to start! I have three tips I'd love to share:

1) Don’t start coding without reviewing the specifications and designs carefully. Draw and map out the architecture and technical design of your work before you jump into the code. In other words, work smarter, not harder.

2) Take the time to read through the developer documentation and the source code. You will become an expert more quickly if you understand what is happening behind the scenes. When you call a function from a library or SDK, get in the habit of looking at the source code and implementation of that function so that you can not only learn as you code, but also find opportunities to improve performance.

3) Learn about accessibility as early as possible, preferably at the same time as learning everything else, so that it becomes a habit and not something you have to force later on.


Headshot of Lynn Langit smiling
Lynn Langit

GDE/Cloud
Minnesota
Twitter: @lynnlangit

What Google tools have you used?

So many! My favorite Google Cloud services are Cloud Run, BigQuery and Dataproc. My favorite tools are the Cloud Shell Editor, SSH-in-browser for Compute Engine and BigQuery Execution Details.

Which tool has been your favorite to use? Why?

I love to use the open-source Variant Transforms tool for VCF (genomic) data files. This tool gets bioinformaticians working with BigQuery quickly: researchers use Variant Transforms to validate and load VCF files into BigQuery, and it supports genome-scale data analysis workloads that can contain hundreds of thousands of files, millions of genomic samples, and billions of input records.

Please share with us about something you’ve built in the past using Google tools.

I have been working with teams around the world to build, scale, and deploy multiple genomic-scale data pipelines for human health. Recent use cases are data analysis in support of Covid or cancer drug development.

What advice would you give someone starting in their developer journey?

Expect to spend 20-25% of your professional time learning for the duration of your career. All public cloud services, including Google Cloud, evolve constantly. Building effectively requires knowing both cloud patterns and services at a deep level.