Accurate Alpha Matting for Portrait Mode Selfies on Pixel 6

Image matting is the process of extracting a precise alpha matte that separates foreground and background objects in an image. This technique has been traditionally used in the filmmaking and photography industry for image and video editing purposes, e.g., background replacement, synthetic bokeh and other visual effects. Image matting assumes that an image is a composite of foreground and background images, and hence, the intensity of each pixel is a linear combination of the foreground and the background.
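
Formally, this compositing assumption is usually written as I = αF + (1 − α)B, where I is the observed pixel color, F and B are the foreground and background colors, and α ∈ [0, 1] is the per-pixel transparency value (the alpha matte) that matting aims to estimate.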

In the case of traditional image segmentation, the image is segmented in a binary manner, in which a pixel either belongs to the foreground or background. This type of segmentation, however, is unable to deal with natural scenes that contain fine details, e.g., hair and fur, which require estimating a transparency value for each pixel of the foreground object.

Alpha mattes, unlike segmentation masks, are usually extremely precise, preserving strand-level hair details and accurate foreground boundaries. While recent deep learning techniques have shown their potential in image matting, many challenges remain, such as generating accurate ground truth alpha mattes, improving generalization on in-the-wild images, and performing inference on high-resolution images on mobile devices.

With the Pixel 6, we have significantly improved the appearance of selfies taken in Portrait Mode by introducing a new approach to estimate a high-resolution and accurate alpha matte from a selfie image. When synthesizing the depth-of-field effect, the usage of the alpha matte allows us to extract a more accurate silhouette of the photographed subject and have a better foreground-background separation. This allows users with a wide variety of hairstyles to take great-looking Portrait Mode shots using the selfie camera. In this post, we describe the technology we used to achieve this improvement and discuss how we tackled the challenges mentioned above.

Portrait Mode effect on a selfie shot using a low-resolution and coarse alpha matte compared to using the new high-quality alpha matte.

Portrait Matting
In designing Portrait Matting, we trained a fully convolutional neural network consisting of a sequence of encoder-decoder blocks to progressively estimate a high-quality alpha matte. The input RGB image is concatenated with a coarse alpha matte (generated using a low-resolution person segmenter) and passed to the network. The new Portrait Matting model uses a MobileNetV3 backbone and a shallow (i.e., having a low number of layers) decoder, operating on a low-resolution image, to first predict a refined low-resolution alpha matte. Then a shallow encoder-decoder and a series of residual blocks process the high-resolution image together with the refined alpha matte from the previous step. The shallow encoder-decoder relies more on lower-level features than the MobileNetV3 backbone, focusing on high-resolution structural features to predict final transparency values for each pixel. In this way, the model is able to refine an initial foreground alpha matte and accurately extract very fine details like hair strands. The proposed neural network architecture runs efficiently on Pixel 6 using TensorFlow Lite.

The network predicts a high-quality alpha matte from a color image and an initial coarse alpha matte. We use a MobileNetV3 backbone and a shallow decoder to first predict a refined low-resolution alpha matte. Then we use a shallow encoder-decoder and a series of residual blocks to further refine the initially estimated alpha matte.
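As a rough illustration of this two-stage design (not the production model), the sketch below wires up the same structure in TensorFlow/Keras: a low-resolution stage that refines the coarse alpha, followed by a high-resolution stage with a shallow encoder-decoder and residual blocks. A small convolutional stack stands in for the MobileNetV3 backbone, and all layer counts, channel widths and resolutions are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers


def conv_block(x, filters, strides=1):
    return layers.Conv2D(filters, 3, strides=strides, padding="same",
                         activation="relu")(x)


def residual_block(x, filters):
    # High-resolution residual refinement: add a learned correction to the features.
    y = conv_block(x, filters)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.ReLU()(layers.Add()([x, y]))


def build_portrait_matting_sketch(lr_size=256, hr_size=1024):
    # Stage 1: refine a coarse alpha at low resolution.
    # Input: low-resolution RGB concatenated with the coarse alpha (4 channels).
    lr_input = layers.Input((lr_size, lr_size, 4), name="rgb_plus_coarse_alpha")
    x = conv_block(lr_input, 16, strides=2)   # stand-in for the MobileNetV3 encoder
    x = conv_block(x, 32, strides=2)
    x = conv_block(x, 64, strides=2)
    x = conv_block(x, 32)                     # shallow decoder
    x = layers.UpSampling2D(8, interpolation="bilinear")(x)
    refined_lr_alpha = layers.Conv2D(1, 3, padding="same", activation="sigmoid",
                                     name="refined_lr_alpha")(x)

    # Stage 2: shallow encoder-decoder plus residual blocks at high resolution,
    # focusing on fine structure such as hair strands.
    hr_input = layers.Input((hr_size, hr_size, 3), name="rgb_high_res")
    up_alpha = layers.UpSampling2D(hr_size // lr_size,
                                   interpolation="bilinear")(refined_lr_alpha)
    y = layers.Concatenate()([hr_input, up_alpha])
    y = conv_block(y, 16, strides=2)
    y = conv_block(y, 32, strides=2)
    for _ in range(3):
        y = residual_block(y, 32)
    y = layers.UpSampling2D(4, interpolation="bilinear")(y)
    alpha_matte = layers.Conv2D(1, 3, padding="same", activation="sigmoid",
                                name="alpha_matte")(y)
    return tf.keras.Model([lr_input, hr_input], [refined_lr_alpha, alpha_matte])

A real training setup would also need supervision on both predicted mattes, which is omitted here.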

Most recent deep learning work on image matting relies on manually annotated per-pixel alpha mattes, generated with image editing tools or green screens, to separate the foreground from the background. This process is tedious and does not scale to large datasets. It also often produces inaccurate alpha mattes and foreground images that are contaminated (e.g., by reflected light from the background, or “green spill”). Moreover, it does nothing to ensure that the lighting on the subject appears consistent with the lighting in the new background environment.

To address these challenges, Portrait Matting is trained using a high-quality dataset generated with a custom volumetric capture system, Light Stage. Compared with previous datasets, this one is more realistic because relighting allows the illumination of the foreground subject to match the background. Additionally, we supervise the training of the model using pseudo–ground truth alpha mattes from in-the-wild images to improve model generalization, as explained below. This ground truth data generation process is one of the key components of this work.

Ground Truth Data Generation
To generate accurate ground truth data, Light Stage produces near-photorealistic models of people using a geodesic sphere outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors. Together with Light Stage data, we compute accurate alpha mattes using time-multiplexed lights and a previously recorded “clean plate”. This technique is also known as ratio matting.

This method works by recording an image of the subject silhouetted against an illuminated background as one of the lighting conditions. In addition, we capture a clean plate of the illuminated background. The silhouetted image, divided by the clean plate image, provides a ground truth alpha matte.
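As a small illustration, the ratio computation can be sketched with NumPy, assuming aligned grayscale float images of the backlit (silhouetted) capture and the clean plate. In the classic ratio-matting formulation, the ratio gives the fraction of backlight transmitted at each pixel, and alpha follows as one minus that fraction.

import numpy as np

def ratio_matte(silhouetted, clean_plate, eps=1e-6):
    # Per-pixel ratio of the backlit (silhouetted) capture to the clean plate:
    # where the subject blocks the backlight the ratio is near 0, where the
    # background is unoccluded it is near 1. Alpha is one minus this
    # transmitted fraction in the classic ratio-matting formulation.
    ratio = silhouetted / np.maximum(clean_plate, eps)
    return np.clip(1.0 - ratio, 0.0, 1.0)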

Then, we extrapolate the recorded alpha mattes to all the camera viewpoints in Light Stage using a deep learning–based matting network that leverages captured clean plates as an input. This approach allows us to extend the alpha mattes computation to unconstrained backgrounds without the need for specialized time-multiplexed lighting or a clean background. This deep learning architecture was solely trained using ground truth mattes generated using the ratio matting approach.

Computed alpha mattes from all camera viewpoints at the Light Stage.

Leveraging the reflectance field for each subject and the alpha matte generated with our ground truth matte generation system, we can relight each portrait using a given HDR lighting environment. We composite these relit subjects into backgrounds corresponding to the target illumination following the alpha blending equation. The background images are then generated from the HDR panoramas by positioning a virtual camera at the center and ray-tracing into the panorama from the camera’s center of projection. We ensure that the projected view into the panorama matches its orientation as used for relighting. We use virtual cameras with different focal lengths to simulate the different fields-of-view of consumer cameras. This pipeline produces realistic composites by handling matting, relighting, and compositing in one system, which we then use to train the Portrait Matting model.

Composited images on different backgrounds (high-resolution HDR maps) using ground truth generated alpha mattes.
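The compositing step itself is the alpha blending equation from above applied per pixel. A minimal sketch, assuming float images in [0, 1] for the relit foreground, its alpha matte, and the background rendered from the HDR panorama (names illustrative):

import numpy as np

def composite(relit_foreground, alpha, background):
    # Standard alpha blending: each pixel is a linear mix of the relit
    # foreground and the background rendered from the HDR panorama.
    a = alpha[..., np.newaxis] if alpha.ndim == 2 else alpha
    return a * relit_foreground + (1.0 - a) * background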

Training Supervision Using In-the-Wild Portraits
To bridge the gap between portraits generated using Light Stage and in-the-wild portraits, we created a pipeline to automatically annotate in-the-wild photos generating pseudo–ground truth alpha mattes. For this purpose, we leveraged the Deep Matting model proposed in Total Relighting to create an ensemble of models that computes multiple high-resolution alpha mattes from in-the-wild images. We ran this pipeline on an extensive dataset of portrait photos captured in-house using Pixel phones. Additionally, during this process we performed test-time augmentation by doing inference on input images at different scales and rotations, and finally aggregating per-pixel alpha values across all estimated alpha mattes.
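A minimal sketch of this ensemble plus test-time augmentation idea is given below. It assumes a list of matting models with a predict method, scaling and 90-degree rotations as the augmentations, and a simple per-pixel mean as the aggregation; all of these choices are illustrative rather than the production pipeline.

import numpy as np
from scipy import ndimage

def pseudo_ground_truth_alpha(image, models, scales=(0.5, 1.0, 1.5),
                              quarter_turns=(0, 1, 2, 3)):
    # Run every model in the ensemble over scaled and rotated copies of the
    # image, map each predicted matte back to the original frame, and
    # aggregate the per-pixel alpha values.
    height, width = image.shape[:2]
    estimates = []
    for model in models:
        for scale in scales:
            for k in quarter_turns:
                augmented = np.rot90(ndimage.zoom(image, (scale, scale, 1), order=1), k)
                alpha = model.predict(augmented)   # assumed to return an HxW matte
                alpha = np.rot90(alpha, -k)        # undo rotation
                alpha = ndimage.zoom(alpha, (height / alpha.shape[0],
                                             width / alpha.shape[1]), order=1)
                estimates.append(np.clip(alpha, 0.0, 1.0))
    return np.mean(estimates, axis=0)              # per-pixel aggregation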

Generated alpha mattes are visually evaluated with respect to the input RGB image. The alpha mattes that are perceptually correct, i.e., following the subject's silhouette and fine details (e.g., hair), are added to the training set. During training, both datasets are sampled using different weights. Using the proposed supervision strategy exposes the model to a larger variety of scenes and human poses, improving its predictions on photos in the wild (model generalization).

Estimated pseudo–ground truth alpha mattes using an ensemble of Deep Matting models and test-time augmentation.

Portrait Mode Selfies
The Portrait Mode effect is particularly sensitive to errors around the subject boundary (see image below). For example, a coarse alpha matte can leave background regions near the subject boundary or in the hair area in sharp focus. Using a high-quality alpha matte allows us to extract a more accurate silhouette of the photographed subject and improve foreground-background separation.

Try It Out Yourself
We have made front-facing camera Portrait Mode on the Pixel 6 better by improving alpha matte quality, which results in fewer errors in the final rendered image and a better-looking blurred background around the hair region and subject boundary. Additionally, our ML model uses diverse training datasets that cover a wide variety of skin tones and hair styles. You can try this improved version of Portrait Mode by taking a selfie shot with the new Pixel 6 phones.

Portrait Mode effect on a selfie shot using a coarse alpha matte compared to using the new high-quality alpha matte.

Acknowledgments
This work wouldn’t have been possible without Sergio Orts Escolano, Jana Ehmann, Sean Fanello, Christoph Rhemann, Junlan Yang, Andy Hsu, Hossam Isack, Rohit Pandey, David Aguilar, Yi Jinn, Christian Hane, Jay Busch, Cynthia Herrera, Matt Whalen, Philip Davidson, Jonathan Taylor, Peter Lincoln, Geoff Harvey, Nisha Masharani, Alexander Schiffhauer, Chloe LeGendre, Paul Debevec, Sofien Bouaziz, Adarsh Kowdle, Thabo Beeler, Chia-Kai Liang and Shahram Izadi. Special thanks to our photographers James Adamson, Christopher Farro and Cort Muller who took numerous test photographs for us.


Source: Google AI Blog


Create or import text watermarks in Google Docs

Quick summary

You can now add a text watermark to your documents in Google Docs. Additionally, when working with Microsoft Word documents, text watermarks will be preserved when importing or exporting your files.



Text watermarks will repeat on every page of your document, making them useful for indicating file status, such as “Confidential” or “Draft,” before sharing more broadly, no matter the application you use. In addition to text watermarks, you can insert an image watermark or place images above or behind text.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: To get started, go to Insert > Watermark > Text. Visit the Help Center to learn more about adding watermarks in Docs.


Availability

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers


Google Ads scripts now supports asset-based video ads

Starting today, Google Ads scripts supports the new asset-based video ads. This replaces the previous media-based video ads. If you have a script that creates new video ads, you must migrate your code by February 28, 2022, or else your script will begin to fail with errors after that date. Media-based video ads will no longer be supported.

This follows the same change in Google Ads API v9. Working with assets gives you greater flexibility and ease of use as they are building blocks for various ad types across networks.

If you don’t create new video ads, you aren’t affected.

For instructions on how to create an asset, view our documentation on the YouTubeVideoAssetBuilder. Once you've created an asset, you can use it when creating video ads of various types.

If you have any questions, please leave a post on our forum.

Google Developer Group Spotlight: A conversation with software developer Aditi Soni

Posted by Manoranjan Padhy - Developer Relations Lead, India

Six years ago, Aditi Soni was new to computers and programming when she learned about Google Developer Groups (GDG) and Women Techmakers (WTM) from a senior at her university, the Shri Vaishnav Institute of Technology and Science, Indore. Then, everything changed when she joined a Google Developer Group in Indore, the largest city in Central India, which began as a 16th century trading hub.

“Initially, it was extremely overwhelming for me to be in that space, where so many accomplished professionals were present,” Aditi says of her first experiences attending GDG Indore. “I was very hesitant to go and have a conversation with them.”

But Aditi felt determined. Her friend Aditya Sharma taught her C and C++, and she practiced her programming skills on her smartphone, using tools like the C4droid Android app, because she didn’t have a laptop. By the time she got a laptop, she was off and running. Aditi began teaching herself Android development and landed an internship after her second year of college.

image of Aditi standing at a podium

“I consider myself as an opportunity grabber,” Aditi writes in a post on her Medium blog. “I never miss a single chance to become a better version of myself. I used to attend all community meetups and did not miss a single one.”

All her hard work paid off. In 2017, she became a Women Techmakers lead in Indore and took her first flight on an airplane to the WTM Leads Summit in Bangalore. The same year, she became a Microsoft Student Partner and attended Google Developer Days India. In 2018, Aditi earned the Google India Udacity Android Developers Nanodegree Scholarship as one of the top 150 students from India, and graduated with a Bachelor of Engineering degree in computer science. In 2019, Women Techmakers awarded Aditi a travel grant to Madrid, Spain to attend the Firebase Summit.

image of Aditi at the Firebase Summit 2019

Using the experience of being a woman in tech to encourage others to pursue STEM careers

Now, Aditi is a full-time software developer at Consultadd Incorporation, India, a Women Techmakers Ambassador, and a GDG organizer for her local chapter in Pune. She contributes to the community as an organizer, speaker, and mentor.

“We organize monthly technical meetups to empower women and provide them with a platform to achieve their goals,” Aditi explains. “Being able to help others feels like I am giving it back to the community.”

Aditi says GDG and WTM have helped her develop technical skills and have also positively impacted her personal life.

“I had significant life experiences because of the Google Developer Group and Women Techmakers communities, including my first flight, my first hands-on experience with Google's trending technologies, and one-on-one interaction with Googlers and many great personalities while attending global summits,” she says. “All these things have helped me to be a person who can guide and help others and share my knowledge and experiences with hundreds of professionals today.”

Aditi describes herself as a community enthusiast, using her platform to encourage other women and students to pursue careers in technology, even if they’re brand-new to the field. She also enjoys mentoring new programmers.

“I am passionate about making an impact on others’ lives by sharing my journey and experiences and helping people face hurdles like mine with courage and confidence,” she says. “I enjoy helping people who are struggling to learn to code or who want to switch their careers to tech.”

image of Aditi presenting in a classroom

Supporting budding developers

Aditi acknowledges the adage, “Change is not easy,” especially when preparing for a career in technology.

“You may try very hard, give up so many times, and go through all that frustration, but remember not to quit,” she advises. “The moment you feel like quitting is the moment you need to keep pushing and get your reward.”

She has specific suggestions for making it easier to build new tech skills, too.

“Before learning a specific technology, understand yourself,” she suggests. “What works for you? What's your learning process? Then look for the appropriate resources. It can be a simple one-page tutorial or a full-fledged course. Everything is easy when the basics are clear and the foundation is strong.”

Aditi plans to continue contributing to the tech community in India and around the world by sharing her insight, connecting with new people, and developing new technical skills. She recently welcomed a new member into her family, a baby girl, and she continues to grow her regional tech community and support others in her area and in the STEM field.

Know someone with a powerful story? Nominate someone in your community to be featured, or share your own personal stories with us through the story submission form!

Separating Birdsong in the Wild for Classification

Birds are all around us, and just by listening, we can learn many things about our environment. Ecologists use birds to understand food systems and forest health — for example, if there are more woodpeckers in a forest, that means there’s a lot of dead wood. Because birds communicate and mark territory with songs and calls, it’s most efficient to identify them by ear. In fact, experts may identify up to 10x as many birds by ear as by sight.

In recent years, autonomous recording units (ARUs) have made it easy to capture thousands of hours of audio in forests that could be used to better understand ecosystems and identify critical habitat. However, manually reviewing the audio data is very time consuming, and experts in birdsong are rare. But an approach based on machine learning (ML) has the potential to greatly reduce the amount of expert review needed for understanding a habitat.

However, ML-based audio classification of bird species can be challenging for several reasons. For one, birds often sing over one another, especially during the “dawn chorus” when many birds are most active. Also, there aren’t clear recordings of individual birds to learn from — almost all of the available training data is recorded in noisy outdoor conditions, where other sounds from the wind, insects, and other environmental sources are often present. As a result, existing birdsong classification models struggle to identify quiet, distant and overlapping vocalizations. Additionally, some of the most common species often appear unlabeled in the background of training recordings for less common species, leading models to discount the common species. These difficult cases are very important for ecologists who want to identify endangered or invasive species using automated systems.

To address the general challenge of training ML models to automatically separate audio recordings without access to examples of isolated sounds, we recently proposed a new unsupervised method called mixture invariant training (MixIT) in our paper, “Unsupervised Sound Separation Using Mixture Invariant Training”. Moreover, in our new paper, “Improving Bird Classification with Unsupervised Sound Separation,” we use MixIT training to separate birdsong and improve species classification. We found that including the separated audio in the classification improves precision and classification quality on three independent soundscape datasets. We are also happy to announce the open-source release of the birdsong separation models on GitHub.

Bird Song Audio Separation
MixIT learns to separate single-channel recordings into multiple individual tracks, and can be trained entirely with noisy, real-world recordings. To train the separation model, we create a “mixture of mixtures” (MoM) by mixing together two real-world recordings. The separation model then learns to take the MoM apart into many channels to minimize a loss function that uses the two original real-world recordings as ground-truth references. The loss function uses these references to group the separated channels such that they can be mixed back together to recreate the two original real-world recordings. Since there’s no way to know how the different sounds in the MoM were grouped together in the original recordings, the separation model has no choice but to separate the individual sounds themselves, and thus learns to place each singing bird in a different output audio channel, also separate from wind and other background noise.
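A minimal NumPy sketch of the MixIT objective for a single training example is given below. It assumes the separation model's outputs are already available as an array of channels and uses a plain SNR-based loss; the real training optimizes a similar objective with gradient descent over large batches.

import itertools
import numpy as np

def snr_loss(reference, estimate, eps=1e-8):
    # Negative signal-to-noise ratio in dB (lower is better).
    err = reference - estimate
    return -10.0 * np.log10((np.sum(reference ** 2) + eps) / (np.sum(err ** 2) + eps))

def mixit_loss(separated_channels, ref1, ref2):
    # separated_channels: array of shape (num_channels, num_samples).
    # Try every way of assigning each output channel to one of the two
    # reference recordings, remix the assigned channels, and keep the best
    # (lowest) loss. Since the model never sees how sounds were grouped, it is
    # pushed to isolate individual sources in its output channels.
    num_channels = separated_channels.shape[0]
    best = np.inf
    for assignment in itertools.product([0, 1], repeat=num_channels):
        mask = np.array(assignment)
        mix1 = separated_channels[mask == 0].sum(axis=0)
        mix2 = separated_channels[mask == 1].sum(axis=0)
        best = min(best, snr_loss(ref1, mix1) + snr_loss(ref2, mix2))
    return best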

We trained a new MixIT separation model using birdsong recordings from Xeno-Canto and the Macaulay Library. We found that for separating birdsong, this new model outperformed a MixIT separation model trained on a large amount of general audio from the AudioSet dataset. We measure the quality of the separation by mixing two recordings together, applying separation, and then remixing the separated audio channels such that they reconstruct the original two recordings. We measure the signal-to-noise ratio (SNR) of the remixed audio relative to the original recordings. We found that the model trained specifically for birds achieved 6.1 decibels (dB) better SNR than the model trained on AudioSet (10.5 dB vs 4.4 dB). Subjectively, we also found many examples where the system worked incredibly well, separating calls in real-world data that are very difficult to distinguish.

The following videos demonstrate separation of birdsong from two different regions (Caples and the High Sierras). The videos show the mel-spectrogram of the mixed audio (a 2D image that shows the frequency content of the audio over time) and highlight the audio separated into different tracks.

High Sierras
  
Caples

Classifying Bird Species
To classify birds in real-world audio captured with ARUs, we first split the audio into five-second segments and then create a mel-spectrogram of each segment. We then train an EfficientNet classifier to identify bird species from the mel-spectrogram images, training on audio from Xeno-Canto and the Macaulay Library. We trained two separate classifiers, one for species in the Sierra Nevada mountains and one for upstate New York. Note that these classifiers are not trained on separated audio; that’s an area for future improvement.
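A minimal sketch of this front end is shown below, assuming librosa for the mel-spectrograms and TensorFlow/Keras for the EfficientNet classifier; the sample rate, number of mel bins, and EfficientNet variant are illustrative assumptions, not the production settings.

import numpy as np
import librosa
import tensorflow as tf

def five_second_segments(audio, sample_rate):
    # Split a long field recording into non-overlapping five-second windows.
    segment_len = 5 * sample_rate
    count = len(audio) // segment_len
    return [audio[i * segment_len:(i + 1) * segment_len] for i in range(count)]

def mel_spectrogram(segment, sample_rate, n_mels=128):
    mel = librosa.feature.melspectrogram(y=segment, sr=sample_rate, n_mels=n_mels)
    return librosa.power_to_db(mel)   # log-scaled 2D image: mel bins x time frames

def build_classifier(num_species, input_shape=(128, 216, 1)):
    # ~216 time frames corresponds to 5 s at 22.05 kHz with librosa's default hop.
    spec = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Concatenate()([spec, spec, spec])  # EfficientNet expects 3 channels
    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=None, input_shape=input_shape[:2] + (3,))
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone(x))
    # Sigmoid outputs: more than one species can be present in a segment.
    scores = tf.keras.layers.Dense(num_species, activation="sigmoid")(x)
    return tf.keras.Model(spec, scores)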

We also introduced some new techniques to improve classifier training. Taxonomic training asks the classifier to provide labels for each level of the species taxonomy (genus, family, and order), which allows the model to learn groupings of species before learning the sometimes-subtle differences between similar species. Taxonomic training also allows the model to benefit from expert information about the taxonomic relationships between different species. We also found that random low-pass filtering was helpful for simulating distant sounds during training: As an audio source gets further away, the high-frequency parts fade away before the low-frequency parts. This was particularly effective for identifying species from the High Sierras region, where bird songs cover very long distances, unimpeded by trees.
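The low-pass augmentation can be sketched in a few lines with SciPy; the filter order and the cutoff range below are illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

def random_low_pass(audio, sample_rate, rng=None):
    # Simulate a distant singer: high frequencies fade with distance before low
    # frequencies, so attenuate everything above a randomly chosen cutoff.
    if rng is None:
        rng = np.random.default_rng()
    cutoff_hz = rng.uniform(1000.0, 0.45 * sample_rate)
    sos = butter(4, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return sosfilt(sos, audio)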

Classifying Separated Audio
We found that separating audio with the new MixIT model before classification improved the classifier performance on three independent real-world datasets. The separation was particularly successful for identification of quiet and background birds, and in many cases helped with overlapping vocalizations as well.

Top: A mel-spectrogram of two birds, an American pipit (amepip) and gray-crowned rosy finch (gcrfin), from the Sierra Nevadas. The legend shows the log-probabilities for the two species given by the pre-trained classifiers. Higher values indicate more confidence, and values greater than -1.0 are usually correct classifications. Bottom: A mel-spectrogram for the automatically separated audio, with the classifier log probabilities from the separated channels. Note that the classifier only identifies the gcrfin once the audio is separated.
Top: A complex mixture with three vocalizations: A golden-crowned kinglet (gockin), mountain chickadee (mouchi), and Steller’s jay (stejay). Bottom: Separation into three channels, with classifier log probabilities for the three species. We see good visual separation of the Steller’s jay (shown by the distinct pink marks), even though the classifier isn’t sure what it is.

The separation model does have some potential limitations. Occasionally we observe over-separation, where a single song is broken into multiple channels, which can cause misclassifications. We also notice that when multiple birds are vocalizing, the most prominent song often gets a lower score after separation. This may be due to loss of environmental context or other artifacts introduced by separation that do not appear during classifier training. For now, we get the best results by running the classifier on the separated channels and the original audio, and taking the maximum score for each species. We expect that further work will allow us to reduce over-separation and find better ways to combine separation and classification. You can see and hear more examples of the full system at our GitHub repo.
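The max-score combination described above can be sketched as follows, assuming a hypothetical classify helper that returns a dictionary mapping species codes to scores for a clip.

def combined_scores(original_audio, separated_channels, classify):
    # Run the classifier on the original mixture and on every separated
    # channel, then keep the highest score seen for each species.
    candidates = [classify(original_audio)] + [classify(ch) for ch in separated_channels]
    species = set().union(*(scores.keys() for scores in candidates))
    return {s: max(scores.get(s, float("-inf")) for scores in candidates)
            for s in species}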

Future Directions
We are currently working with partners at the California Academy of Sciences to understand how habitat and species mix changes after prescribed fires and wildfires, applying these models to ARU audio collected over many years.

We also foresee many potential applications for the unsupervised separation models in ecology, beyond just birds. For example, the separated audio can be used to create better acoustic indices, which could measure ecosystem health by tracking the total activity of birds, insects, and amphibians without identifying particular species. Similar methods could also be adapted for use underwater to track coral reef health.

Acknowledgements
We would like to thank Mary Clapp, Jack Dumbacher, and Durrell Kapan from the California Academy of Sciences for providing extensive annotated soundscapes from the Sierra Nevadas. Stefan Kahl and Holger Klinck from the Cornell Lab of Ornithology provided soundscapes from Sapsucker Woods. Training data for both the separation and classification models came from Xeno-Canto and the Macaulay Library. Finally, we would like to thank Julie Cattiau, Lauren Harrell, Matt Harvey, and our co-author, John Hershey, from the Google Bioacoustics and Sound Separation teams.

Source: Google AI Blog


Originality reports are now available for Google Slides files

What’s changing

You can now use originality reports on Google Slides files, which were previously only available for Google Docs.

Originality reports allow students and teachers to compare work against billions of web pages and books on the internet, making it easier to ensure the academic integrity of the work. They can be used when submitting or receiving files within Google Classroom and Assignments.

Who’s impacted

End users



Why it matters

Students widely use slides to showcase academic work. By expanding the originality reports feature to run in Slides, students can ensure they’ve properly integrated external ideas into more of their work, while instructors can check for potential plagiarism in more assignments.


Additional details

Originality reports are available for all Google Workspace for Education users, but teachers will still need to turn on originality reports for individual assignments in Classroom.
 
If you have a Google Workspace for Education Fundamentals account, you can turn on originality reports for 5 assignments per class. Unlimited originality reports are available with the Teaching and Learning Upgrade or Google Workspace for Education Plus.
 
Additionally, students can check their Slides file for originality before submitting it in Classroom. When the student file is ready and submitted, their teacher will receive an originality report for the student’s work.




Rollout pace

  • This feature is now available for all Google Workspace for Education users

Availability

  • Available for Google Workspace for Education Fundamentals, Education Standard, Teaching and Learning Upgrade, and Education Plus customers 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers


More digital skills training in the Latino community

Ver abajo versión en español

Alex Corral was working in his family’s restaurant when he suffered an injury on the job. That moment made him stop to reflect about his career path, and ultimately look for a change.

Alex always had an interest in IT, but during the time of his injury he decided to act on this interest, so he enrolled in the Google IT Support Career Certificate. Earning his Certificate gave him the confidence to apply for jobs in the field, which led him to his current job with Truckstop.com, a software company that helps optimize freight movement. Alex credits Google Career Certificates as the reason he was able to pivot careers and get his start in the IT sector.

Everyone should have the opportunity to participate in today's increasingly digital economy, but in occupations requiring digital skills, which will represent two thirds of jobs by 2030, members of the Latino community in the U.S. are significantly underrepresented. Over the next decade, members of the Latino community will grow to represent 21% of the workforce, and 78% of net new workers.

To be prepared for in-demand, high-paying jobs and to close the existing gap in digital skills, it’s crucial that members of the Latino community get easy access to education and credentials. Investing in training and support for job seekers in the community will drive economic mobility and equity. So today we’re announcing three initiatives to help open doors to higher-paying jobs and entrepreneurial ventures that can grow into long-lasting careers.

Digital skills training for 200,000 students

In 2021, we announced the Grow with Google Career Readiness Program expansion to reach Hispanic-serving Institutions (HSIs). Today, in partnership with the Hispanic Association of Colleges and Universities (HACU), we’re announcing the first group of more than 20 HSI partner institutions. With the support of Google’s $2 million investment, career counselors and faculty from across the country — from McAllen, Texas to Pueblo, Colorado — will lead the effort to train 200,000 college students in the Latino community with learning pathways built by Google and focused on the digital skills needed to help them land internships and jobs. Our HSI partner institutions will also award Google Career Certificate scholarships to participating students to help them continue to build necessary skills to succeed in high-growth jobs.

More access to our career-focused training

Google.org is announcing $5 million in grants to UnidosUS, the League of United Latin American Citizens and the Hispanic Federation to help members of the Latino community prepare for jobs in the digital economy. They will work with local affiliates across the U.S. to reach over five thousand Latino jobseekers. Learners will receive access to training opportunities to grow their digital skills, and access to Google Career Certificates and wraparound support, too. Additionally, Google.org is supporting the Aspen Institute Latinos and Society Program with a grant to help advance research on the challenges non-native English-speaking immigrants and first-generation Americans face in acquiring digital skills.

Google Career Certificates now support Spanish-language learners

The Google Career Certificates have helped tens of thousands of job seekers in the U.S. acquire digital skills and find pathways to well-paying, in-demand jobs. As of today, all our Google Career Certificates are available with Spanish subtitled video lessons and fully translated pages, reading materials, quizzes and documents. Our Career Certificates prepare learners for jobs in data analytics, project management, user experience design and IT support. No previous experience or knowledge is required to enroll, and learners can be job ready in three to six months.

These announcements build on over $20 million in Google.org grants to advance economic mobility in the Latino community, and our work to create tools to help the community in the U.S. grow their digital skills. That includes our library of free Spanish-language training, including workshops via Grow with Google, minicourses on the Google Primer app and video lessons from our Applied Digital Skills program.

With new investments helping to create a more equitable workforce, we’ll keep building new resources for people like Alex. To learn more about our tools and resources for the Latino community, visit grow.google.


Más entrenamiento en habilidades digitales para la comunidad latina

Alex Corral estaba trabajando en el restaurante de su familia cuando sufrió una lesión laboral. Esa ocasión lo hizo frenar para reflexionar sobre su rumbo profesional y, finalmente, buscar un cambio.

A Alex siempre le interesó la tecnología de la información (TI), y durante el tiempo que estuvo lesionado, decidió seguir su interés; por lo tanto, se inscribió en el Certificado de carrera en Soporte de TI de Google. Obtener su Certificado le otorgó la seguridad para postularse para trabajos en el área; así llegó a su empleo actual en Truckstop.com, una compañía de software que contribuye a mejorar el transporte de carga. Alex atribuye a los Certificados de carrera de Google el haber podido dar un giro en su profesión e iniciarse en el sector de TI.

Todos deberían tener la oportunidad de participar en la pujante economía digital de hoy, pero los miembros de la comunidad latina en los EE. UU. tienen muy poca representación para los puestos en los que se requieren habilidades digitales, que supondrán dos tercios de los empleos para el año 2030. Durante la próxima década, los miembros de la comunidad latina pasarán a representar el 21% de la fuerza de trabajo y el 78% del total neto de los nuevos trabajadores.

Para prepararse para los empleos más demandados y mejor remunerados y cerrar así la brecha que existe en las habilidades digitales, es crucial que los miembros de la comunidad latina puedan acceder fácilmente a la educación y la certificación. Invertir en formación y apoyo para quienes buscan empleo en la comunidad fomentará la equidad y la estabilidad económica. Por ello, hoy anunciamos tres iniciativas que ayudarán a abrir las puertas hacia empleos mejor remunerados y emprendimientos que pueden convertirse en profesiones a largo plazo.

Entrenamiento en habilidades digitales para 200,000 estudiantes

En 2021, anunciamos la ampliación del programa Career Readiness de Grow with Google para llegar a las Instituciones al Servicio del Hispano (HSIs, por sus siglas en inglés). Hoy, junto con la Hispanic Association of Colleges and Universities (HACU), damos a conocer el primer grupo de más de 20 instituciones HSI asociadas. Gracias a una inversión de $2 millones de parte de Google, asesores profesionales y docentes de todo el país –desde McAllen, Texas, hasta Pueblo, Colorado– conducirán la iniciativa para entrenar a 200,000 estudiantes universitarios de la comunidad latina en diferentes planes de estudio desarrollados por Google y centrados en las habilidades digitales que necesitan para acceder a pasantías y empleos. Nuestras instituciones asociadas también otorgarán becas de los Certificados de carrera de Google para que los estudiantes que participen continúen desarrollando las habilidades necesarias para tener éxito en los empleos de alto crecimiento.

Más acceso a nuestro entrenamiento enfocado en formación profesional

Google.org anuncia subsidios por valor de $5 millones para UnidosUS, League of United Latin American Citizens (LULAC) y Hispanic Federation con el fin de ayudar a los miembros de la comunidad latina a prepararse para los empleos de la economía digital. Trabajarán con organizaciones locales a lo largo de los EE. UU. para llegar a más de cinco mil latinos que buscan trabajo. Los estudiantes tendrán acceso a cursos de formación para desarrollar sus habilidades digitales y acceder a los Certificados de carrera de Google, además de recibir apoyo integral. Asimismo, Google.org apoya al Aspen Institute Latinos and Society Program con un subsidio para fomentar la investigación sobre los desafíos que enfrentan para adquirir habilidades digitales los inmigrantes no nativos de habla inglesa y los estadounidenses de primera generación.

Los Certificados de carrera de Google ahora disponibles en español

Los Certificados de carrera de Google han ayudado a decenas de miles de personas que buscan empleo en EE. UU. a adquirir habilidades digitales y acceder a los empleos más demandados y mejor remunerados. Nuestros Certificados de carrera de Google están disponibles en español mediante lecciones de video con subtítulos y páginas, materiales de lectura, cuestionarios y documentos completamente traducidos. Nuestros Certificados de carrera preparan a los estudiantes para empleos en Análisis de Datos, Gestión de Proyectos, Diseño de Experiencia de Usuario y Soporte de TI. No se requiere experiencia ni conocimientos previos para inscribirse, y los estudiantes pueden estar listos para trabajar en tres a seis meses.

Estos anuncios dan cuenta de más de $20 millones en subvenciones de Google.org para promover la estabilidad económica de la comunidad latina, y nuestro trabajo para diseñar herramientas que ayuden a la comunidad en EE. UU. a desarrollar sus habilidades digitales. Esto incluye nuestros recursos gratuitos disponibles en español, como los talleres a través de Grow with Google, los minicursos en la aplicación Google Primer y las lecciones de video de nuestro programa Applied Digital Skills.

Gracias a las nuevas inversiones que ayudan a generar una fuerza de trabajo más equitativa, seguiremos desarrollando nuevos recursos para personas como Alex. Para conocer más sobre nuestras herramientas y recursos para la comunidad latina, visita grow.google.

How Google protects your privacy and keeps you in control of your ad experience

Whether you’re managing your inbox, browsing the web, or interacting with ads, we know that your privacy is a top priority. That’s why this week, to celebrate Data Privacy Day, we’re highlighting how we keep you safe online – and reminding you of the controls available to you.

First, to keep your private information private, everything we build at Google is secure by default, private by design and keeps you in control. It's how we ensure that every day, you're Safer with Google.

Your Google Account is a one-stop-shop for your key privacy and security settings. You can control what activity gets saved to your account, download your data, or delete your activity at any time. We’ve also created tools like Dashboard and My Activity, which make it easy to view and control information saved in your Google Account.

To start using these controls, we recommend taking your Privacy Checkup, which helps you choose the settings that are right for you. You can also take a Security Checkup to check your Google Account security status and get personalized recommendations to strengthen your account protection. To learn more about our privacy tools and settings, you can visit our Safety Center.

You’re in control of your ad experience

Our commitment to privacy also applies to your ad experience. We follow a set of core principles about the data we use for ads. For example, we never sell your personal information and we don’t use the content you create, store, and share in apps like Drive, Gmail, and Photos for any ads purposes. It’s simply off limits. We also prohibit advertisers from using sensitive interest categories like personal hardships (including health conditions), identity and beliefs, and sexual interests to target ads.

We’ve also built easy-to-use controls for you in Ad Settings to help you tailor your ad experience by reviewing and updating information in your ads profile. You can even turn off ad personalization altogether.

At the same time, we’ve started rolling out new features, like our “About this ad” menu, to help you understand why an ad was shown and which advertiser ran it. You can report an ad if you believe it violates one of our policies, see the ads a specific verified advertiser has run over the past 30 days, or mute ads or advertisers you aren’t interested in.

Finally, we’ve heard that sometimes there are certain ads you might not want to engage with at all. Currently, you can turn off sensitive ads related to alcohol or gambling on YouTube. We’ll continue to expand our sensitive ad categories soon so that you can choose the ad experience that’s right for you.

Giving you a better ad experience in 2022

This year, we’ll focus on strengthening protections for vulnerable groups, leading the industry towards a more privacy safe future, and delivering new ways to put you in control.

We’ve already made progress on delivering a safer experience to kids and teens online by expanding safeguards to prevent age-sensitive ad categories from being shown to teens, and we will block ad targeting based on the age, gender, or interests of people under 18.

We’re also collaborating with the broader industry to define the future of the privacy-safe internet. Chrome is leading a collaborative effort to make the web private by default with Privacy Sandbox, which seeks to help transform digital marketing in a way that meets your privacy expectations.

Finally, we’ll continue to work on introducing exciting new ways to put you in control of your experiences with our products.

Discover the Memory of the World with UNESCO

On the occasion of International Day of Education, the UNESCO Memory of the World Programme is pleased to join forces with Google Arts & Culture to present Memory of the World, the records that hold the memory of our shared past. The digital collection brings together 66 inscriptions held by institutions across over 30 countries, all listed on the Memory of the World International Register, to tell their stories and highlight key moments in history that have left the world changed forever.

From Shakespeare documents chronicling the life and times of the famous dramatist to maps tracking Columbus' historic voyages — and all the manuscripts, maps, illustrations, sheet music, monumental carvings, pieces of literature, satellite images and ancient artifacts in between — each of these inscriptions serves as an important educational resource and fascinating window into our shared past.

Preserving the past

Established in 1992, Memory of the World — which will make its treasures available as of today on Google Arts & Culture — seeks to preserve the documentary heritage that carries the world’s memories forward, and to make sure those memories remain accessible for future generations. Numerous threats can conspire to keep such memories from circulating freely and optimally. Such threats include poor preservation policy and budgetary environments, the lack of skilled staff and rescue teams, vandalism and theft, armed conflict, and natural and man-made disasters.

Protecting documentary heritage against such threats is thus an exercise in preserving the memories that have come to define us, as humans, across a range of achievements in arts and literature, geography, politics, science & technology, and religion, as well as in other fields of human endeavor throughout history. Consequently, loss of memory can critically diminish our identity as individuals and as communities.

Learning from our historical legacy

To better appreciate the overall goal of UNESCO’s Memory of the World Programme as an affirmation of our shared humanity, one only has to consider the UNESCO 2015 Recommendation Concerning the Preservation of, and Access to, Documentary Heritage, Including in Digital Form, which underlines “the importance of documentary heritage to promote the sharing of knowledge for greater understanding and dialogue, in order to promote peace and respect for freedom, democracy, human rights and dignity.” In this respect, the Memory of the World Programme upholds historically significant documents which contain and invoke memories of both positive and negative events and movements that remind us where we have been, of happenings that should never be forgotten, and of moments that have shaped our global society for better or worse.

It is through this preservation of history, and digitization of the footprints that remain, that lessons about the very nature of humanity can be passed down.

Dev Channel Update for Desktop

The Dev channel has been updated to 99.0.4840.0 for Windows, Mac and Linux.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Prudhvikumar Bommana

Google Chrome