What publishers should focus on now to prepare for privacy changes

In the fourth episode of our Publisher Privacy Q&A series, we talk about what publishers should be prioritizing and focusing on right now to prepare their businesses for ongoing and upcoming privacy changes.

Stay tuned for the fifth Publisher Privacy Q&A episode coming soon. In the meantime, check out episodes 1, 2, and 3 of this series.

Source: Inside AdSense


Working with UNESCO to support Ukraine’s teachers on World Teachers’ Day

“The transformation of education begins with teachers” is the theme for World Teachers' Day 2022. For Ukraine’s teachers, who have had to transform the way they work and teach over the last seven months, these words take on an entirely different meaning.

Ukrainian teachers and children continue to be impacted by the war - whether they’re refugees abroad, displaced in their own country, or trapped in areas under fire. According to the authorities, 2,292 education institutions have been damaged and 309 destroyed since the Russian offensive began in February.

This has meant that two out of three children who were living in Ukraine at the beginning of this year have had their education disrupted, with some of these children out of education completely. Given the experiences of these children, and what they have witnessed, many are also traumatized. The classroom, whether virtual or otherwise, can help children to heal by being a place of security through which normality, curiosity and play can return.

Supporting Ukraine’s teachers

To help Ukrainian teachers keep teaching, and students keep learning, Google.org is providing UNESCO with €1.2M to train and equip 50,000 teachers in Ukraine with psychosocial skills to support the mental health of their students. This will give Ukrainian teachers some of the critical tools they need to continue teaching – including over the longer term – in these challenging circumstances. This latest support builds on the over $40 million in cash donations and $5 million of in-kind support for humanitarian relief efforts provided by Google.org and Google employees.

Providing the tools

Earlier this year, we announced our partnership with organisations including the Ukrainian Ministry of Education and Science and UNESCO to provide Chromebooks to schools - helping teachers connect with their students, wherever they are now based.

Since then, for many teachers the challenges have escalated. This academic year started with more than 40% of Ukrainian schools giving classes online to increasing numbers of displaced and traumatized children.

To help teachers connect with their students, wherever they and their students are, we've increased our commitment to provide Chromebooks from 43,000 to 50,000. Thanks to our close collaboration with UNESCO and the Ukraine Ministry of Education and Science, these Chromebooks have started to arrive. They are currently being distributed to teachers in and around the Dnipro region, and will be provided throughout the country in the weeks ahead.

Of course, university and college students have been impacted by the war in Ukraine too - with many unable to attend their classes in person or in real time. To help them continue their education, we’ve now given 250 universities and colleges six months’ free access to our premium Google Workspace for Education features. These features support online learning in higher education, allowing universities to host meetings for up to 250 students and record them in Drive.

Providing the resources

To help Ukraine’s teachers adapt to giving lessons purely online, Google is working with local partners to deliver training in online tools, such as Google Workspace for Education, through a series of workshops and resources. We’ve recently increased our goal from 50,000 to 200,000 teachers trained by June 2023.

We’ll continue to search for ways we can partner with Ukraine’s Ministry of Education and Science, and those of bordering countries, to help those impacted by the war in Ukraine - including the millions of school and university students trying to access education in this trying and difficult time.

How LiDAR tech helps preserve world heritage

There’s a ninth-century Buddhist temple at the heart of an ancient city in Myanmar that’s constructed from red brick and adorned with exquisite plaster moldings softened by weather and age. When a 6.8-magnitude earthquake hit the area in 2016, its walls collapsed and the plaster crumbled. And this is just one temple amongst more than 3,000 pagodas, temples and monasteries in a vast archeological site that sprawls over 65 square kilometers, so assessing the scale of the damage — and how to repair it — was a huge and complex task.

Luckily, the team at CyArk, a Google Arts and Culture partner, were in a position to help. Six months before the earthquake, they had gathered a series of detailed 3D laser scans – or “digital twins” – of Bagan’s cultural sites for a UNESCO conservation project. By creating another set of “twins” in the earthquake’s aftermath, they could compare before and after in precise detail. For the engineers and conservators tasked with repairing Bagan, the data was invaluable.

According to John Ristevski, CEO and chairman of CyArk, the project was one dramatic example of “putting data to work to solve problems.” As the Bagan Lab Experiment shows, the data also served another purpose: bringing ancient heritage to life for new audiences around the world. Google Arts and Culture sat down with John to learn more about how this kind of 3D laser scanning technology, also known as LiDAR, can help preserve cultural heritage, tell captivating stories and make history more accessible.

John, tell us a bit more about LiDAR devices and how they work.

Ben Kacyra, who founded CyArk in 2003 to preserve and celebrate cultural heritage, developed the first mobile LiDAR devices in the mid-nineties. LiDAR stands for Light Detection and Ranging, and these devices use lasers to create incredibly detailed and accurate 3D representations of places that would be hard to describe using other means. Think of the inside of a submarine or an oil refinery, for instance – it would take forever to measure and map these places using traditional methods. A LiDAR device can gather many millions of data points per second.

So it’s safe to say that LiDAR beats a measuring tape and a pencil – but is it enough?

At Bagan, we also used aerial drone photography and photogrammetry, a technique that allows us to build 3D reconstructions that capture the colors and textures of the pagodas and temples in photo-realistic detail. Alongside these, we collected interviews, audio soundscapes and 360-degree video to evoke the atmosphere and history of Bagan.

Members of CyArk, Myanmar's Department of Archaeology, Carleton University and Yangon Technological University during a 3D documentation workshop at Bagan, 2016.

Google Arts and Culture lends itself to pulling all these different pieces together to present coherent, interactive experiences, pushing the boundaries of how to tell these stories online. Open Heritage or Resilience of the Redwoods are two examples of that.

What threatens world heritage sites today and how can 3D models help?

The number one threat is climate change. Rising sea levels, desertification, rainfall events and so on are affecting sites and monuments that are not designed to withstand them. The Bagan earthquake was a dramatic, one-off event. But climate change is more insidious and it’s often harder to pin down its effects. By helping us understand how heritage sites are changing, 3D data can support efforts to preserve them, as we’ve been doing with Heritage on the Edge.

What does the future hold for Bagan, and for LiDAR technology?

In 2019, Bagan was added to the UNESCO World Heritage list. Careful restoration work is ongoing to protect and preserve its statues, soaring temples and hand-painted frescoes, and it continues to be an active site of pilgrimage and worship.


CyArk team members on a fieldwork trip to Rapa Nui, 2020.

Looking ahead, our hope for LiDAR technology is not just to document the world's cultural heritage ourselves, but to share these techniques and methods with others. A good example of this is our work in Rapa Nui, or Easter Island. Its unique moai stone statues are threatened by storms, rising sea levels and coastal erosion. Local people have now acquired their own LiDAR equipment to help preserve the island's cultural heritage for generations to come.

Delivering on our $1B commitment in Africa


Last year our CEO, Sundar Pichai, announced that Google would invest $1 billion in Africa over the next five years to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa’s digital transformation.


Africa’s internet economy has the potential to grow to $180 billion by 2025 – 5.2% of the continent’s GDP. To support this growth, over the last year we’ve made progress on helping to enable affordable access and on building products for every African user – helping businesses build their online presence, supporting entrepreneurs in spurring next-generation technologies, and helping nonprofits to improve lives across the continent.


We’d like to share how we’re delivering on our commitment and partnering with others – policymakers, non-profits, businesses and creators – to make the internet more useful to more people in Africa.




Introducing the first Google Cloud region in Africa

Today we’re announcing our intent to establish a Google Cloud region in South Africa – our first on the continent. South Africa will be joining Google Cloud’s global network of 35 cloud regions and 106 zones worldwide.


The future cloud region in South Africa will bring Google Cloud services closer to our local customers, enabling them to innovate and securely deliver faster, more reliable experiences to their own customers, helping to accelerate their growth. According to research by AlphaBeta Economics for Google Cloud, the South Africa cloud region will contribute more than a cumulative USD 2.1 billion to the country’s GDP, and will support the creation of more than 40,000 jobs by 2030.




Along with the cloud region, we are expanding our network through the Equiano subsea cable and building Dedicated Cloud Interconnect sites in Johannesburg, Cape Town, Lagos and Nairobi. In doing so, we are building full scale Cloud capability for Africa.





Supporting African entrepreneurs

We continue to support African entrepreneurs in growing their businesses and developing their talent. Our recently announced second cohort of the Black Founders Fund builds on the success of last year’s cohort, who raised $97 million in follow-on funding and have employed more than 500 additional staff since they were selected. We’re also continuing our support of African small businesses through the Hustle Academy and Google Business Profiles, and helping job seekers learn skills through Developer Scholarships and Career Certifications.

We’ve also continued to support nonprofits working to improve lives in Africa, with a $40 million cash and in-kind commitment so far. Over the last year this has included:

  • A $1.5M investment in Career Certifications this year, bringing our total Google.org funding to more than $3M since 2021
  • A $3 million grant to support AirQo in expanding their work monitoring air quality from Kampala to ten cities in five countries on the continent
  • A team of Google employees who have joined the Tony Elumelu Foundation for six months, full-time and pro bono. The team helped build a new training web and app interface to support the next million African entrepreneurs in growing and funding their businesses.

Across all our initiatives, we continue to work closely with our partners – most recently with the UN to launch the Global Africa Business Initiative (GABI), aimed at accelerating Africa’s economic growth and sustainable development.




Building more helpful products for Africa

We recently announced plans to open the first African product development centre in Nairobi. The centre will develop and build better products for Africans and the world.

Today, we’re launching voice typing support for nine more African languages (isiNdebele, isiXhosa, Kinyarwanda, Northern Sotho, Swati, Sesotho, Tswana, Tshivenda and Xitsonga) in Gboard, the Google keyboard – while 24 new languages are now supported on Google Translate, including Lingala, which is spoken by more than 45 million people across Central Africa.

To make Maps more useful, Street View imagery in Kenya, South Africa, Senegal and Nigeria has had a refresh with nearly 300,000 more kilometres of imagery now helping people virtually explore and navigate neighbourhoods. We’re also extending the service to Rwanda, meaning that Street View is now available in 11 African countries.

In addition to expanding the AI Accra Research Centre earlier this year, the Open Buildings Project, which mapped buildings across the African continent using machine learning and satellite imagery, is expanding to South and Southeast Asia and is a great example of the AI centre creating solutions for Africa that are useful across the world.



Delivering on our promise

We remain committed to working with our partners in building for Africa together, and helping to unlock the benefits of the digital economy for more people by providing useful products, programmes and investments. We’re doing this by partnering with African organisations, businesses and entrepreneurs. It’s the talent and drive of the individuals in the countries, communities and businesses of Africa that will power Africa’s economic growth.





Posted by Nitin Gajria, Managing Director, Google Africa





Long Term Support Channel Update for ChromeOS

LTS-102 has been updated in the LTS channel to 102.0.5005.182 (Platform Version: 14695.135.0) for most ChromeOS devices. Want to know more about Long-term Support? Click here.


This update contains multiple Security fixes, including:

1340253  Critical  CVE-2022-3038  Use after free in Network Service
1051198  High      CVE-2022-3044  Inappropriate implementation in Site Isolation
1355103  High      CVE-2022-3200  Heap buffer overflow in Internals
1343104  High      CVE-2022-3201  Insufficient validation of untrusted input in DevTools
1345947  High      CVE-2022-3041  Use after free in WebSQL
1336979  High      CVE-2022-3043  Heap buffer overflow in Screen Capture
1341918  High      CVE-2022-2858  Use after free in Sign-In Flow
1325256  Medium    CVE-2022-2613  Use after free in Input
1345245  Medium    CVE-2022-3051  Heap buffer overflow in Exosphere
1346154  Medium    CVE-2022-3052  Heap buffer overflow in Ash
1337132  Medium    CVE-2022-3050  Heap buffer overflow in WebUI
1316892  Medium    CVE-2022-3049  Use after free in SplitScreen
1303308  Medium    CVE-2022-3048  Inappropriate implementation in Chrome OS lockscreen


Giuliana Pritchard

Google Chrome OS

Beta Channel Update for ChromeOS

The Beta channel is being updated to 107.0.5304.22 (Platform version: 15117.27.0 / 15117.28.0) for most ChromeOS devices. This build contains a number of bug fixes and security updates and will be rolled out over the next couple days.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Daniel Gagnon,
Google ChromeOS

Large Motion Frame Interpolation

Frame interpolation is the process of synthesizing in-between images from a given set of images. The technique is often used for temporal up-sampling to increase the refresh rate of videos or to create slow motion effects. Nowadays, with digital cameras and smartphones, we often take several photos within a few seconds to capture the best picture. Interpolating between these “near-duplicate” photos can lead to engaging videos that reveal scene motion, often delivering an even more pleasing sense of the moment than the original photos.

Frame interpolation between consecutive video frames, which often have small motion, has been studied extensively. Unlike videos, however, the temporal spacing between near-duplicate photos can be several seconds, with commensurately large in-between motion, which is a major failing point of existing frame interpolation methods. Recent methods attempt to handle large motion by training on datasets with extreme motion, albeit with limited effectiveness on smaller motions.

In “FILM: Frame Interpolation for Large Motion”, published at ECCV 2022, we present a method to create high quality slow-motion videos from near-duplicate photos. FILM is a new neural network architecture that achieves state-of-the-art results in large motion, while also handling smaller motions well.

FILM interpolating between two near-duplicate photos to create a slow motion video.

FILM Model Overview
The FILM model takes two images as input and outputs a middle image. At inference time, we recursively invoke the model to output in-between images. FILM has three components: (1) A feature extractor that summarizes each input image with deep multi-scale (pyramid) features; (2) a bi-directional motion estimator that computes pixel-wise motion (i.e., flows) at each pyramid level; and (3) a fusion module that outputs the final interpolated image. We train FILM on regular video frame triplets, with the middle frame serving as the ground-truth for supervision.
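
To make the recursive inference step concrete, here is a minimal Python sketch, where interpolate_midpoint is a hypothetical stand-in for the trained FILM network (an illustration only, not the released FILM API):

# Recursively invoke a midpoint model to produce 2**depth - 1 in-between frames.
def recursive_interpolate(frame_a, frame_b, interpolate_midpoint, depth):
    if depth == 0:
        return []
    mid = interpolate_midpoint(frame_a, frame_b)          # predicted middle frame
    left = recursive_interpolate(frame_a, mid, interpolate_midpoint, depth - 1)
    right = recursive_interpolate(mid, frame_b, interpolate_midpoint, depth - 1)
    return left + [mid] + right

# Example with a trivial stand-in "model" that simply averages its inputs.
import numpy as np
a, b = np.zeros((4, 4, 3)), np.ones((4, 4, 3))
frames = recursive_interpolate(a, b, lambda x, y: (x + y) / 2.0, depth=3)
print(len(frames))  # 7 in-between frames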

A standard feature pyramid extraction on two input images. Features are processed at each level by a series of convolutions, which are then downsampled to half the spatial resolution and passed as input to the deeper level.

Scale-Agnostic Feature Extraction
Large motion is typically handled with hierarchical motion estimation using multi-resolution feature pyramids (shown above). However, this method struggles with small and fast-moving objects because they can disappear at the deepest pyramid levels. In addition, there are far fewer available pixels to derive supervision at the deepest level.

To overcome these limitations, we adopt a feature extractor that shares weights across scales to create a “scale-agnostic” feature pyramid. This feature extractor (1) allows the use of a shared motion estimator across pyramid levels (next section) by equating large motion at shallow levels with small motion at deeper levels, and (2) creates a compact network with fewer weights.

Specifically, given two input images, we first create an image pyramid by successively downsampling each image. Next, we use a shared U-Net convolutional encoder to extract a smaller feature pyramid from each image pyramid level (columns in the figure below). As the third and final step, we construct a scale-agnostic feature pyramid by horizontally concatenating features from different convolution layers that have the same spatial dimensions. Note that from the third level onwards, the feature stack is constructed with the same set of shared convolution weights (shown in the same color). This ensures that all features are similar, which allows us to continue to share weights in the subsequent motion estimator. The figure below depicts this process using four pyramid levels, but in practice, we use seven.
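
As a rough illustration of the bookkeeping involved, the numpy sketch below builds an image pyramid, applies the same shared encoder at every pyramid level (here an untrained, purely illustrative stand-in rather than the learned U-Net encoder), and concatenates the features that end up at the same spatial resolution:

import numpy as np

def downsample2x(x):
    # Average-pool a (H, W, C) array by 2 over the spatial dimensions.
    h, w, c = x.shape
    return x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def shared_encoder(x, depth=3):
    # Stand-in for the shared encoder: the same operation is reused at every
    # scale, producing features at `depth` successively halved resolutions.
    feats = []
    for _ in range(depth):
        x = np.concatenate([x, x], axis=-1)   # pretend channel expansion
        feats.append(x)
        x = downsample2x(x)
    return feats  # feats[d] has spatial size H / 2**d

def scale_agnostic_pyramid(image, levels=4, depth=3):
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(downsample2x(pyramid[-1]))
    per_level = [shared_encoder(p, depth) for p in pyramid]
    # Features from pyramid level i and encoder depth d share a spatial size
    # whenever i + d is equal, so concatenate those along the channel axis.
    agg = []
    for k in range(levels):
        same_size = [per_level[i][k - i] for i in range(levels) if 0 <= k - i < depth]
        agg.append(np.concatenate(same_size, axis=-1))
    return agg

levels = scale_agnostic_pyramid(np.random.rand(64, 64, 3))
print([f.shape for f in levels])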

Bi-directional Flow Estimation
After feature extraction, FILM performs pyramid-based residual flow estimation to compute the flows from the yet-to-be-predicted middle image to the two inputs. The flow estimation is done once for each input, starting from the deepest level, using a stack of convolutions. We estimate the flow at a given level by adding a residual correction to the upsampled estimate from the next deeper level. This approach takes the following as its input: (1) the features from the first input at that level, and (2) the features of the second input after it is warped with the upsampled estimate. The same convolution weights are shared across all levels, except for the two finest levels.
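
A hedged numpy sketch of this coarse-to-fine scheme is shown below; predict_residual is a hypothetical placeholder for the shared convolutional flow head, and the warping is simple nearest-neighbour backward warping rather than the differentiable warping used in the actual model:

import numpy as np

def upsample_flow2x(flow):
    # Repeat each flow vector 2x2 and double its magnitude, since pixel
    # displacements are measured at twice the resolution.
    return 2.0 * flow.repeat(2, axis=0).repeat(2, axis=1)

def warp(feat, flow):
    # Backward-warp `feat` by `flow` (dy, dx per pixel), nearest neighbour.
    h, w = feat.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return feat[sy, sx]

def estimate_flow(pyramid_a, pyramid_b, predict_residual):
    # Estimate flow toward input B, starting from the deepest pyramid level and
    # adding a residual correction to the upsampled estimate at each level.
    flow = np.zeros(pyramid_a[-1].shape[:2] + (2,))
    for level in range(len(pyramid_a) - 1, -1, -1):
        if flow.shape[:2] != pyramid_a[level].shape[:2]:
            flow = upsample_flow2x(flow)
        warped_b = warp(pyramid_b[level], flow)
        flow = flow + predict_residual(pyramid_a[level], warped_b)
    return flow

pyr_a = [np.random.rand(2**k, 2**k, 8) for k in (6, 5, 4)]  # shallow to deep
pyr_b = [np.random.rand(2**k, 2**k, 8) for k in (6, 5, 4)]
zero_residual = lambda feats_a, warped_b: np.zeros(feats_a.shape[:2] + (2,))
print(estimate_flow(pyr_a, pyr_b, zero_residual).shape)  # (64, 64, 2)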

Shared weights allow the interpretation of small motions at deeper levels to be the same as large motions at shallow levels, boosting the number of pixels available for large motion supervision. Additionally, shared weights not only enable the training of powerful models that may reach a higher peak signal-to-noise ratio (PSNR), but are also needed to enable models to fit into GPU memory for practical applications.

The impact of weight sharing on image quality. Left: no sharing, Right: sharing. For this ablation we used a smaller version of our model (called FILM-med in the paper) because the full model without weight sharing would diverge as the regularization benefit of weight sharing was lost.

Fusion and Frame Generation
Once the bi-directional flows are estimated, we warp the two feature pyramids into alignment. We obtain a concatenated feature pyramid by stacking, at each pyramid level, the two aligned feature maps, the bi-directional flows and the input images. Finally, a U-Net decoder synthesizes the interpolated output image from the aligned and stacked feature pyramid.
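
Continuing the illustrative sketches above, the fusion input amounts to a per-level concatenation of the aligned features, the bi-directional flows and the pyramid-resized input images; the decoder itself is a learned U-Net and is only named here as a placeholder:

import numpy as np

def build_decoder_input(aligned_a, aligned_b, flows_to_a, flows_to_b, images_a, images_b):
    # Stack, at each pyramid level, the two aligned feature maps, the
    # bi-directional flows and the input images along the channel axis.
    stacked = []
    for fa, fb, wa, wb, ia, ib in zip(aligned_a, aligned_b, flows_to_a,
                                      flows_to_b, images_a, images_b):
        stacked.append(np.concatenate([fa, fb, wa, wb, ia, ib], axis=-1))
    return stacked  # fed to a U-Net decoder that synthesizes the middle frame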

FILM Architecture. FEATURE EXTRACTION: we extract scale-agnostic features. The features with matching colors are extracted using shared weights. FLOW ESTIMATION: we compute bi-directional flows using shared weights across the deeper pyramid levels and warp the features into alignment. FUSION: A U-Net decoder outputs the final interpolated frame.

Loss Functions
During training, we supervise FILM by combining three losses. First, we use the absolute L1 difference between the predicted and ground-truth frames to capture the motion between input images. However, this produces blurry images when used alone. Second, we use perceptual loss to improve image fidelity. This minimizes the L1 difference between the ImageNet pre-trained VGG-19 features extracted from the predicted and ground truth frames. Third, we use Style loss to minimize the L2 difference between the Gram matrix of the ImageNet pre-trained VGG-19 features. The Style loss enables the network to produce sharp images and realistic inpaintings of large pre-occluded regions. Finally, the losses are combined with weights empirically selected such that each loss contributes equally to the total loss.
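
The sketch below illustrates how the three terms can be combined, assuming a vgg_features callable that returns a list of feature maps from an ImageNet pre-trained VGG-19; the loss weights here are placeholders, not the empirically selected values from the paper:

import numpy as np

def gram_matrix(feat):
    # (H, W, C) feature map -> (C, C) Gram matrix, normalized by H * W.
    h, w, c = feat.shape
    flat = feat.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def film_loss(pred, target, vgg_features, w_l1=1.0, w_vgg=1.0, w_style=1.0):
    l1 = np.abs(pred - target).mean()                      # pixel L1 term
    pf, tf = vgg_features(pred), vgg_features(target)
    perceptual = np.mean([np.abs(p - t).mean() for p, t in zip(pf, tf)])
    style = np.mean([np.square(gram_matrix(p) - gram_matrix(t)).mean()
                     for p, t in zip(pf, tf)])             # Gram-matrix L2 term
    return w_l1 * l1 + w_vgg * perceptual + w_style * style

# Example with a dummy "feature extractor" that just returns the image itself.
img_a, img_b = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
print(film_loss(img_a, img_b, lambda x: [x]))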

Shown below, the combined loss greatly improves sharpness and image fidelity when compared to training FILM with L1 loss and VGG losses. The combined loss maintains the sharpness of the tree leaves.

FILM’s combined loss functions. L1 loss (left), L1 plus VGG loss (middle), and Style loss (right), showing significant sharpness improvements (green box).

Image and Video Results
We evaluate FILM on an internal near-duplicate photos dataset that exhibits large scene motion. Additionally, we compare FILM to recent frame interpolation methods: SoftSplat and ABME. FILM performs favorably when interpolating across large motion. Even in the presence of motion as large as 100 pixels, FILM generates sharp images consistent with the inputs.

Frame interpolation with SoftSplat (left), ABME (middle) and FILM (right) showing favorable image quality and temporal consistency.
Large motion interpolation. Top: 64x slow motion video. Bottom (left to right): The two input images blended, SoftSplat interpolation, ABME interpolation, and FILM interpolation. FILM captures the dog’s face while maintaining the background details.

Conclusion
We introduce FILM, a large motion frame interpolation neural network. At its core, FILM adopts a scale-agnostic feature pyramid that shares weights across scales, which allows us to build a “scale-agnostic” bi-directional motion estimator that learns from frames with normal motion and generalizes well to frames with large motion. To handle wide disocclusions caused by large scene motion, we supervise FILM by matching the Gram matrix of ImageNet pre-trained VGG-19 features, which results in realistic inpainting and crisp images. FILM performs favorably on large motion, while also handling small and medium motions well, and generates temporally smooth high quality videos.

Try It Out Yourself
You can try out FILM on your photos using the source code, which is now publicly available.

Acknowledgements
We would like to thank Eric Tabellion, Deqing Sun, Caroline Pantofaru and Brian Curless for their contributions. We thank Marc Comino Trinidad for his contributions on the scale-agnostic feature extractor, Orly Liba and Charles Herrmann for feedback on the text, Jamie Aspinall for the imagery in the paper, and Dominik Kaeser, Yael Pritch, Michael Nechyba, William T. Freeman, David Salesin, Catherine Wah, and Ira Kemelmacher-Shlizerman for their support. Thanks to Tom Small for creating the animated diagram in this post.

Source: Google AI Blog


New ways we’re making speech recognition work for everyone

Voice-activated technologies, like Google Home or the Google Assistant, can help people do things like make a phone call to someone, adjust the lighting in their house, or play a favorite song — all with the sound of their voice. But these technologies may not work as well for the millions of people around the world who have non-standard speech. In 2019 we launched our research initiative Project Euphonia with the aim of finding ways to leverage AI to make speech recognition technology more accessible.

Today, we’re expanding this commitment to accessibility through our involvement in the Speech Accessibility Project, a collaboration between researchers at the University of Illinois Urbana-Champaign and five technology companies, including Google. The university is working with advocacy groups, like Team Gleason and the Davis Phinney Foundation, to create datasets of impaired speech that can help accelerate improvements to automated speech recognition (ASR) for the communities these organizations support.

Since the launch of Project Euphonia, we’ve had the opportunity to work with community organizations to compile a collection of speech samples from over 2,000 people. This collection of utterances has allowed Project Euphonia researchers to adapt standard speech recognition systems to understand non-standard speech more accurately, and ultimately reduce median word error rates by an average of more than 80%. These promising results created the foundation for Project Relate, an Android app that allows people to submit samples of their voice and receive a personalized speech recognition model that more accurately understands their speech. It also encouraged the expansion of Project Euphonia to include additional languages like French, Japanese, and Spanish.

There’s still a lot to be done to develop ASR systems that can understand everyone’s voice — regardless of speech pattern. However, it’s clear that larger, more diverse datasets and collaboration with the communities we want to reach will help get us to where we want to go. That is why we’re making it easy for Project Euphonia participants to share copies of their recordings with the Speech Accessibility Project. Our hope is that by making these datasets available to research and development teams, we can help improve communication systems for everyone, including people with disabilities.

Flutter SLSA Progress & Identity and Access Management through Infrastructure As Code

We are excited to announce several new achievements in Dart and Flutter's mission to harden security. We have achieved Supply Chain Levels for Software Artifacts (SLSA) Level 2 security on Flutter’s Cocoon application, reduced our Identity and Access Management permissions to the minimum required access, and implemented Infrastructure-as-Code to manage permissions for some of our applications. These achievements follow our recent success in enabling Allstar and Security Scorecards.

Highlights

Achieving Flutter’s Cocoon SLSA Level 2: The Cocoon application provides continuous integration orchestration for Flutter Infrastructure. Cocoon also helps integrate several CI services with GitHub and provides tools to make GitHub development easier. Achieving SLSA Level 2 for Cocoon means we have addressed all the security concerns of levels 1 and 2 across the application. Under SLSA Level 2, Cocoon has “extra resistance to specific threats” to its supply chain. The Google Open Source Security team has audited and validated our achievement of SLSA Level 2 for Cocoon.


Implementing Identity & Access Management (IAM) via Infrastructure-as-Code: We have implemented additional security hardening features by onboarding docs-flutter-dev, master-docs-flutter-dev, and flutter-dashboard to use Identity and Access Management through an Infrastructure-as-Code system. These projects host applications, provide public documentation for Flutter, and contain a dashboard website for Flutter build status.

Using our Infrastructure-as-Code approach, security permission changes require code changes, ensuring approval is granted before the change is made. This also means that changes to security permissions are audited through source control and contain associated reasoning for the change. Existing IAM roles for these applications have been pared so that the applications follow the Principle of Least Privilege.

Advantages

  • Achieving SLSA Level 2 for Cocoon means we have addressed all the security concerns of levels 1 and 2 across the application. Under SLSA Level 2, Cocoon has “extra resistance to specific threats” to its supply chain.
  • Provenance is now generated for both the flutter-dashboard and auto-submit artifacts through Cocoon’s automated build process. Provenance on these artifacts shows proof of their code source and tamper-proof build evidence. This work helps harden the security of the multiple tools used during the Cocoon build process: Google Cloud Platform, Cloud Build, App Engine, and Artifact Registry.
  • Overall, we addressed 83% of all SLSA requirements across all levels for the Cocoon application. We have identified the work across the application that will need to be completed for each level and category of SLSA compliance. Because of this, we know we are well positioned to continue future work toward SLSA Level 4.

Learnings and Best Practices

  1. Relatively small changes to the Cocoon application’s build process significantly increased the security of its supply chain. Google Cloud Build made this simple, since provenance metadata is created automatically during the Cloud Build process.
  2. Regulating IAM permissions through code changes adds many additional benefits and can make granting first-time access simpler.
  3. Upgrading the SLSA level of an application sometimes requires varying efforts depending on the different factors of the application build process. Working towards SLSA level 4 will likely necessitate different configuration and code changes than required for SLSA level 2.

Coming Soon

Since this is the beginning of the Flutter and Dart journey toward greater SLSA level accomplishments, we hope to apply our learnings to more applications. We hope to begin work toward SLSA level 2 and beyond for more complex repositories like Flutter/flutter. Also, we hope to achieve an even higher level of SLSA compliance for the Cocoon application.

References

Supply Chain Levels for Software Artifacts (SLSA) is a security framework which outlines levels of supply chain security for an application as a checklist.

By Jesse Seales, Software Engineer – Dart and Flutter Security Working Group