Monthly Archives: September 2020

Advancing Instance-Level Recognition Research

Instance-level recognition (ILR) is the computer vision task of recognizing a specific instance of an object, rather than simply the category to which it belongs. For example, instead of labeling an image as “post-impressionist painting”, we’re interested in instance-level labels like “Starry Night Over the Rhône by Vincent van Gogh”; instead of simply “arch”, “Arc de Triomphe de l'Étoile, Paris, France”. Instance-level recognition problems exist in many domains, such as landmarks, artwork, products and logos, and have applications in visual search apps, personal photo organization, shopping and more. Over the past several years, Google has been contributing to research on ILR with the Google Landmarks Dataset and Google Landmarks Dataset v2 (GLDv2), and novel models such as DELF and Detect-to-Retrieve.

Three types of image recognition problems, with different levels of label granularity (basic, fine-grained, instance-level), for objects from the artwork, landmark and product domains. In our work, we focus on instance-level recognition.

Today, we highlight some results from the Instance-Level Recognition Workshop at ECCV’20. The workshop brought together experts and enthusiasts in this area, with many fruitful discussions, some of which covered our ECCV’20 paper “DEep Local and Global features” (DELG), a state-of-the-art image feature model for instance-level recognition, and a supporting open-source codebase for DELG and other related ILR techniques. Also presented were two new landmark challenges (on recognition and retrieval tasks) based on GLDv2, and future ILR challenges that extend to other domains: artwork recognition and product retrieval. The long-term goal of the workshop and challenges is to foster advancements in the field of ILR and push forward the state of the art by unifying research workstreams from different domains, which so far have mostly been tackled as separate problems.

DELG: DEep Local and Global Features
Effective image representations are the key components required to solve instance-level recognition problems. Often, two types of representations are necessary: global and local image features. A global feature summarizes the entire contents of an image, leading to a compact representation but discarding information about spatial arrangement of visual elements that may be characteristic of unique examples. Local features, on the other hand, comprise descriptors and geometry information about specific image regions; they are especially useful to match images depicting the same objects.
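To make the distinction concrete, here is a minimal numpy sketch (our illustration, not code from any released model) of how each representation type is typically consumed, assuming L2-normalized descriptors:

```python
import numpy as np

# A global feature is a single L2-normalized vector per image; two images
# are compared with one dot product (cosine similarity).
def global_similarity(global_a: np.ndarray, global_b: np.ndarray) -> float:
    global_a = global_a / np.linalg.norm(global_a)
    global_b = global_b / np.linalg.norm(global_b)
    return float(global_a @ global_b)

# Local features pair keypoint geometry with one descriptor per keypoint.
# Comparing images means matching descriptors across the two sets, which
# retains spatial information that a single global vector discards.
def mutual_nn_matches(desc_a: np.ndarray, desc_b: np.ndarray) -> list:
    sims = desc_a @ desc_b.T                 # (Na, Nb) similarity matrix
    nn_ab = sims.argmax(axis=1)              # best match in b for each a
    nn_ba = sims.argmax(axis=0)              # best match in a for each b
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```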

Currently, most systems that rely on both of these types of features need to extract each of them separately, using different models, which leads to redundant computations and lowers overall efficiency. To address this, we proposed DELG, a unified model for local and global image features.

The DELG model leverages a fully-convolutional neural network with two different heads: one for global features and the other for local features. Global features are obtained by pooling the feature maps of deep network layers, which in effect summarizes the salient features of the input images, making the model more robust to subtle changes in input. The local feature branch leverages intermediate feature maps to detect salient image regions, with the help of an attention module, and to produce descriptors that represent the associated localized contents in a discriminative manner.
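As a rough illustration of this two-headed design, here is a simplified TensorFlow 2 sketch. The backbone choice, the tapped layer and the pooling are stand-ins (DELG itself uses generalized-mean pooling), not the official DELG configuration:

```python
import tensorflow as tf

def build_two_headed_model(local_dim: int = 128) -> tf.keras.Model:
    # Shared fully-convolutional backbone. We tap an intermediate feature
    # map for the local branch and the deepest map for the global branch;
    # the specific layer here is our choice, not the paper's.
    backbone = tf.keras.applications.ResNet50(include_top=False, weights=None)
    taps = tf.keras.Model(
        inputs=backbone.input,
        outputs=[backbone.get_layer("conv4_block6_out").output,  # intermediate
                 backbone.output])                               # deepest

    image = tf.keras.Input(shape=(None, None, 3))
    mid_map, deep_map = taps(image)

    # Global head: pool the deep map into one compact, normalized vector.
    # (Plain average pooling is used here for brevity.)
    global_feature = tf.keras.layers.GlobalAveragePooling2D()(deep_map)
    global_feature = tf.math.l2_normalize(global_feature, axis=-1)

    # Local head: an attention score per spatial location of the
    # intermediate map, plus a dense grid of local descriptors.
    attention = tf.keras.layers.Conv2D(1, 1, activation="softplus")(mid_map)
    local_descriptors = tf.keras.layers.Conv2D(local_dim, 1)(mid_map)

    return tf.keras.Model(image, [global_feature, attention, local_descriptors])
```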

Our proposed DELG model (left). Global features can be used in the first stage of a retrieval-based system, to efficiently select the most similar images (bottom). Local features can then be employed to re-rank top results (top, right), increasing the precision of the system.

This novel design allows for efficient inference, since global and local features are extracted within a single model. For the first time, we demonstrated that such a unified model can be trained end-to-end and deliver state-of-the-art results for instance-level recognition tasks. Compared to previous global features, this method outperforms other approaches by up to 7.5% in mean average precision; and for the local feature re-ranking stage, DELG-based results are up to 7% better than previous work. Overall, DELG achieves 61.2% average precision on the recognition task of GLDv2, which outperforms all but two methods from the 2019 challenge. Note that all top methods from that challenge used complex model ensembles, while our results use only a single model.
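Putting the two stages together, a retrieval pipeline of the kind shown in the figure above might be sketched like this (our simplification; a production system would use approximate nearest-neighbor search and RANSAC-based geometric verification for the re-ranking stage):

```python
import numpy as np

def two_stage_retrieval(query_global, index_globals,
                        query_locals, index_locals, shortlist_size=100):
    """Stage 1: shortlist index images by global-feature cosine similarity.
    Stage 2: re-rank the shortlist by local-feature match count.
    All descriptors are assumed L2-normalized."""
    shortlist = np.argsort(-(index_globals @ query_global))[:shortlist_size]

    def match_count(a, b):
        # Mutual nearest-neighbor matches between two descriptor sets.
        sims = a @ b.T
        nn_ab, nn_ba = sims.argmax(axis=1), sims.argmax(axis=0)
        return int(sum(nn_ba[j] == i for i, j in enumerate(nn_ab)))

    return sorted(shortlist,
                  key=lambda i: match_count(query_locals, index_locals[i]),
                  reverse=True)
```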

TensorFlow 2 Open-Source Codebase
To foster research reproducibility, we are also releasing a revamped open-source codebase that includes DELG and other techniques relevant to instance-level recognition, such as DELF and Detect-to-Retrieve. Our code adopts the latest TensorFlow 2 releases, and makes available reference implementations for model training and inference, as well as image retrieval and matching functionality. We invite the community to use and contribute to this codebase in order to develop strong foundations for research in the ILR field.

New Challenges for Instance-Level Recognition
Focused on the landmarks domain, the Google Landmarks Dataset v2 (GLDv2) is the largest available dataset for instance-level recognition, with 5 million images spanning 200 thousand categories. By training landmark retrieval models on this dataset, we have demonstrated improvements of up to 6% in mean average precision, compared to models trained on earlier datasets. We have also recently launched a new browser interface for visually exploring the GLDv2 dataset.

This year, we also launched two new challenges within the landmark domain, one focusing on recognition and the other on retrieval. These competitions feature newly-collected test sets, and a new evaluation methodology: instead of uploading a CSV file with pre-computed predictions, participants have to submit models and code that are run on Kaggle servers, to compute predictions that are then scored and ranked. The compute restrictions of this environment put an emphasis on efficient and practical solutions.

The challenges attracted over 1,200 teams, a 3x increase over last year, and participants achieved significant improvements over our strong DELG baselines. On the recognition task, the highest-scoring submission achieved a 43% relative increase in average precision, and on the retrieval task, the winning team achieved a 59% relative improvement in mean average precision. This latter result was achieved via a combination of more effective neural networks, pooling methods and training protocols (see more details on the Kaggle competition site).

In addition to the landmark recognition and retrieval challenges, our academic and industrial collaborators discussed their progress on developing benchmarks and competitions in other domains. A large-scale research benchmark for artwork recognition is under construction, leveraging The Met’s Open Access image collection, and with a new test set consisting of guest photos exhibiting various photometric and geometric variations. Similarly, a new large-scale product retrieval competition will capture various challenging aspects, including a very large number of products, a long-tailed class distribution and variations in object appearance and context. More information on the ILR workshop, including slides and video recordings, is available on its website.

With this research, open source code, data and challenges, we hope to spur progress in instance-level recognition and enable researchers and machine learning enthusiasts from different communities to develop approaches that generalize across different domains.

Acknowledgements
The main Google contributors to this project are André Araujo, Cam Askew, Bingyi Cao, Jack Sim and Tobias Weyand. We’d like to thank the co-organizers of the ILR workshop: Ondrej Chum, Torsten Sattler, Giorgos Tolias (Czech Technical University), Bohyung Han (Seoul National University), Guangxing Han (Columbia University) and Xu Zhang (Amazon); our collaborators on the artworks dataset, Nanne van Noord, Sarah Ibrahimi (University of Amsterdam) and Noa Garcia (Osaka University); as well as our collaborators from the Metropolitan Museum of Art: Jennie Choi, Maria Kessler and Spencer Kiser. For the open-source TensorFlow codebase, we’d like to thank recent contributors Dan Anghel, Barbara Fusinska, Arun Mukundan, Yuewei Na and Jaeyoun Kim for their help. We are grateful to Will Cukierski, Phil Culliton and Maggie Demkin for their support with the landmarks Kaggle competitions. We’d also like to thank Ralph Keller and Boris Bluntschli for their help with data collection.

Source: Google AI Blog


Googlers get creative while working from home

When the going gets tough, the tough bake sourdough bread. Or take up knitting. Or just really get into a new video game. In the months since the COVID-19 pandemic left many of us working from home and social distancing cut down on our calendars, we’ve had plenty of time to pick up a few new hobbies here and there. Others have spent time figuring out how to adapt their passions to the inside of their homes. And that’s the case for Googlers, too, who are still playing in orchestras and working on arts and crafts in quarantine. Here are a few inspiring projects Googlers are working on in their spare time, from home. 

Dancing on their own, together

Incognito Mode dance troupe

Last year, a group of 20 San Francisco-area Googlers got together to compete in a local dance competition. They called themselves Incognito Mode and won second place. Since then, they’ve performed in showcases both inside and outside the office, but the pandemic put a stop to performing in person anytime soon. Instead, they recorded a dance video from their homes, dodging friends, roommates and pets in the process. Each of the 18 participants choreographed a portion of the routine, and they later edited the footage together. “We faced new challenges of dancing together virtually, but it also allowed us to connect in ways we wouldn’t have otherwise,” says Jason Scott, head of Google’s U.S. startup developer ecosystem and one of the group’s creative directors. “Many of our members now live around the country, but remote dance projects have let them continue dancing with us.”

A work-from-home virtual orchestra

In the summer of 2016, around 30 Googlers picked up their instruments and played in The Googler Orchestra’s very first concert. Ever since then, they’ve rehearsed weekly and grown in numbers, with their last in-person performance featuring 80 Googler musicians. After Googlers started working from home, one orchestra member posted a call to get people to play together virtually. That started the Googler Virtual Orchestra, which has increased the group’s membership; their third recording will feature more than 100 musicians across three countries. 

Members each individually record their parts and then edit the footage together into one track. “It’s a logistical challenge,” says Colton Provias, the group’s lead audio engineer and a software engineer based in Sunnyvale, California. “It takes about three months from first discussions of what piece to play through the released video.”

The group intends to continue their work-from-home performances, potentially adding other instruments or even a choir. “It speaks to the many talents that Googlers have, not just in the workplace, but outside of it too,” says Derek Wu, the orchestra’s founder and a software engineer based in Palo Alto, California. “The orchestra, for myself and others, allows everyone to unite together and create music that as a whole is greater than the sum of its parts.”

Comic relief from the pandemic’s stresses

Gao Fang comic

Gao Fang, who works in information security from Google’s Singapore office, had never drawn a comic before she started working from home in March. “Before the pandemic, I could roam around and sketch landscapes,” she says. “Then the lockdown happened and there was only that much I could sketch in my apartment. My hands got itchy for things to draw, and since I would like to keep a diary of this historical event, it's a natural step to record my days with some drawings.” 


She went on to draw more than 80 comics while staying at home, and the practice became a way to cope with living in isolation. Gao Fang’s comics touch on topics like awkward video chat moments and how stressful it can be to keep up with global news. Many of her sketches feature a rabbit as a main character, which she says is a stand-in for herself. “When I woke up every day to frustrating news around the world, this little bunny did an amazing job keeping me company and guarding my sanity,” she says.

Focusing on the small things—the really small things

Miniature sculptures

Adam Stoves, who works on the Real Estate and Workplace Services team in New York, has been working from his 600-square-foot apartment alongside his wife and their toddler. Back in May, on a whim, he bought a pack of Play-Doh to entertain his daughter, but it ended up entertaining the parents, too. He and his wife started crafting miniature sculptures, which they now share online. They’ve created miniature foods, animals and even a teensy face mask. “Our daughter will pitch in from time to time, but her true talent lies indisputably in being the cutest hand model ever,” Adam says. “We have a limited window where she remains attentive, so we do a little chant, ‘Big flat hand! Big flat hand!’, when it’s time to photograph. It helps sharpen her toddler focus.”

MediaPipe 3D Face Transform

Posted by Kanstantsin Sokal, Software Engineer, MediaPipe team

Earlier this year, the MediaPipe team released the Face Mesh solution, which estimates the approximate 3D face shape via 468 landmarks in real time on mobile devices. In this blog post, we introduce a new face transform estimation module that establishes a researcher- and developer-friendly semantic API useful for determining the 3D face pose and attaching virtual objects (like glasses, hats or masks) to a face.

The new module establishes a metric 3D space and uses the landmark screen positions to estimate common 3D face primitives, including a face pose transformation matrix and a triangular face mesh. Under the hood, a lightweight statistical analysis method called Procrustes analysis is employed to drive robust, performant and portable logic. The analysis runs on the CPU and has a minimal speed/memory footprint on top of the original Face Mesh solution.
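For reference, the core of a rigid Procrustes/Kabsch alignment fits in a few lines of numpy. This is a sketch of the general technique, not MediaPipe's actual (weighted) implementation:

```python
import numpy as np

def rigid_procrustes(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Estimate a 4x4 rigid transform (rotation + translation) aligning
    `source` (N, 3) onto `target` (N, 3) in the least-squares sense."""
    src_mean, tgt_mean = source.mean(axis=0), target.mean(axis=0)
    src_c, tgt_c = source - src_mean, target - tgt_mean

    # SVD of the cross-covariance matrix gives the optimal rotation.
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(u @ vt))          # guard against reflections
    rotation = (u @ np.diag([1.0, 1.0, d]) @ vt).T

    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = tgt_mean - rotation @ src_mean
    return transform
```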


Figure 1: An example of virtual mask and glasses effects, based on the MediaPipe Face Mesh solution.

Introduction

The MediaPipe Face Landmark Model performs single-camera face landmark detection in the screen coordinate space: the X and Y coordinates are normalized screen coordinates, while the Z coordinate is relative and is scaled like the X coordinate under the weak perspective projection camera model. While this format is well-suited for some applications, it does not directly enable crucial features like aligning a virtual 3D object with a detected face.

The newly introduced module moves away from the screen coordinate space towards a metric 3D space and provides the necessary primitives to handle a detected face as a regular 3D object. By design, you'll be able to use a perspective camera to project the final 3D scene back into the screen coordinate space with a guarantee that the face landmark positions are not changed.

Metric 3D Space

The Metric 3D space established within the new module is a right-handed orthonormal metric 3D coordinate space. Within the space, there is a virtual perspective camera located at the space origin and pointed in the negative direction of the Z-axis. It is assumed that the input camera frames are observed by exactly this virtual camera, and therefore its parameters are later used to convert the screen landmark coordinates back into the Metric 3D space. The virtual camera parameters can be set freely; however, for better results it is advised to set them as close to the real physical camera parameters as possible.
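As an illustration of the screen-to-metric conversion, here is a sketch of unprojecting one landmark through such a virtual camera. The function name and parameters are ours, and the module's real conversion involves more than this single step:

```python
import numpy as np

def unproject(screen_xy, depth_z, fov_y_degrees=60.0, aspect=1.0):
    """Map a normalized screen coordinate (x, y in [0, 1]) plus an assumed
    metric depth (negative Z, in front of the camera) into the metric 3D
    space of a virtual perspective camera at the origin looking down -Z."""
    # Convert [0, 1] screen coordinates to [-1, 1] normalized device coords.
    ndc_x = 2.0 * screen_xy[0] - 1.0
    ndc_y = 1.0 - 2.0 * screen_xy[1]   # screen Y grows downward

    # Half-extents of the view frustum at depth |z| scale with the FOV,
    # which is why matching the real camera's parameters matters.
    tan_half_fov = np.tan(np.radians(fov_y_degrees) / 2.0)
    x = ndc_x * tan_half_fov * aspect * abs(depth_z)
    y = ndc_y * tan_half_fov * abs(depth_z)
    return np.array([x, y, depth_z])
```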


Figure 2: A visualization of multiple key elements in the metric 3D space. Created in Cinema 4D.

Canonical Face Model

The Canonical Face Model is a static 3D model of a human face that follows the 3D face landmark topology of the MediaPipe Face Landmark Model. The model serves two important functions:

  • Defines metric units: the scale of the canonical face model defines the metric units of the Metric 3D space. The metric unit used by the default canonical face model is a centimeter;
  • Bridges static and runtime spaces: the face pose transformation matrix is, in fact, a linear map from the canonical face model into the runtime face landmark set estimated on each frame. This way, virtual 3D assets modeled around the canonical face model can be aligned with a tracked face by applying the face pose transformation matrix to them.
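In practice, attaching a virtual asset then reduces to one matrix multiplication per vertex. A minimal sketch (the function name is ours):

```python
import numpy as np

def place_asset(asset_vertices: np.ndarray, face_pose: np.ndarray) -> np.ndarray:
    """Align a virtual asset, modeled around the canonical face model (in
    centimeters), with the tracked face by applying the 4x4 face pose
    transformation matrix to every vertex."""
    homogeneous = np.hstack([asset_vertices,               # (N, 3) -> (N, 4)
                             np.ones((len(asset_vertices), 1))])
    return (homogeneous @ face_pose.T)[:, :3]              # back to (N, 3)
```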

Face Transform Estimation

The face transform estimation pipeline is a key component, responsible for estimating face transform data within the Metric 3D space. On each frame, the following steps are executed in the given order:

  • Face landmark screen coordinates are converted into the Metric 3D space coordinates;
  • Face pose transformation matrix is estimated as a rigid linear mapping from the canonical face metric landmark set onto the runtime face metric landmark set, in a way that minimizes the difference between the two;
  • A face mesh is created using the runtime face metric landmarks as the vertex positions (XYZ), while both the vertex texture coordinates (UV) and the triangular topology are inherited from the canonical face model.
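Composing the steps, the per-frame logic might look like the following sketch. Here `unproject` and `rigid_procrustes` refer to the illustrative helpers sketched earlier in this post, and the composition itself is our simplification, not the module's literal code:

```python
import numpy as np

def estimate_face_transform(screen_landmarks, landmark_depths,
                            canonical_landmarks):
    # Step 1: screen landmark coordinates -> Metric 3D space coordinates.
    runtime_metric = np.array([unproject(xy, z) for xy, z
                               in zip(screen_landmarks, landmark_depths)])

    # Step 2: rigid map from the canonical landmark set onto the runtime
    # landmark set, minimizing the difference between the two.
    face_pose_matrix = rigid_procrustes(canonical_landmarks, runtime_metric)

    # Step 3: the runtime metric landmarks become the mesh vertex positions;
    # UV coordinates and triangle topology are reused from the canonical
    # face model, so only the vertices need to be returned here.
    return face_pose_matrix, runtime_metric
```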

Effect Renderer

The Effect Renderer is a component that serves as a working example of a face effect renderer. It targets the OpenGL ES 2.0 API to enable real-time performance on mobile devices and supports the following rendering modes:

  • 3D object rendering mode: a virtual object is aligned with a detected face to emulate an object attached to the face (example: glasses);
  • Face mesh rendering mode: a texture is stretched on top of the face mesh surface to emulate a face painting technique.

In both rendering modes, the face mesh is first rendered as an occluder straight into the depth buffer. This step helps to create a more believable effect by hiding elements that should be occluded behind the face surface.
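In OpenGL terms, the occluder pass boils down to drawing with color writes disabled. A sketch using PyOpenGL, assuming an active GL context; `draw_face_mesh` and `draw_effect` are hypothetical callables that issue the actual draw calls:

```python
from OpenGL.GL import (GL_DEPTH_TEST, GL_FALSE, GL_TRUE,
                       glColorMask, glEnable)

def render_face_effect(draw_face_mesh, draw_effect):
    glEnable(GL_DEPTH_TEST)

    # Pass 1: render the face mesh into the depth buffer only. With color
    # writes disabled, it acts as an invisible occluder.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    draw_face_mesh()
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)

    # Pass 2: render the effect normally; fragments that fall behind the
    # face surface now fail the depth test and stay hidden.
    draw_effect()
```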


Figure 3: An example of face effects rendered by the Face Effect Renderer.

Using the Face Transform Module

The face transform estimation module is available as a part of the MediaPipe Face Mesh solution. It comes with face effect application examples, available as graphs and mobile apps on Android or iOS. If you wish to go beyond examples, the module contains generic calculators and subgraphs - those can be flexibly applied to solve specific use cases in any MediaPipe graph. For more information, please visit our documentation.
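For instance, the underlying Face Mesh landmarks can be accessed from Python roughly as follows. This is a hedged sketch against the MediaPipe Python solution API available in recent releases; the transform module itself is assembled from graphs and calculators:

```python
import cv2
import mediapipe as mp

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    image = cv2.imread("face.jpg")  # hypothetical input image
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # The first of the 468 landmarks of the first detected face,
        # in normalized screen coordinates.
        landmark = results.multi_face_landmarks[0].landmark[0]
        print(landmark.x, landmark.y, landmark.z)
```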

Follow MediaPipe

We look forward to publishing more blog posts related to new MediaPipe pipeline examples and features. Please follow the MediaPipe label on the Google Developers Blog and the Google Developers Twitter account (@googledevs).

Acknowledgements

We would like to thank Chuo-Ling Chang, Ming Guang Yong, Jiuqiang Tang, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich and Matthias Grundmann for contributing to this blog post.

Building on our workplace commitments

Editor’s Note: The following email was sent to the company today from Eileen Naughton, VP of People Operations. 

Hi Googlers,

Over the past several years, we have been taking a harder line on inappropriate conduct, and have worked to provide better support to the people who report it. Protecting our workplace and culture means getting both of these things right, and in recent years we’ve worked hard to set and uphold higher standards for the whole company. Thank you for your clear feedback as we’ve advanced this work.

The changes we’ve made to build a more equitable and respectful workplace include overhauling the way we handle and investigate employee concerns, introducing new care programs for employees who report concerns, and making arbitration optional for Google employees.

In late 2018, Alphabet’s Board responded to employee concerns by overseeing a comprehensive review of policies and practices related to sexual harassment, sexual misconduct, and retaliation. An independent committee of the Board also reviewed claims raised by shareholders in early 2019 about past workplace misconduct issues. Today we’re committing to five guiding principles and a list of detailed changes to our workplace policies and practices agreed to by the committee. These principles and improvements incorporate input from both employees and shareholders. 

Below are some of the key changes we’re making.

  • We’re setting up a new DEI Advisory Council to advise on and oversee these efforts, with experts Judge Nancy Gertner (retired), Grace Speights, and Fred Alvarez joining Sundar, Chief Diversity Officer Melonie Parker, SVP of Global Affairs Kent Walker, and SVP of Core Jen Fitzpatrick. They will report to the Leadership Development and Compensation Committee of the Board (LDCC) on a quarterly basis on the company’s progress against these commitments.

  • We’re building on our current practice of prohibiting severance for anyone terminated for any form of misconduct, and expanding the prohibition to anyone who is the subject of a pending investigation for sexual misconduct or retaliation. Managers will also receive guidance instructing them on how misconduct should impact an employee's performance evaluation, compensation decisions, and promotion outcomes. 

  • If there are allegations against any executives, a specialist team will be assigned and the results of any case will be reported to the Board’s Audit Committee.

  • We’ll ensure that $310 million in funding goes toward diversity, equity and inclusion initiatives and programs focused on increasing access to computer science education and careers; continuing to build a more representative workforce; fostering a respectful, equitable and inclusive workplace culture; and helping businesses from underrepresented groups to succeed in the digital economy and tech industry.

Other Bets are required to adhere to our new principles too. Changes they are making now include making arbitration optional for all employees, temporary staff, vendors, and independent contractors for individual harassment, discrimination, and retaliation disputes with Alphabet; as well as following the new Alphabet model for executive investigations. Every Alphabet company (including Google and all Other Bets) will be required to undertake an annual review of their own individual policies and practices to ensure they are consistent with Alphabet’s guiding principles in this area.

Together, Sundar, the DEI Advisory Council, and the Board will uphold Alphabet’s unwavering commitment to prohibit and respond effectively to complaints of sexual harassment, discrimination, and retaliation and promote diversity, equity, and inclusion in the workplace.

Recent years have involved a lot of introspection and work to make sure we’re providing a safe and inclusive workplace for every employee. That doesn’t stop here and you’ll receive reports on our progress as we move forward. I’m grateful to everyone, especially our employees and shareholders, for providing us with feedback, and for making sure that the way we tackle these vital issues is better today than it was in the past.

Eileen

The rise and fall and rise again of “now more than ever”

One of my favorite Google tools is the Google Books Ngram Viewer, or “Ngrams.” Originally created in 2009 by members of the Google Books team, Ngrams shows how books and other pieces of literature have used certain words or phrases over time. You can chart the rise (and fall) of colloquialisms like “sockdollager” or “take the egg”—or even “that slaps.”

“Ngrams simply aggregates the use of words or phrases across the entire Google Books dataset,” says Michael Ballbach, a software engineer who works on Google Books. “It then allows users to graph the usage of those words or phrases through time.” Each word being searched is a “gram” that the tool searches across its database. 
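If you'd rather pull the numbers behind a chart than eyeball them, the viewer's chart data can be fetched over HTTP. The JSON endpoint below is unofficial and undocumented (an assumption on my part), so treat this Python sketch as fragile:

```python
import requests

# Parameters mirror the web UI's URL and may change without notice.
params = {
    "content": "now more than ever",
    "year_start": 1700,
    "year_end": 2019,
    "corpus": 26,       # assumed to select the 2019 English corpus
    "smoothing": 3,
}
resp = requests.get("https://books.google.com/ngrams/json", params=params)
for series in resp.json():
    # Each series pairs the phrase with one relative frequency per year.
    print(series["ngram"], series["timeseries"][:5])
```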

Ngrams’s capabilities have grown recently, thanks to an update in 2019 that added approximately 19 million more books to its dataset. “For the English language corpus, that adds trillions of words,” Michael says. For context, that’s roughly the equivalent of three million copies of “War and Peace”!

But there’s one phrase—er, four grams—that I’m particularly interested in, one that’s been surfacing more and more during these...challenging, unprecedented, uncertain, unusual times: “now more than ever.”

Perhaps you’ve even noticed it? “Now more than ever” has invaded our vernacular; in fact, I’m sure you’ve read it (or a similar phrase) in a Keyword post or two. So I decided to dive into Ngrams to see if “now more than ever” is showing up...now more than ever. While we’re currently experiencing a spike, there have been others: in the early 1940s, around 1915-1920 and in 1866. Between 1805 and 1809 it was particularly high—nearly as high as it is today.

And then of course there was the banner year of 1752, when things peaked for “now more than ever.” 

Now more than ever

Today, as we’re living through a pandemic, wildfires, racial injustice and so, so much more, it feels obvious why we’re increasingly saying and hearing “now more than ever,” but what about back then? What things made people feel like everything had a certain crucialness? 

While the Ngrams team doesn’t investigate the causes of the booms and busts of words and phrases, for this particular exercise, I thought a little about what could have possibly been happening during these periods of “now more than ever.” I can imagine how, in the 1940s, World War II changed the lives of people everywhere. 1915-1920 was marked by World War I—and of course, the influenza pandemic of 1918. In 1866, the United States was emerging from civil war. And 1805 to 1809 was a heady time for the young U.S. government.

“If you have the time or inclination, you can use Books Search to try and get some insights,” Michael explains. So I plugged in “now more than ever,” searched under Books, and toggled the time settings for 1751 to 1753 to try and see if I could glean anything about the peak year of 1752. And while I can’t say I know what about that time really pushed the “now more than everness,” a handful of British literary journals were definitely using the phrase. 

But things don’t stay at a “now more than ever” pitch. From 1955 to 1996, “now more than ever” was relatively uncommon, before climbing steeply through the late ’90s and early aughts to today.

Maybe you, like me, may find some comfort in knowing that this moment in time—as unprecedented, challenging and uncertain as it may be—is not the only one in which everything is “now more than ever.” Maybe you, too, can appreciate the light Ngrams sheds on the lives of the words we choose. 

“I think that language is evolving just like society is evolving. That is, language is a reflection of the society that used it, and vice versa,” Michael says. “How the use of language changes over time reflects at least some of the changes taking place in the wider world. Having better tools to look at one can hopefully lead to insights in the other.” 

And if you’re feeling very “now more than ever,” just remember: This too shall pass.


Small business and Australia’s media bargaining code

In what has been an incredibly tough year, Australia’s small and medium businesses have kept our economic engine going—protecting jobs and providing vital services in their communities. 


Throughout this time, we’ve made sure business owners know Google’s tools and services are there to help. Small businesses are using our affordable ad services to advertise where they couldn’t before, and connecting with new customers via free listings on Search and Maps. We’ve also helped businesses operate online through national digital skills training.


As Australia starts to look towards economic recovery, we’re concerned that many of these businesses will be affected by a new law being proposed by the Australian Government—the News Media Bargaining Code—which would put the digital tools they rely on at risk. 


While we don’t oppose a code governing the relationship between digital platforms and news businesses, the current draft code has implications for everyone, not just digital platforms and media businesses. We wanted to explain our concerns and how we believe they can be addressed in a way that works for all businesses.  


How does the code impact small businesses? 


The draft code affects small businesses because it would weaken Google services like Search and YouTube. These services created more than 130 million connections between businesses and potential customers in 2019, and contributed to the $35 billion in benefits we generated for more than 1.3 million businesses across the country. But they rely on Search and YouTube working the same for everyone—so that people can trust that the results they see are useful and authoritative, and businesses know they’re on a level playing field.


Under the draft code, we’d be forced to give some news businesses privileged access to data and information—including about changes to our search algorithms—enabling them to feature more prominently in search results at the expense of other businesses, website owners and creators. 



For example, a cafe owner might have made their way to the top spot in Search results for a particular query over time, thanks to popularity, search interest and other signals. But if the draft code became law—giving some publishers an advanced look at algorithm changes—they could potentially take advantage of this and make their web content appear more prominently in search results.


Likewise, if you ran an independent travel website that provides advice to people on how to plan local holidays, you might lose out to a newspaper travel section because they’ve had a sneak peek at changes to how Search works.


That’s an unfair advantage for news businesses. Businesses of all kinds would face an additional hurdle at a time when it’s more important than ever to connect with their customers.

A bad precedent

The draft code would also create a mandatory negotiation and arbitration model that only takes into account the costs and value created by one party—news businesses. The code’s provisions mean costs are uncapped and unquantifiable, and there is no detail on what formula is used to calculate payment.

Regulation framed in this way would set a bad precedent. Most businesses support sensible regulation—but not heavy-handed rules that favour one group of companies over all others.  

Australian entrepreneurs like Mike Cannon-Brookes, Matt Barrie and Daniel Petre have made the point that a market intervention like this would deter international companies from operating in Australia, risking jobs and investment just as we need to be focusing on the recovery from COVID-19. 

And it’s not just business leaders who’ve spoken out. Over the last few weeks, we’ve heard from a cross-section of Australia’s business community, from local retailers and restaurants to YouTube creators, and we’re deeply grateful for their support.    

The way forward

The issues with the draft code are serious, but we believe they can be worked through in a way that protects full and fair access to Search and YouTube for every Australian business.  

We’ve made it clear that we want to contribute to a strong future for Australian news, and we’re engaging constructively with the Government and the ACCC to try to find a resolution—making proposals for changes that would support a workable code.

Throughout 2020, we’ve worked with business owners across Australia to help them get through the challenges of the fires and the pandemic, whether by providing digital tools, direct assistance, skills training or advice, and we hope to continue providing that support long into the future.  

We know how tough this year has been, and we’re going to keep doing everything we can to make sure that the final version of the code supports Australia’s amazing businesses.

See apps installed on managed Windows 10 devices

Quick launch summary 

You can now view a list of all apps installed on Windows 10 devices that you manage with Windows device management. The list includes when the app was first installed, the current version, and the publisher. You can use this information to identify devices that have malicious or untrusted apps on them. 

Note that this feature requires the device to be enrolled in Windows device management. Learn more about our enhanced security for Windows or how to view Windows device details in the Admin console.

Availability 

  • Available to G Suite Enterprise, G Suite Enterprise for Education, and Cloud Identity Premium customers 
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, G Suite for Nonprofits, G Suite Essentials, and Cloud Identity Free customers 

Stable Channel Update for Chrome OS

The Stable channel has been updated to 85.0.4183.131 (Platform version: 13310.91.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. Changes can be viewed here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Daniel Gagnon
Google Chrome