Tag Archives: Safety & Security

An update on our work to counter extremism in Singapore

For an example of a harmonious, multicultural society, look no further than Singapore, where people of different ethnicities, religious backgrounds and languages live and work together peacefully. It’s a remarkable achievement — one of Singapore’s great strengths as a global hub for trade, travel and technology. It’s also something that all of us in Singapore have to work hard to preserve.

At Google, and YouTube, we’re committed to doing everything we can to promote and celebrate Singapore’s diversity — and to protect it from threats. Today, in collaboration with the Ministry of Culture, Community and Youth, we’re kicking off a series of workshops developed with Ministry of Funny. The aim is to help creators from local interfaith groups and religious organizations start meaningful discussions on issues of online extremism and hate, while fostering awareness, tolerance and empathy.

Participants in the workshops will learn the basics of video production, content strategy, and data analytics, as well as how to sustain an audience on YouTube. Select organizations will receive additional support in the form of grants and mentoring by four YouTube creators: Our Grandfather Story, The Daily Ketchup Podcast, itsclarityco and Overthink.

By amplifying positive voices and constructive dialogue, we believe we can help counter the impact of online extremism — building on the steps we’re already taking.

Taking strong actions against extremism

Over recent years, YouTube has made deep investments in machine learning to enable better detection and faster removal of harmful content that breaches its guidelines. Since 2019, YouTube has removed more than 2.6 million videos for violating its policies on violent extremism, while also reducing the spread of content that comes close to violating those policies but doesn’t cross the line. YouTube also holds itself to high standards of accountability through a dedicated violent extremism section in the YouTube Community Guidelines Enforcement Report.

Across all Google products, we have long-standing policies that prohibit harmful content, including incitement to violence and hate speech. We’re working closely with other major technology companies through coalitions like the Global Internet Forum to Counter Terrorism. And we’re focused on developing other technology-based solutions. For example, teams at Jigsaw have developed the Redirect Method, an open-source program that uses targeted ads and videos uploaded by people around the world to confront online radicalization.

We’re looking forward to expanding on these efforts in collaboration with the Singapore Government, Ministry of Funny, and other leaders in the YouTube ecosystem. We see firsthand the positive impact creators make all over the world every day, and with the right support, we know they can be powerful voices for tolerance and inclusion in Singapore’s diverse communities.

Women Techmakers expands online safety education

Online violence against women goes beyond the internet, affecting society and the economy at large. It has damaging economic repercussions, from increased medical costs to lost income for victims. And it spills into the offline world: according to Google-supported research conducted by the Economist Intelligence Unit in 2020, seven percent of women have changed jobs because of online violence, and one in ten has experienced physical harm as a result of online threats.

That’s why the Women Techmakers program, which provides visibility, community and resources for women in technology, supports online safety education for women and allies. Google community manager Merve Isler, who is based in Turkey and leads Women Techmakers efforts there and across Central Asia and the Caucasus, organized the country’s first women’s online safety hackathon in 2020, which expanded to a full week of trainings and ideathons in 2021. Google community manager and Women Techmakers manager Hufsa Manawar brought online safety training to Pakistan in early 2022.

Now, Women Techmakers is providing a more structured way for women around the world to learn about online safety: a free online learning module, launched in April 2022 in honor of International Women’s Day. To build the module, I worked with my co-host Alana Fromm from Jigsaw and our teams on a series of videos covering different topics related to women’s online safety. Jigsaw is a unit within Google that explores threats to open societies and builds technological solutions.

In the online training, we begin by defining online violence and walking through the tactics bad actors use to threaten women online, including misinformation and defamation, cyberharassment and hate speech. Regardless of the tactic, the goal remains the same: to threaten and harass women into silence. We also break down the groups of people involved in online harassment and discuss the importance of surrounding oneself with allies.

In one of the videos in the series, Women Techmakers Ambassador Esrae Abdelnaby Hassan shares her story of online abuse. While she was learning cybersecurity, a mentor she trusted gave her USB drives of courses and reading material that were infected with viruses, which let him take control of her computer and record videos. He then blackmailed her, using the videos he’d taken as threats. She felt afraid and isolated, and relied on her family for support as she dealt with the harassment.

The learning module provides two codelabs: one on steps you can take to protect yourself online, and one on Perspective API, a free, open-source product built by Jigsaw and the Counter Abuse security team at Google. The first codelab provides practical guidance, and the second walks viewers through setting up Perspective API, which uses machine learning to identify toxic comments.
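To give a feel for what that second codelab covers, here is a minimal Kotlin sketch of calling Perspective API’s comments:analyze REST endpoint and reading back a toxicity score. This is an illustration rather than the codelab’s own code: it assumes you have an API key from a Google Cloud project with Perspective API enabled, and it uses the org.json library (bundled on Android, a small dependency elsewhere) for JSON handling.

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import org.json.JSONObject

// Hypothetical placeholder; real keys come from a Google Cloud project
// that has the Perspective API enabled.
const val API_KEY = "YOUR_API_KEY"

// Sends one comment to Perspective API and returns its TOXICITY summary
// score, a probability between 0.0 and 1.0.
fun scoreToxicity(comment: String): Double {
    val url = URL(
        "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=$API_KEY"
    )
    val body = JSONObject()
        .put("comment", JSONObject().put("text", comment))
        .put("requestedAttributes", JSONObject().put("TOXICITY", JSONObject()))

    val conn = (url.openConnection() as HttpURLConnection).apply {
        requestMethod = "POST"
        doOutput = true
        setRequestProperty("Content-Type", "application/json")
    }
    conn.outputStream.use { it.write(body.toString().toByteArray()) }

    val response = JSONObject(conn.inputStream.bufferedReader().readText())
    conn.disconnect()

    // The response nests one summary score per requested attribute.
    return response
        .getJSONObject("attributeScores")
        .getJSONObject("TOXICITY")
        .getJSONObject("summaryScore")
        .getDouble("value")
}

fun main() {
    // A moderation tool might hold comments above some threshold for human review.
    println(scoreToxicity("You are a wonderful person."))
}
```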

We look forward to seeing the impact of our new, easy-to-access online training, as well as what our ambassadors are able to accomplish offline as the year progresses.

Get more information about your apps in Google Play

We work hard to keep Google Play a safe, trusted space for people to enjoy the latest Android apps. Today, we’re launching a new feature, the Data safety section, where developers will be required to give people more information about how apps collect, share and secure users’ data. Users will start seeing the Data safety section in Google Play today, and developers are required to complete this section for their apps by July 20th. As developers update their apps’ functionality or change their data handling practices, they will reflect the latest information in their apps’ Data safety section.

A unified view of app safety in Google Play

We heard from users and app developers that displaying the data an app collects, without additional context, is not enough. Users want to know for what purpose their data is being collected and whether the developer is sharing user data with third parties. In addition, users want to understand how app developers are securing user data after an app is downloaded. That’s why we designed the Data safety section to allow developers to clearly mark what data is being collected and for what purpose it's being used. Users can also see whether the app needs this data to function or if this data collection is optional.

Here’s the information developers can show in the Data safety section (an illustrative sketch follows the list):

  • Whether the developer is collecting data and for what purpose.
  • Whether the developer is sharing data with third parties.
  • The app’s security practices, like encryption of data in transit and whether users can ask for data to be deleted.
  • Whether a qualifying app has committed to following Google Play’s Families Policy to better protect children in the Play store.
  • Whether the developer has validated their security practices against a global security standard (more specifically, the MASVS).
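To make the list above concrete, here is a purely illustrative Kotlin sketch of the kinds of facts a single Data safety entry covers. The type and field names are hypothetical, and the real declaration is completed through a form in Play Console rather than in code; the sketch simply mirrors the items above.

```kotlin
// Purely illustrative: a hypothetical model of one Data safety entry.
// Real declarations are made through the Play Console form, not in code.
data class DataSafetyEntry(
    val dataType: String,                  // e.g. "Approximate location"
    val collected: Boolean,                // is this data collected at all?
    val sharedWithThirdParties: Boolean,   // is it shared outside the developer?
    val purposes: List<String>,            // e.g. "App functionality", "Analytics"
    val optionalForUser: Boolean,          // can the user opt out of collection?
    val encryptedInTransit: Boolean,       // disclosed security practice
    val deletionRequestSupported: Boolean  // can users ask for the data to be deleted?
)

// Hypothetical entry a mapping app might disclose.
val locationEntry = DataSafetyEntry(
    dataType = "Approximate location",
    collected = true,
    sharedWithThirdParties = false,
    purposes = listOf("App functionality"),
    optionalForUser = false,
    encryptedInTransit = true,
    deletionRequestSupported = true
)
```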
Android phone showing the Data safety section of an app on Google Play

Putting users in control, before and after you download

Giving users more visibility into how apps collect, share and secure their data through the Data safety section is just one way we’re keeping Android users and the ecosystem safe.

We’ve also worked hard to give users control over installed apps through simple permissions features. For example, when an app asks to access your location, users can quickly and easily decide whether to grant that permission: for one-time use, only while using the app, or all the time. For sensitive permissions like camera, microphone or location data, people can go to the Android Privacy dashboard to review data access by apps.
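For developers, the flow behind that dialog looks roughly like the Kotlin sketch below, which uses the Jetpack Activity Result API to request the location permission. The system, not the app, presents the “While using the app”, “Only this time” and “Don’t allow” choices; the app only learns whether the permission was granted. The activity and helper function names here are hypothetical.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class NearbyPlacesActivity : AppCompatActivity() {

    // Registers a launcher for the system permission dialog. The user's
    // choice (one time, while using the app, or deny) is handled by the
    // system; the callback only reports granted or not granted.
    private val locationPermissionRequest =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startLocationFeatures() else showNonLocationExperience()
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.ACCESS_FINE_LOCATION
        ) == PackageManager.PERMISSION_GRANTED

        if (alreadyGranted) {
            startLocationFeatures()
        } else {
            locationPermissionRequest.launch(Manifest.permission.ACCESS_FINE_LOCATION)
        }
    }

    private fun startLocationFeatures() { /* hypothetical: begin location updates */ }
    private fun showNonLocationExperience() { /* hypothetical: degrade gracefully */ }
}
```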

Apps should help users explore the world, connect with loved ones, do work, learn something new, and more without compromising user safety. The new Data safety section, in addition to Google Play’s existing safety features, gives people the visibility and control they need to enjoy their apps.

To learn more about Google Play’s Data safety section, check out this guide.

New cookie choices in Europe

If you’ve visited a website in Europe, chances are you’ve seen a cookie consent banner. Cookies help sites remember information about your visit, so they can do things like display text in your preferred language, make sure you’re a real user and not a pesky bot, or estimate whether or not an ad campaign is working.

In the past year, regulators who interpret European laws requiring these banners, including data protection authorities in France, Germany, Ireland, Italy, Spain and the U.K., have updated their guidance for compliance. We’re committed to meeting the standards of that updated guidance and have been working with a number of these authorities.

Based on these conversations and specific direction from France’s Commission Nationale de l’Informatique et des Libertés (CNIL), we have now completed a full redesign of our approach, including changes to the infrastructure we use to handle cookies.

A box that reads, “Before you continue to YouTube,” explains cookies, and asks you to “Reject all” or “Accept all” with one click

Our new cookie banners began rolling out earlier this month on YouTube in France and will soon be coming to all Google users in Europe

Soon, anyone visiting Search and YouTube in Europe while signed out or in Incognito Mode will see a new cookie consent choice. This update, which began rolling out earlier this month on YouTube, will provide you with equal “Reject all” and “Accept all” buttons on the first screen in your preferred language. (You can also still choose to customize your choice in more detail with “More options.”)

We’ve kicked off the launch in France and will be extending this experience across the rest of the European Economic Area, the U.K. and Switzerland. Before long, users in the region will have a new cookie choice — one that can be accepted or rejected with a single click.

Not just a new button

This update meant we needed to re-engineer the way cookies work on Google sites, and to make deep, coordinated changes to critical Google infrastructure. Moreover, we knew that these changes would impact not only Search and YouTube, but also the sites and content creators who use them to help grow their businesses and make a living.

We believe this update responds to updated regulatory guidance and is aligned with our broader goal of helping build a more sustainable future for the web. We’ve committed to building new privacy-preserving technologies in the Privacy Sandbox for the same reason. We believe it is possible both to protect people’s privacy online and to give companies and developers tools to build thriving digital businesses.

Find great extensions with new Chrome Web Store badges

Since 2009, publishers have been hard at work building extensions that make Chrome more powerful, useful and customizable for users. It has always been our mission to make it easy for users to find great extensions while recognizing the publishers who create them. Today, we’re announcing two new extension badges to help us deliver on our goal: the Featured badge and the Established Publisher badge. Both badges are live on the Chrome Web Store today.

Featured badge

Picture featuring UI of Featured badge

The Featured badge is assigned to extensions that follow our technical best practices and meet a high standard of user experience and design. Chrome team members manually evaluate each extension before it receives the badge, paying special attention to the following:

  1. Adherence to Chrome Web Store’s best practices guidelines, including providing an enjoyable and intuitive experience, using the latest platform APIs and respecting the privacy of end-users.
  2. A store listing page that is clear and helpful for users, with quality images and a detailed description.

Established Publisher badge

Picture featuring UI of Established Publisher badge

The Established Publisher badge showcases publishers who have verified their identity and demonstrated compliance with the developer program policies. This badge is granted to publishers who meet the following two conditions:

  1. The publisher's identity has been verified.
  2. The publisher has established a consistent positive track record with Google services and compliance with the Developer Program Policy.

As our goal is to help users find great extensions, publishers cannot pay to receive either badge. They can, however, submit a request for their extension to be reviewed for the Featured badge on the one-stop support page (under My item → I want to nominate my extension…).

If you’re a publisher, learn more about badging and discovery on Chrome Web Store.

Hounding scammers with litigation

Over the last few years we’ve seen a rise in bad actors using the internet for illegal activities, and we see it in our work. Every single day we stop more than 100 million harmful emails from reaching our users, and we routinely work with law enforcement to combat nefarious actors. But across the web, people are caught in romance scams, loan scams, and investment scams every day — and older Americans are often the most vulnerable.

Raising public awareness can help people avoid becoming victims. But for newly emerging illicit behaviors and scams, lawsuits are an effective tool for establishing legal precedent, disrupting the tools used by scammers, and raising the consequences for bad actors. That’s why last December we used our resources to file a lawsuit to combat illegal activity in the botnet industry, and why we have used legal action to defend small businesses from scammers masquerading as Google. With these actions, we establish legal precedent that helps stop similar cyber threats and scams.

Today, we’re building on this work by taking legal action against an actor who was operating fraudulent websites and using Google products as a part of their scheme. The actor used a network of fraudulent websites that claimed to sell basset hound puppies — with alluring photos and fake customer testimonials — in order to take advantage of people during the pandemic.

This type of scheme follows a similar script to many online scams where malicious actors pretend to be someone they are not to convince victims to give them money for something they will never receive. The Better Business Bureau recently announced that pet scams now make up 35% of all online shopping scams reported to them, and this particular scam targeted people at their most vulnerable, just as the pandemic led to a record spike in people wanting to own pets. (According to Google Search Trends, searches for “Adopt a Dog” spiked at the start of the pandemic as people spent more time at home. By the end of 2020, 70% of Americans reported owning a pet.)

Sadly, this scam disproportionately targeted older Americans, who can be more vulnerable to cyberattacks. The FTC and FBI report that older people are scammed out of an estimated $650 million per year.

That’s why we’re taking proactive action to set a legal precedent, protect victims, disrupt the scammer’s infrastructure and raise public awareness. Of course, legal action is just one way we work to combat these types of scams. We build security into all of our products and use machine learning to filter new threats. And our CyberCrime Investigation Group investigates misconduct and sends referrals to law enforcement agencies, including the Department of Justice, to combat nefarious actors engaged in a wide range of scams, from pet and COVID-relief scams to romance and tech support scams.

Here are some additional steps you can take to help spot a pet scam:

  • See the pet in person (or on a video call) before paying any money. This way, you are able to see the seller and the actual pet for sale. More often than not, scammers won't comply with the request.
  • Use verified payment methods. Avoid wiring money or paying with gift cards or prepaid debit cards. And before you pay, research prices for what you’re looking to purchase. If someone is advertising a product at a deeply discounted price, you could be dealing with a fraudulent offer.
  • Reverse image search. Search to see if the item or product is a stock image or stolen photo. Using Google Chrome, place the cursor over the photo and right click, then choose the option “Search Google for image.” If that picture shows up in a number of places, you’re likely dealing with a scam.
  • Search online for the seller. Ask for the company name, number and street address. See what Google search results pop up. If you can’t find anything, the name and address are likely fake.

We will continue to work with federal and state agencies and law enforcement to ensure consumers are better protected from fraud online.

Matt Brittin on data, ethics, and privacy by design

The following is adapted from remarks delivered by Matt Brittin, President, Google EMEA, at UBA Trends Day in Brussels, on data, ethics, and privacy by design.


I first accessed the internet in 1989 — the same year Tim Berners-Lee invented the World Wide Web.

Bright text on dark backgrounds listing links to other pages of text listing more links. Thirty years later, it’s something many of us take for granted. Half of humanity is online, using tools that we could never have dreamed of. It’s open, affordable and would have seemed magical to me as a student.

But we’re running it all on a rule book that’s twenty years out of date; delight with the magic is tempered by concerns about how our data is used, and by fears of technology being used for ill rather than for good.

A century of advertising

It’s often helpful to make sense of the future by understanding the past. Throughout history, advertising has helped make all kinds of media content affordable and accessible.

About a century ago, as the global middle class was growing, modern businesses could reach potential customers at an undreamed-of scale. But reaching all those people — without knowing how many of them might be interested in your product — was expensive and inefficient.

Modern newspapers came up with ‘the bundle’: ad space sold in specific sections like ‘Auto’ or ‘Fashion’, so that car companies could speak directly to readers interested in cars, and coat sellers to readers interested in fashion.

Mass-market magazines created ways to target diverse interest sets — creating magazines or sections specifically for gardeners, those interested in the natural world, or science fiction.

And broadcasters developed increasingly differentiated ‘genre entertainment’ — a novel form that helped advertisers segment and reach viewers based on assumptions about who was watching.

All of these inventions benefited our everyday lives — bringing us our favorite magazines, TV shows or newspapers. And with measurement and data, advertisers were reassured that they were getting value from the exercise too. Ads have long funded our favorite content, and they’ve always been targeted.

Preparing for the future

That’s the model that made Google Search possible. It’s free to use, and we don’t need to know anything about you to show you a relevant ad — just that you’re searching for cycling shoes in Brussels right now. That gives you advertising that’s relevant and useful — and privacy safe.

The web has brought an explosion of content and choice. And the chance to show a different ad to people reading the same article, or watching the same show.

But the question for the web in 2022 is whether this model of advertising is good enough. With more people managing more of their lives online than ever before, the web is going through a fundamental shift. Citizens want more online privacy and control — and for the services they use to earn, and be worthy of, their trust.

That means preparing for a future without third-party cookies — by working with the industry to build and test new solutions in the Privacy Sandbox, like our latest proposal, the Topics API. These proposals make advertising on the web more private and transparent, without needing to compromise on quality or content.

The importance of distributed computing

Now, reform also means regulation — clear tools and rules. We’re grateful to be getting a steer from regulators on a full range of issues, from cookies to online ads — and for the concern it shows for user privacy.

Of course, with increased regulation comes intense engagement. Today, some are questioning whether services like Google Analytics can be properly used in Europe under the GDPR. The concern is that because it’s run by a U.S.-based company, Google Analytics can’t totally remove the possibility that the U.S. government could demand access to user data.

This is a strictly hypothetical situation — because over the past 15 years, Google Analytics has never received a request of the kind speculated about in this case. While the legal cases on this have only covered a few specific websites and their unique circumstances, others are concerned that the same logic could be applied to any U.S.-based provider or website — and indeed to any EU-U.S. data transfer.

Talk to anyone in the technical or security communities, and they will tell you that scaled cloud computing of the kind supporting these services makes data more secure, not less. Scale makes it easier to fight hackers, scammers and thieves — by expanding the signals needed to detect them. It’s how platforms can offer customers the greatest possible security and redundancy.

Today, Project Shield is a great example of that. It’s an advanced security technology that helps keep organizations safe from cyber attacks — particularly those designed to overwhelm small organizations with a flood of fake traffic.

We use Project Shield to protect at-risk organizations across the world, like news sites, human rights organizations or election monitors. Including in countries like Ukraine, where over 150 government and news websites are currently being kept safe and online by Project Shield and in surrounding countries affected by the war — so that they can continue to provide valuable information and services to people on the ground.

Here’s the kicker: like Google Analytics, the infrastructure that enables Project Shield relies on transatlantic data flows. We’re able to absorb massive attacks against individual websites by diffusing the traffic across a global network.

The very processes that enable Project Shield — a service that is protecting news and human rights organizations across Europe — are themselves being treated as suspect, on the grounds that they don’t adequately protect European users’ data from the U.S. government.

Towards a more responsible foundation

Of course, we understand that there are concerns about U.S. surveillance overreach — and we share them. Google has advocated for many years for U.S. government transparency, lawful process and surveillance reform — and we continue to fight for protections for digital citizens outside the U.S.

We’ve done so while continuing in our belief that it is possible to advance international cooperation towards shared goals and against shared threats — and to build a future based on interests and values shared by democracies on both sides of the Atlantic.

For users, advertisers and tech, this shift towards a privacy-first internet will be a good thing.

Our studies have found that when users know that their privacy is respected, they respond with increased trust and interest. Users who feel they have control over their data are two times more likely to find content relevant; and three times more likely to react positively to advertising.

For online advertising, and the internet as a whole, this is a page-turning moment. We’re getting tools and rules. Legal clarity. Codes of practice. And a regulatory dialogue. A new future of advertising is coming: one that puts privacy front and center.

Helping Ukraine

The Russian invasion of Ukraine is both a tragedy and a humanitarian disaster in the making. The international community’s response to this war continues to evolve and governments are imposing new sanctions and restrictions.

Our teams are working around the clock to support people in Ukraine through our products, defend against cybersecurity threats, surface high-quality, reliable information and ensure the safety and security of our colleagues and their families in the region.

Here are a few of the actions we’re taking.

Providing support from Google.org

Together, Google.org and Google employees are contributing $15 million in donations and in-kind support to aid relief efforts in Ukraine, including $5 million so far from our employee matching campaign and $5 million in direct grants. We’re also contributing $5 million in advertising credits to help trusted humanitarian and intergovernmental organizations connect people to important sources of aid and resettlement information.

A woman in a Red Cross uniform puts bedding in a pile on the floor

According to the Polish Red Cross, since Thursday last week over 300,000 people have arrived in Poland. (photo credit: Red Cross)

Updating Search and Maps in Ukraine

We've launched an SOS alert on Search across Ukraine. When people search for refugee and evacuation information, they will see an alert pointing them to United Nations resources for refugees and asylum seekers. We’re working with expert organizations to source helpful humanitarian information as the situation unfolds.

And after consulting with multiple sources on the ground, including local authorities, we’ve temporarily disabled some live Google Maps features in Ukraine, including the traffic layer and information about how busy places are, to help protect the safety of local communities and their citizens. We’ve also added information on refugee and migrant centers in neighboring countries.

Expanding security protections

Our security teams are on call 24/7. Russia-backed hacking and influence operations are not new to us; we’ve been taking action against them for years. Over the past 12 months alone, we’ve issued hundreds of government-backed attack warnings to people in Ukraine using products like Gmail. We’ve been particularly vigilant during the invasion and our products will continue to automatically detect and block suspicious activity.

While we have not seen meaningful changes in the levels of malicious activity in this region overall, our Threat Analysis Group (TAG) has seen threat actors refocus their efforts on Ukrainian targets. For example, we’ve seen the attackers behind the GhostWriter threat group targeting Ukrainian government and military officials. We blocked these attempts and have not seen any compromise of Google accounts as a result of this campaign.

We also automatically increased Google account security protections (including more frequent authentication challenges) for people in the region and will continue to do so as cyber threats evolve. Our Advanced Protection Program — which delivers Google’s highest level of security — is currently protecting the accounts of hundreds of high-risk users in Ukraine. And “Project Shield,” a service providing free unlimited protection against Distributed Denial of Service attacks, is already protecting over 100 Ukrainian websites, including local news services.

Promoting information quality

In this extraordinary crisis we are taking extraordinary measures to stop the spread of misinformation and disrupt disinformation campaigns online.

Beginning today, we’re blocking YouTube channels connected to RT and Sputnik across Europe. This builds on our indefinite pause of monetization of Russian state-funded media across our platforms, meaning media outlets such as RT are not allowed to monetize their content or advertise on our platforms.

We have also significantly limited recommendations globally for a number of Russian state-funded media outlets across our platforms. And in the past few days, YouTube has removed hundreds of channels and thousands of videos for violating its Community Guidelines, including a number of channels engaging in coordinated deceptive practices.

Of course, we are working not just to reduce the reach of unreliable information, but also to make reliable and trustworthy information readily available. Our systems are built to prioritize the most authoritative information in moments of crisis and rapidly changing news. When people around the world search for topics related to the war in Ukraine on Search or YouTube, our systems prominently surface information, videos and other key context from authoritative news sources.

Helping our colleagues in Ukraine

We remain extremely concerned for the safety and wellbeing of our Ukrainian team and their families. Our local Security and People Operations teams have been working since January to provide help, including physical security support, paid leave, assistance options and reimbursement for housing, travel and food for anyone forced to leave their homes.

Operating our services in Russia

We are committed to complying with all sanctions requirements and we continue to monitor the latest guidance. As individuals, regions and institutions like banks are sanctioned, products like Google Pay may become unavailable in certain countries.

Most of our services (like Search, Maps and YouTube) currently remain available in Russia, continuing to provide access to global information and perspectives.

We will continue to monitor the situation and take additional actions as needed – and we join the international community in expressing sincere hope for a return to a peaceful and sovereign Ukraine.

Introducing Checks: simplifying privacy for app developers

Can I trust this app with my data? Is this app respecting my privacy rights? These are questions consumers are asking more and more about mobile apps and the developers who create them. In turn, developers face a privacy and compliance landscape that is becoming increasingly complex, and the path to compliance can be both time-consuming and difficult.

We believe every developer — no matter the stage or size of their company — deserves access to easy-to-use tools that help them achieve their goals, while making privacy compliance simpler.

That’s why today, as part of Google’s in-house incubator Area 120, we’re launching Checks, a new privacy platform. We are on a mission to help simplify privacy and reduce risk for mobile app developers.

A shared passion to help developers succeed

In 2018, as the world prepared for the EU General Data Protection Regulation (GDPR), we were hearing that mobile app developers were struggling to prepare for the new privacy expectations under GDPR, and they hoped Google could help. Having previously built tools like Android Vitals to address developers’ technical challenges, we had an idea: use Google’s artificial intelligence and resources to create a new product that helps mobile app developers address their privacy compliance needs. Having worked together for years on Google Play, we knew we could bring this vision to life as a team.

Joining Area 120 allowed us to focus full-time on creating a solution that simplifies privacy for developers distributing on both Android and iOS. Over the past two years, our team has spent time listening to feedback from hundreds of mobile app developers on their approach to privacy, and partnered closely with 40 highly-engaged early adopters to refine our product and roadmap. We believe Checks will help mobile app developers of all sizes save time by replacing complicated processes and providing automated privacy insights.

Greater confidence for app developers

We’ve heard developers say it’s difficult to keep pace with regulatory and app store policy changes, and determine how those changes apply to their apps. Checks helps developers gain confidence to make informed decisions by identifying potential compliance issues, providing clear actionable insights in simple language, and offering links to relevant resources.

Checks scans multiple sources of information, including an app’s privacy policy, SDK information and network traffic, to generate a report that indicates the number of checks performed, new issues, and issues that have been resolved.

Save time and money

Teams are able to better collaborate across legal, business and engineering roles on the Checks platform. Our product provides everyone access to the same unique insights — without the customer having to perform any technical integrations — which helps reduce the number of messages, meetings and documents necessary to track down information. Teams can focus on evaluating what actions to take and respond faster.

Screenshot of Checks’ Data Monitoring report, spotlighting the SDK findings. A list of SDKs that are in use by an app is provided, and any changes in the last 30 days are flagged as new.

Gain visibility

Software development kits (SDKs) can change their functionality at any time, sometimes without the app developer knowing it. Checks helps mobile app developers who use SDKs by detecting when their app’s data sharing practices have changed and then sending them an automated alert. If the change was not intended, the developer can investigate further to see where the new data is being shared and make the necessary changes.

Screenshot of Checks’ Store Disclosure report. A chart indicates what data types may be collected or shared by an app, and if evidence of the data type was found in permissions, network traffic, or an app’s privacy policy.

Help completing Google Play’s Data safety section

Many mobile app developers are still preparing for the launch of Google Play’s Data safety section, which will give end users more transparency into what data apps collect or share and how apps use their data. Checks can help developers get started by identifying what information they may need to declare and the basis for the recommendation. This can help them feel confident as they decide what to include.

Request early access today

We want to help developers build mobile apps that their users can enjoy and trust. We look forward to continuing to work closely with developers to ensure Checks provides solutions that developers need.

If you’re working on privacy compliance for mobile apps, visit checks.area120.google.com to learn more and get started today.