Grow with Google: helping more Australians access new digital skills

It’s no secret that digital technology is part of everyday life for Australians. Whether you’re checking the bus timetable on your phone or building an online business, digital skills are essential.

While the digital opportunities for Australian businesses and individuals are huge, we know that some people don’t feel they have the skills they need to make the most of them.

Australians want the chance to learn new skills. That’s why we’re launching Grow with Google, a new initiative that provides free skills training, both online and in person, to help people make the most of the web.

We launched Grow with Google today at an event in Fairfield with Chris Bowen, the Shadow Treasurer, Shadow Minister for Small Business and Federal Member for McMahon. 



More than 500 people attended training sessions over two days at the Fairfield RSL in Western Sydney, which included workshops for businesses on how to be found on Google and how to use analytics, training for job seekers and students, and digital training for not-for-profit organisations.

Grow with Google aims to help all Australians get the skills they need, whether you’re a student, a job-seeker or a business wanting to get online.




We’re excited to expand our skills offering to more people, building on our existing digital skills programs.

Since 2014, Google has trained nearly half a million people across Australia through online and in-person training sessions, as well as curriculum integrated through school and partner programs.

For Australians at all points on the digital journey, there are significant benefits to getting online - whether it’s finding new employment opportunities or expanding the reach of your business.

If you want to grow your skills, career or business, check out the free tools, training and events at g.co/GrowAustralia.

Disable SMS or voice codes for 2-Step Verification for more secure accounts

What’s changing 

We’re adding an option for admins to disable telephony options as 2-Step Verification methods for G Suite accounts in their domain. This option prevents users in the domain from using SMS and voice codes for 2-Step Verification.

Who’s impacted 

Admins only

Why you’d use it 

There are many forms of 2-Step Verification—from text (SMS) message codes, to the Google Authenticator app, to hardware second factors like security keys. And while any second factor will greatly improve the security of your account, we’ve long advocated the use of security keys for those who want the strongest account protection.

As awareness of the potential vulnerabilities associated with SMS and voice codes has increased, some admins have asked for more control over the use of phone-based 2-Step Verification methods within their organizations. This launch does just that: admins get a policy that enforces 2-Step Verification while not permitting SMS and voice verification codes.

This new policy gives admins more control over the security methods used in their domain, and increases the security of user accounts and associated data.

How to get started 


  • Admins: Apply the new policy by changing the setting at Admin console > Security > Advanced security settings > Allowed two step verification methods
  • End users: No action needed unless your admin changes the configuration. 

2-factor authentication options in the G Suite Admin console 


Additional details


How users can configure 2-Step Verification once the policy is enforced 
Users with the new policy applied will not be able to add SMS- or voice-based codes as an option, either when enrolling in 2-Step Verification for the first time or later at myaccount.google.com. A user enrolling in 2-Step Verification for the first time will see the screen below, which first offers the option to set up Google Prompt, along with ‘Choose another option’, which lets them add a Security Key instead.


Avoid user sign-in issues 
Users affected by the new policy who have SMS/voice as the only 2-Step Verification method on their account will not be able to sign in. To avoid this lock-out situation, see our Help Center for tips on ensuring a smooth transition to an enforcement policy.


Availability 

Rollout details 
G Suite editions 
Available to all G Suite editions

On/off by default? 
The new policy is not enabled by default. Admins need to explicitly apply it on an OU or group basis, like the other existing 2SV enforcement policies.

Stay up to date with G Suite launches

Harnessing Organizational Knowledge for Machine Learning



One of the biggest bottlenecks in developing machine learning (ML) applications is the need for the large, labeled datasets used to train modern ML models. Creating these datasets requires significant investments of time and money, as well as annotators with the right expertise. Moreover, as real-world applications evolve, labeled datasets often need to be discarded or re-labeled.

In collaboration with Stanford and Brown University, we present "Snorkel DryBell: A Case Study in Deploying Weak Supervision at Industrial Scale," which explores how existing knowledge in an organization can be used as noisier, higher-level supervision—or, as it is often termed, weak supervision—to quickly label large training datasets. In this study, we use an experimental internal system, Snorkel DryBell, which adapts the open-source Snorkel framework to use diverse organizational knowledge resources—like internal models, ontologies, legacy rules, knowledge graphs and more—in order to generate training data for machine learning models at web scale. We find that this approach can match the efficacy of hand-labeling tens of thousands of data points, and that it reveals some core lessons about how training datasets for modern machine learning models can be created in practice.

Rather than labeling training data by hand, Snorkel DryBell enables writing labeling functions that label training data programmatically. In this work, we explored how these labeling functions can capture engineers' knowledge about how to use existing resources as heuristics for weak supervision. As an example, suppose our goal is to identify content related to celebrities. One can leverage an existing named-entity recognition (NER) model for this task by labeling any content that does not contain a person as not related to celebrities. This illustrates how existing knowledge resources (in this case, a trained model) can be combined with simple programmatic logic to label training data for a new model. Note also, importantly, that this labeling function returns None (i.e., abstains) in many cases, and thus only labels a small part of the data; our overall goal is to use these labels to train a modern machine learning model that can generalize to new data.

In our example of a labeling function, rather than hand-labeling a data point (1), one utilizes an existing knowledge resource—in this case, a NER model (2)—together with some simple logic expressed in code (3) to heuristically label data.
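To make this concrete, here is a minimal sketch of such a labeling function. Snorkel DryBell itself is not written in Kotlin, and the names below (Label, NerModel, personEntities) are hypothetical; the point is only the shape of the heuristic: consult an existing resource, then emit a label or abstain.

  // Hypothetical label space for the celebrity-content task.
  enum class Label { CELEBRITY, NOT_CELEBRITY }

  // Hypothetical wrapper around an existing named-entity recognition model.
  interface NerModel {
      fun personEntities(text: String): List<String>
  }

  // A labeling function: given a data point, return a label or null to abstain.
  fun lfNoPersonMeansNotCelebrity(ner: NerModel, text: String): Label? {
      // If the NER model finds no person mentions, the content is very
      // unlikely to be about a celebrity, so emit a negative label.
      if (ner.personEntities(text).isEmpty()) return Label.NOT_CELEBRITY
      // Otherwise this heuristic cannot tell on its own: abstain.
      return null
  }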
This programmatic interface for labeling training data is much faster and more flexible than hand-labeling individual data points, but the resulting labels are obviously of much lower quality than manually-specified labels. The labels generated by these labeling functions will often overlap and disagree, as the labeling functions may not only have arbitrary unknown accuracies, but may also be correlated in arbitrary ways (for example, from sharing a common data source or heuristic).

To solve the problem of noisy and correlated labels, Snorkel DryBell uses a generative modeling technique to automatically estimate the accuracies and correlations of the labeling functions in a provably consistent way—without any ground truth training labels—then uses this to re-weight and combine their outputs into a single probabilistic label per data point. At a high level, we rely on the observed agreements and disagreements between the labeling functions (the covariance matrix), and learn the labeling function accuracy and correlation parameters that best explain this observed output using a new matrix completion-style approach. The resulting labels can then be used to train an arbitrary model (e.g. in TensorFlow), as shown in the system diagram below.

Using Diverse Knowledge Sources as Weak Supervision
To study the efficacy of Snorkel DryBell, we used three production tasks and corresponding datasets, aimed at classifying topics in web content, identifying mentions of certain products, and detecting certain real-time events. Using Snorkel DryBell, we were able to make use of various existing or quickly specified sources of information such as:
  • Heuristics and rules: e.g. existing human-authored rules about the target domain.
  • Topic models, taggers, and classifiers: e.g. machine learning models about the target domain or a related domain.
  • Aggregate statistics: e.g. tracked metrics about the target domain.
  • Knowledge or entity graphs: e.g. databases of facts about the target domain.
In Snorkel DryBell, the goal is to train a machine learning model (C), for example to do content or event classification over web data. Rather than hand-labeling training data to do this, in Snorkel DryBell users write labeling functions that express various organizational knowledge resources (A), which are then automatically reweighted and combined (B).
We used these organizational knowledge resources to write labeling functions in a MapReduce template-based pipeline. Each labeling function takes in a data point and either abstains, or outputs a label. The result is a large set of programmatically-generated training labels. However, many of these labels were very noisy (e.g. from the heuristics), conflicted with each other, or were far too coarse-grained (e.g. the topic models) for our task, leading to the next stage of Snorkel DryBell, aimed at automatically cleaning and integrating the labels into a final training set.
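Continuing the hypothetical Kotlin sketch from above (reusing the Label enum), this stage amounts to mapping every data point through every labeling function and recording the votes, with null marking an abstention:

  // A labeling function maps a data point to a label, or null to abstain.
  typealias LabelingFunction = (String) -> Label?

  // Apply every labeling function to every data point, producing a
  // (data points) x (labeling functions) matrix of noisy, conflicting votes.
  fun buildLabelMatrix(
      dataPoints: List<String>,
      labelingFunctions: List<LabelingFunction>
  ): List<List<Label?>> =
      dataPoints.map { point -> labelingFunctions.map { lf -> lf(point) } }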

Modeling the Accuracies to Combine & Repurpose Existing Sources
To handle these noisy labels, the next stage of Snorkel DryBell combines the outputs from the labeling functions into a single, confidence-weighted training label for each data point. The challenging technical aspect is that this must be done without any ground-truth labels. We use a generative modeling technique that learns the accuracy of each labeling function using only unlabeled data. This technique learns by observing the matrix of agreements and disagreements between the labeling functions' outputs, taking into account known (or statistically estimated) correlation structures between them. In Snorkel DryBell, we also implement a new faster, sampling-free version of this modeling approach, implemented in TensorFlow, in order to handle web-scale data.
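As a toy stand-in for this step (the real system estimates the accuracies from agreement and disagreement statistics alone, via the matrix-completion approach described above; here they are simply passed in), re-weighting and combining one row of votes into a probabilistic label might look like this:

  import kotlin.math.exp
  import kotlin.math.ln

  // Combine one row of labeling-function votes into P(label = CELEBRITY).
  // accuracies[i] is the estimated accuracy of labeling function i; in
  // Snorkel DryBell these are learned without any ground-truth labels.
  fun combineVotes(votes: List<Label?>, accuracies: List<Double>): Double {
      var logOdds = 0.0
      for ((i, vote) in votes.withIndex()) {
          if (vote == null) continue  // abstentions contribute nothing
          val w = ln(accuracies[i] / (1.0 - accuracies[i]))  // log-odds weight
          logOdds += if (vote == Label.CELEBRITY) w else -w
      }
      return 1.0 / (1.0 + exp(-logOdds))  // sigmoid of the summed weights
  }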

By combining and modeling the output of the labeling functions using this procedure in Snorkel DryBell, we were able to generate high-quality training labels. In fact, on the two applications where hand-labeled training data was available for comparison, we achieved the same predictive accuracy training a model with Snorkel DryBell's labels as we did when training that same model with 12,000 and 80,000 hand-labeled training data points.

Transferring Non-Servable Knowledge to Servable Models
In many settings, there is also an important distinction between servable features—which can be used in production—and non-servable features, which are too slow or expensive to be used in production. These non-servable features may carry very rich signal, but a general question is how to use them to train, or otherwise help, servable models that can be deployed in production.


In many settings, users write labeling functions that leverage organizational knowledge resources that are not servable in production (a)—e.g. aggregate statistics, internal models, or knowledge graphs that are too slow or expensive to use in production—in order to train models that are only defined over production-servable features (b), e.g. cheap, real-time web signals.
In Snorkel DryBell, we found that users could write the labeling functions—i.e. express their organizational knowledge—over one feature set that was not servable, and then use the resulting training labels output by Snorkel DryBell to train a model defined over a different, servable feature set. This cross-feature transfer boosted our performance by an average of 52% on the benchmark datasets we created. More broadly, it represents a simple but powerful way to use resources that are too slow (e.g. expensive models or aggregate statistics), private (e.g. entity or knowledge graphs), or otherwise unsuitable for deployment, to train servable models over cheap, real-time features. This approach can be viewed as a new type of transfer learning, where instead of transferring a model between different datasets, we're transferring domain knowledge between different feature sets, an approach which has potential use cases not just in industry, but in medical settings and beyond.
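A compressed sketch of this pattern, with hypothetical types: the labeling functions and label model see only the expensive, non-servable signals, while the training examples handed to the final model contain only the cheap, servable features.

  // Hypothetical split between offline-only signals and production signals.
  data class NonServableFeatures(val knowledgeGraphFacts: List<String>,
                                 val aggregateStats: Map<String, Double>)
  data class ServableFeatures(val realTimeSignals: List<Double>)
  data class Example(val nonServable: NonServableFeatures,
                     val servable: ServableFeatures)

  // Weak label computed from the non-servable side only (labeling functions
  // plus the label model sketched earlier); stubbed out here.
  fun weakLabel(example: Example): Double =
      TODO("apply labeling functions and combineVotes to example.nonServable")

  // The deployed model is then trained purely on servable features.
  fun buildTrainingSet(data: List<Example>): List<Pair<ServableFeatures, Double>> =
      data.map { it.servable to weakLabel(it) }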

Next Steps
Moving forward, we're excited to see what other types of organizational knowledge can be used as weak supervision, and how the approach used by Snorkel DryBell can enable new modes of information reuse and sharing across organizations. For more details, check out our paper, and for further technical details, blog posts, and tutorials, check out the open-source Snorkel implementation at snorkel.stanford.edu.

Acknowledgments
This research was done in collaboration between Google, Stanford, and Brown. We would like to thank all the people who were involved, including Stephen Bach (Brown), Daniel Rodriguez, Yintao Liu, Chong Luo, Haidong Shao, Souvik Sen, Braden Hancock (Stanford), Houman Alborzi, Rahul Kuchhal, Christopher Ré (Stanford), Rob Malkin.

Source: Google AI Blog


Giving users more control over their location data

Posted by Jen Chai, Product Manager

Location data can deliver amazing, rich mobile experiences for users on Android, such as finding a restaurant nearby, tracking the distance of a run, and getting turn-by-turn directions as you drive. Location is also one of the most sensitive types of personal information for a user. We want to give users simple, easy-to-understand controls over the data they provide to apps, and yesterday we announced that in Android Q we are giving users more control over location permissions. We are delighted by the innovative location experiences you provide to users through your apps, and we want to make this transition as straightforward as possible for you. This post dives deeper into the location permission changes in Q, what they may mean for your app, and how to get started with any updates needed.

Previously, a user had a single control to allow or deny an app access to device location, which covered location usage by the app both while it was in use and while it wasn't. Starting in Android Q, users have a new option to give an app access to location only when the app is being used; in other words, when the app is in the foreground. This means users will have a choice of three options for providing location to an app:

  • "All the time" - this means an app can access location at any time
  • "While in use" - this means an app can access location only while the app is being used
  • "Deny" - this means an app cannot access location

Some apps or features within an app may only need location while the app is being used. For example, if a feature allows a user to search for a restaurant nearby, the app only needs to understand the user's location when the user opens the app to search for a restaurant.

However, some apps may need location even when they are not in use. For example, an app that automatically tracks the mileage you drive for tax filing needs location in the background, without requiring you to interact with it.

The new location control allows users to decide when device location data is provided to an app and prevents an app from getting location data that it may not need. Users will see this new option in the same permissions dialog that is presented today when an app requests access to location. This permission can also be changed at any time for any app from Settings > Location > App permission.

Here's how to get started

We know these updates may impact your apps. We respect our developer community, and our goal is to approach any change like this very carefully. We want to support you as much as we can by (1) releasing developer-impacting features in the first Q Beta to give you as much time as possible to make any updates needed in your apps and (2) providing detailed information in follow-up posts like this one as well as in the developer guides and privacy checklist. Please let us know if there are ways we can make the guides more helpful!

If your app has a feature requiring "all the time" permission, you'll need to add the new ACCESS_BACKGROUND_LOCATION permission to your manifest file when you target Android Q. If your app targets Android 9 (API level 28) or lower, the ACCESS_BACKGROUND_LOCATION permission will be automatically added for you by the system if you request either ACCESS_FINE_LOCATION or ACCESS_COARSE_LOCATION. A user can decide to provide or remove these location permissions at any time through Settings. To maintain a good user experience, design your app to gracefully handle when your app doesn't have background location permission or when it doesn't have any access to location.
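For instance, a minimal sketch of the graceful-handling part in Kotlin (the mileage-tracking function names are made up; the manifest must also declare ACCESS_BACKGROUND_LOCATION when targeting Android Q, as described above) might look like this:

  import android.Manifest
  import android.content.Context
  import android.content.pm.PackageManager
  import androidx.core.content.ContextCompat

  // True only if the user granted "all the time" (background) location access.
  fun hasBackgroundLocation(context: Context): Boolean =
      ContextCompat.checkSelfPermission(
          context, Manifest.permission.ACCESS_BACKGROUND_LOCATION
      ) == PackageManager.PERMISSION_GRANTED

  fun maybeStartMileageTracking(context: Context) {
      if (hasBackgroundLocation(context)) {
          // Safe to start the hypothetical background mileage tracker here.
      } else {
          // Degrade gracefully: offer foreground-only tracking, or explain
          // why the feature is unavailable and how to enable it in Settings.
      }
  }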

Users will also be more likely to grant the location permission if they clearly understand why your app needs it. Consider asking for the location permission from users in context, when the user is turning on or interacting with a feature that requires it, such as when they are searching for something nearby. In addition, only ask for the level of access required for that feature. In other words, don't ask for "all the time" permission if the feature only requires "while in use" permission.
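A sketch of asking in context, and only for the level of access needed (the request code and the "search nearby" wiring are hypothetical):

  import android.Manifest
  import android.app.Activity
  import androidx.core.app.ActivityCompat

  const val REQUEST_NEARBY_SEARCH = 1001  // arbitrary request code

  // Called when the user taps "search nearby": request foreground-only
  // location at that moment, rather than "all the time" access up front.
  fun requestLocationForNearbySearch(activity: Activity) {
      ActivityCompat.requestPermissions(
          activity,
          arrayOf(Manifest.permission.ACCESS_FINE_LOCATION),
          REQUEST_NEARBY_SEARCH
      )
  }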

To learn more, read the developer guide on how to handle the new location controls.

Android Jetpack Navigation Stable Release

Posted by Ian Lake, Software Engineering Lead & Jisha Abubaker, Product Manager

Cohesive tooling and guidance for implementing predictable in-app navigation

Today we're happy to announce the stable release of the Android Jetpack Navigation component.

The Jetpack Navigation component's suite of libraries, tooling and guidance provides a robust, complete navigation framework, freeing you from the challenges of implementing navigation yourself and giving you certainty that all edge cases are handled correctly.

With the Jetpack Navigation component you can:

  • Handle basic user actions like Up & Back buttons so that they work consistently across devices and screens.
  • Allow users to land on any part of your app via deep links and build consistent and predictable navigation within your app.
  • Improve type safety of arguments passed from one screen to another, decreasing the chances of runtime crashes as users navigate in your app.
  • Add navigation experiences like navigation drawers and bottom navigation consistent with the Material Design guidelines.
  • Visualize and manipulate your navigation flows easily with the Navigation Editor in Android Studio 3.3.

The Jetpack Navigation component adheres to the Principles of Navigation, providing consistent and predictable navigation no matter how simple or complex your app may be.

Simplify navigation code with Jetpack Navigation Libraries

The Jetpack Navigation component provides a framework for in-app navigation that makes it possible to abstract away the implementation details, keeping your app code free of navigation boilerplate.

To get started with the Jetpack Navigation component in your project, add the Navigation artifacts available on Google's Maven repository in Java or Kotlin to your app's build.gradle file:

 dependencies {
    def nav_version = "2.0.0"

    // Java
    implementation "androidx.navigation:navigation-fragment:$nav_version"
    implementation "androidx.navigation:navigation-ui:$nav_version"

    // Kotlin KTX
    implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
    implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
 }

Note: If you have not yet migrated to androidx.*, the Jetpack Navigation stable component libraries are also available as android.arch.* artifacts in version 1.0.0.

navigation-runtime: This core library powers the navigation graph, which provides the structure of your in-app navigation: the screens or destinations that make up your app and the actions that link them. You can control how you navigate to destinations with a simple navigate() call. These destinations may be fragments, activities or custom destinations.

navigation-fragment: This library builds upon navigation-runtime and provides out-of-the-box support for fragments as destinations. With this library, fragment transactions are now handled for you automatically.

navigation-ui: This library allows you to easily add navigation drawers, menus and bottom navigation to your app consistent with the Material Design guidelines.

Each of these libraries provides an Android KTX artifact with the -ktx suffix that builds upon the Java API, taking advantage of Kotlin-specific language features.
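Putting these pieces together, a minimal navigation call from a fragment using the KTX artifacts might look like the sketch below; the fragment and action ID are hypothetical, and the action itself would be defined in your navigation graph.

  import androidx.fragment.app.Fragment
  import androidx.navigation.fragment.findNavController

  class ListFragment : Fragment() {
      // R.id.action_list_to_detail is a hypothetical action from the app's
      // navigation graph; navigate() handles the fragment transaction for you.
      fun onItemClicked() {
          findNavController().navigate(R.id.action_list_to_detail)
      }
  }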

Tools to help you build predictable navigation workflows

Available in Android Studio 3.3 and above, the Navigation Editor lets you visually create your navigation graph, allowing you to manage user journeys within your app.

With integration into the manifest merger tool, Android Studio can automatically generate the intent filters necessary to enable deep linking to a specific screen in your app. With this feature, you can associate URLs with any screen of your app by simply setting an attribute on the navigation destination.

Navigation often requires passing data from one screen to another. For example, your list screen may pass an item ID to a details screen. Many of the runtime exceptions during navigation have been attributed to a lack of type safety guarantees as you pass arguments. These exceptions are hard to replicate and debug. Learn how you can provide compile time type safety with the Safe Args Gradle Plugin.
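For illustration, with the Safe Args plugin applied, passing an item ID might look like the sketch below; the fragment names, action and argument are hypothetical, and the Directions and Args classes are generated from your navigation graph rather than written by hand.

  import androidx.fragment.app.Fragment
  import androidx.navigation.fragment.findNavController
  import androidx.navigation.fragment.navArgs

  // Sending side: the generated Directions class makes the argument type-safe.
  class ItemListFragment : Fragment() {
      fun openDetail(itemId: String) {
          val action = ItemListFragmentDirections.actionListToDetail(itemId)
          findNavController().navigate(action)
      }
  }

  // Receiving side: the generated Args class reads the argument back without
  // string keys or manual Bundle handling.
  class ItemDetailFragment : Fragment() {
      private val args: ItemDetailFragmentArgs by navArgs()

      fun itemId(): String = args.itemId
  }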

Guidance to get it right on the first try

Check out our brand new set of developer guides that encompass best practices to help you implement navigation correctly:

What developers say

Here's what Emery Coxe, Android Lead @ HomeAway, has to say about the Jetpack Navigation component:

"The Navigation library is well-designed and fully configurable, allowing us to integrate the library according to our specific needs.

With the Navigation Library, we refactored our legacy navigation drawer to support a dynamic, runtime-based configuration using custom views. It allowed us to add and remove screens in the top-level experience of our app without creating any interdependencies between discretely packaged modules.

We were also able to get rid of all anti-patterns in our app around top-level navigation, removing explicit casts and hardcoded assumptions to instead rely directly on Navigation. This library is a fundamental component of modern Android development, and we intend to adopt it more broadly across our app moving forward."

Get started

Check out the migration guide and the developer guide to learn how you can get started using the Jetpack Navigation component in your app. We also offer a hands-on codelab and a sample app.

Also check out Google's Digital Wellbeing to see another real-world example of in-app navigation using the Android Jetpack Navigation component.

Feedback

Please continue to tell us about your experience with the Navigation component. If you have specific feedback on features or if you run into any issues, please file a bug via one of the following links:

Call Screen beta comes to Pixel phones in Canada



You may have been in a situation where you see an incoming call but don’t recognize the number. If you’re like me, you pretty much don’t answer these anymore because you worry they’ll be spam. However, that also means you can miss legitimate callers, like your kid’s daycare, your realtor, or your bank.

Starting today, Canadians with a Pixel can now opt in to the Call Screen beta, a new feature that gives you help from the Google Assistant to find out who's calling and why.

To use Call Screen, just hit the “Screen call” button when you get an incoming call, and the Google Assistant will help you find out who's calling and why. You'll see a transcript of the caller's responses in real time, and then you can decide whether to pick up, respond by tapping a quick reply like “I’ll call you back later,” hang up, or mark the call as spam.

Call Screen is a feature on Pixel devices, powered by the Google Assistant, that makes life easier and simpler. Like many AI-powered features on Pixel, including camera features and our music feature Now Playing, which helps you discover music playing around you, Call Screen processes call details on-device, which means these experiences are fast, private to you, and use less battery.

Select Canadian Pixel 2 and Pixel 3 owners will receive an email today with instructions on how to opt in to the Call Screen beta. All Pixel users can opt in to the beta here. Call Screen is currently available in English only.

A recipe for beating the record of most-calculated digits of pi

Editor’s note: Today, March 14, is Pi Day (3.14). Here at Google, we’re celebrating the day with a new milestone: A team at Google has broken the Guinness World Records™ title for most accurate value of pi.

Whether or not you realize it, pi is everywhere you look. It’s the ratio of the circumference of a circle to its diameter, so the next time you check your watch or see the turning wheels of a vehicle go by, you’re looking at pi. And since pi is an irrational number, there’s no end to how many of its digits can be calculated. You might know it as 3.14, but math and science pros are constantly working to calculate more and more digits of pi, so they can test supercomputers (and have a bit of healthy competition, too).

While I’ve been busy thinking about which flavor of pie I’m going to enjoy later today, Googler Emma Haruka Iwao has been busy using Google Compute Engine, powered by Google Cloud, to calculate the most accurate value of pi—ever. That’s 31,415,926,535,897 digits, to be exact. Emma used the power of the cloud for the task, making this the first time the cloud has been used for a pi calculation of this magnitude.

Here’s Emma’s recipe for what started out as a pie-in-the-sky idea to break a Guinness World Records title:

Step 1: Find inspiration for your calculation.

When Emma was 12 years old, she became fascinated with pi. “Pi seems simple—it starts with 3.14. When I was a kid, I downloaded a program to calculate pi on my computer,” she says. “At the time, the world record holders were Yasumasa Kanada and Daisuke Takahashi, who are Japanese, so it was really relatable for me growing up in Japan.”

Later on, when Emma was in college, one of her professors was Dr. Daisuke Takahashi, then the record holder for calculating the most accurate value of pi using a supercomputer. “When I told him I was going to start this project, he shared his advice and some technical strategies with me.”

Step 2: Combine your ingredients.

To calculate pi, Emma used an application called y-cruncher on 25 Google Cloud virtual machines. “The biggest challenge with pi is that it requires a lot of storage and memory to calculate,” Emma says. Her calculation required 170 terabytes of data to complete—that's roughly equivalent to the amount of data in the entire Library of Congress print collections.

Emma

Step 3: Bake for four months.

Emma’s calculation took the virtual machines about 121 days to complete. During that whole time, the Google Cloud infrastructure kept the servers going. If there’d been any failures or interruptions, it would’ve disrupted the calculation. When Emma checked to see if her end result was correct, she felt relieved when the number checked out. “I started to realize it was an exciting accomplishment for my team,” she says.

Step 4: Share a slice of your achievement.

Emma thinks there are a lot of mathematical problems out there to solve, and we’re just at the beginning of exploring how cloud computing can play a role. “When I was a kid, I didn’t have access to supercomputers. But even if you don’t work for Google, you can apply for various scholarships and programs to access computing resources,” she says. “I was very fortunate that there were Japanese world record holders that I could relate to. I’m really happy to be one of the few women in computer science holding the record, and I hope I can show more people who want to work in the industry what’s possible.”

At Google, Emma is a Cloud Developer Advocate, focused on high-performance computing and programming language communities. Her job is to work directly with developers, helping them to do more with the cloud and share information about how products work. And now, she’s also sharing her calculations: Google Cloud has published the computed digits entirely as disk snapshots, so they’re available to anyone who wants to access them. This means anyone can copy the snapshots, work on the results and use the computation resources in less than an hour. Without the cloud, the only way someone could access such a large dataset would be to ship physical hard drives. 

Today, though, Emma and her team are taking a moment to celebrate the new world record. And maybe a piece of pie, too. Emma’s favorite flavor? “I like apple pie—not too sweet.”

For the technical details on how Emma used Google Compute Engine to calculate pi, head over to the Google Cloud Platform blog.

Enabling a Safe Digital Advertising Ecosystem

Google has a crucial stake in a healthy and sustainable digital advertising ecosystem—something we've worked to enable for nearly 20 years. Every day, we invest significant team hours and technological resources in protecting the users, advertisers and publishers that make the internet so useful. And every year, we share key actions and data about our efforts to keep the ecosystem safe by enforcing our policies across platforms.

Bad ads taken down

Dozens of new ads policies to take down billions of bad ads

In 2018, we faced new challenges in areas where online advertising could be used to scam or defraud users offline. For example, we created a new policy banning ads from for-profit bail bond providers because we saw evidence that this sector was taking advantage of vulnerable communities. Similarly, when we saw a rise in ads promoting deceptive experiences to users seeking addiction treatment services, we consulted with experts and restricted advertising to certified organizations. In all, we introduced 31 new ads policies in 2018 to address abuses in areas including third-party tech support, ticket resellers, cryptocurrency and local services such as garage door repairmen, bail bonds and addiction treatment facilities.

We took down 2.3 billion bad ads in 2018 for violations of both new and existing policies, including nearly 207,000 ads for ticket resellers, over 531,000 ads for bail bonds and approximately 58.8 million phishing ads. Overall, that’s more than six million bad ads, every day.

Ticket Resellers

As we continue to protect users from bad ads, we’re also working to make it easier for advertisers to ensure their creatives are policy compliant. Similar to our AdSense Policy Center, next month we’ll launch a new Policy manager in Google Ads that will give tips on common policy mistakes to help well-meaning advertisers and make it easier to create and launch compliant ads.

Taking on bad actors with improved technology

Last year, we also made a concerted effort to go after the bad actors behind numerous bad ads, not just the ads themselves. Using improved machine learning technology, we were able to identify and terminate almost one million bad advertiser accounts, nearly double the amount we terminated in 2017. When we take action at the account level, it helps to address the root cause of bad ads and better protect our users.

In 2017, we launched new technology that allows for more granular removal of ads from websites when only a small number of pages on a site are violating our policies. In 2018, we launched 330 detection classifiers to help us better detect "badness" at the page level—that's nearly three times the number of classifiers we launched in 2017. So while we terminated nearly 734,000 publishers and app developers from our ad network, and removed ads completely from nearly 1.5 million apps, we were also able to take more granular action by taking ads off of nearly 28 million pages that violated our publisher policies. We use a combination of manual reviews and machine learning to catch these kinds of violations.

Addressing key challenges within the digital ads ecosystem

From reports of “fake news” sites, to questions about who is purchasing political ads, to massive ad fraud operations, there are fundamental concerns about the role of online advertising in society. Last year, we launched a new policy for election ads in the U.S. ahead of the 2018 midterm elections. We verified nearly 143,000 election ads in the U.S. and launched a new political ads transparency report that gives more information about who bought election ads. And in 2019, we’re launching similar tools ahead of elections in the EU and India.

We also continued to tackle the challenge of misinformation and low-quality sites, using several different policies to ensure our ads are supporting legitimate, high-quality publishers. In 2018, we removed ads from approximately 1.2 million pages, more than 22,000 apps, and nearly 15,000 sites across our ad network for violations of policies directed at misrepresentative, hateful or other low-quality content. More specifically, we removed ads from almost 74,000 pages for violating our “dangerous or derogatory” content policy, and took down approximately 190,000 ads for violating this policy. This policy includes a prohibition on hate speech and protects our users, advertisers and publishers from hateful content across platforms.  


How we took down one of the biggest ad fraud operations ever in 2018

In 2018, we worked closely with cybersecurity firm White Ops, the FBI, and others in the industry to take down one of the largest and most complex international ad fraud operations we’ve ever seen. Codenamed "3ve", the operation used sophisticated tactics aimed at exploiting data centers, computers infected with malware, spoofed fraudulent domains and fake websites. In aggregate, 3ve produced more than 10,000 counterfeit domains, and generated over 3 billion daily bid requests at its peak.

3ve tried to evade our enforcements, but we conducted a coordinated takedown of their infrastructure. We referred the case to the FBI, and late last year charges were announced against eight individuals for crimes including aggravated identity theft and money laundering. Learn more about 3ve and our work to take it down on our Security Blog, as well as through this white paper that we co-authored with White Ops.


We will continue to tackle these issues because as new trends and online experiences emerge, so do new scams and bad actors. In 2019, our work to protect users and enable a safe advertising ecosystem that works well for legitimate advertisers and publishers continues to be a top priority.

Source: Google Ads


Dev Channel Update for Desktop

The dev channel has been updated to 74.0.3729.6 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Krishna Govind
Google Chrome