Author Archives: Alexis R. Shellhammer

Preventing unauthorized inventory

Advertising should be free of invalid activity – including unauthorized, misrepresented, and fake ad inventory – which diverts revenue from legitimate publishers and tricks marketers into wasting their money. Earlier this year we worked with the IAB Tech Lab to create the ads.txt standard, a simple solution to help stop bad actors from selling unauthorized inventory across the industry. Since then, we’ve shared our plans to integrate the standard into our advertiser and publisher advertising platforms.

As of November 8th, Google’s advertising platforms filter all unauthorized ad inventory identified by published ads.txt files:
  • Marketers and agencies using DoubleClick Bid Manager and AdWords will not buy unauthorized impressions as identified by publishers’ ads.txt files.
  • DoubleClick Ad Exchange and AdSense publishers that use ads.txt are protected against unauthorized inventory being sold in our auctions.
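For context, an ads.txt file is a plain-text file served at the root of a publisher's domain; each line names an ad system authorized to sell that publisher's inventory, the publisher's account ID on that system, the relationship type (DIRECT or RESELLER), and optionally a certification authority ID. A minimal illustrative file (the publisher IDs below are placeholders):

```text
# ads.txt for example.com — account IDs below are placeholders
google.com, pub-0000000000000000, DIRECT, f08c47fec0942fa0
adexchange.example, 12345, RESELLER
```

A buyer can then reject any bid request claiming to sell example.com inventory through a seller that doesn't appear in this list.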

Preventing the sale of unauthorized inventory depends on having complete and accurate ads.txt information. So, to make sure our systems filter traffic as accurately as possible, we built an ads.txt crawler based on concepts used in our search index technology. Every day it scans all active sites across our network, more than 30 million domains, for ads.txt files, to keep unauthorized inventory out of our systems.
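At its core, a crawl like this fetches each domain's ads.txt and parses every line into an (ad system, seller account, relationship) record that buyers can check against. A minimal sketch of the parsing and lookup steps, written for illustration (the production crawler is of course far more involved):

```python
def parse_ads_txt(text):
    """Parse ads.txt content into (ad_system, seller_id, relationship) tuples.

    Lines are comma-separated; '#' starts a comment; variable records
    (e.g. 'contact=...') and blank lines are skipped.
    """
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()        # drop comments
        if not line or "=" in line.split(",", 1)[0]:
            continue                                # blank or variable record
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records


def is_authorized(records, ad_system, seller_id):
    """True if the (ad system, seller account) pair appears in the file."""
    return any(r[0] == ad_system.lower() and r[1] == seller_id
               for r in records)
```

Given these records for a domain, a buying platform can drop any impression offered by a seller the publisher never listed.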

The adoption of ads.txt has been growing quickly and the standard is reaching scale across publishers:
  • Over 100,000 ads.txt files have been published
  • 750 of the comScore top 2,000 sites have ads.txt files
  • Over 50% of inventory seen by DBM comes from domains with ads.txt files

We believe ads.txt is a significant step in cleaning up bad inventory and it's great to have the broad support of our partners like L’Oreal, Omnicom Media Group, and the Financial Times.
“Consumers place enormous value on the ability to trust brands, which is why transparency in advertising is a top priority at L’Oreal. We look forward to collaborating with Google on this initiative as we continue to encourage the industry to follow suit.”
- Marie Gulin-Merle, CMO L’Oreal USA
"Removing counterfeit inventory from the ecosystem is critical to maintaining trust in digital. The simple act of publishing an ads.txt file helps provide the transparency we need to quickly reduce counterfeit inventory from harming our clients."
- Steve Katelman, EVP Global Strategic Partnerships, Omnicom Media Group
“It's great to see adoption of ads.txt across the industry and we're happy to see Google put their support behind this initiative. By eliminating counterfeit inventory from the ecosystem, marketers' budgets will work that much harder and revenue will reach real working media to fund the independent, high-quality journalism which society depends upon."
- Anthony Hitchings, Digital Advertising Operations Director, Financial Times

It’s amazing to see how fast the industry is adopting ads.txt, but there is still more to be done. Supporting industry initiatives like ads.txt is critical to maintaining the health of the digital advertising ecosystem. That’s why we’ll continue to invest and innovate to make the ecosystem more valuable, transparent, and trusted for everyone.

Posted by Per Bjorke
Product Manager, Google Ad Traffic Quality

How we fought bad ads in 2015

Cross-posted from the Official Google Blog

When ads are good, they connect you to products or services you’re interested in and make it easier to get stuff you want. They also keep a lot of what you love about the web—like news sites or mobile apps—free.

But some ads are just plain bad—like ads that carry malware, cover up content you’re trying to see, or promote fake goods. Bad ads can ruin your entire online experience, a problem we take very seriously. That’s why we have a strict set of policies for the kinds of ads businesses can run with Google—and why we’ve invested in sophisticated technology and a global team of 1,000+ people dedicated to fighting bad ads. Last year alone we disabled more than 780 million ads for violating our policies—a number that's increased over the years thanks to new protections we've put in place. If you spent one second looking at each of these ads, it’d take you nearly 25 years to see them all!

Here are some of the top areas we focused on in our fight against bad ads in 2015:

Busting bad ads

Some bad ads, like those for products that falsely claim to help with weight loss, mislead people. Others help fraudsters carry out scams, like those that lead to “phishing” sites that trick people into handing over personal information. Through a combination of computer algorithms and people at Google reviewing ads, we’re able to block the vast majority of these bad ads before they ever get shown. Here are some types of bad ads we busted in 2015:

Counterfeiters

We suspended more than 10,000 sites and 18,000 accounts for attempting to sell counterfeit goods (like imitation designer watches).

Pharmaceuticals

We blocked more than 12.5 million ads that violated our healthcare and medicines policy, such as ads for pharmaceuticals that weren’t approved for use or that made misleading claims to be as effective as prescription drugs.

Weight loss scams

Weight loss scams, like ads for supplements promising impossible-to-achieve weight loss without diet or exercise, were one of the top user complaints in 2015. We responded by suspending more than 30,000 sites for misleading claims.

Phishing

In 2015, we stepped up our efforts to fight phishing sites, blocking nearly 7,000 sites as a result.

Unwanted software

Unwanted software can slow your devices down or unexpectedly change your homepage and keep you from changing it back. With powerful new protections, we disabled more than 10,000 sites offering unwanted software, and reduced unwanted downloads via Google ads by more than 99 percent.

Trick to click

We got even tougher on ads that mislead or trick people into interacting with them—like ads designed to look like system warnings from your computer. In 2015 alone we rejected more than 17 million of these ads.

Creating a better experience

Sometimes even ads that offer helpful and relevant information behave in ways that can be really annoying—covering up what you’re trying to see or sending you to an advertiser’s site when you didn’t intend to go there. In 2015, we disabled or banned the worst offenders.

Accidental mobile clicks

We’ve all been there. You’re swiping through a slideshow of the best moments from the Presidential debate when an ad redirects you even though you didn’t mean to click on it. We’re working to end that. We've developed technology to determine when clicks on mobile ads are accidental. Instead of sending you off to an advertiser page you didn't mean to visit, we let you continue enjoying your slideshow (and the advertiser doesn't get charged).
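As a toy illustration (not Google's actual system), a defense like this might discard taps that arrive almost immediately after an ad renders or that land right on the ad's border, both common signatures of accidental swipes. The thresholds below are invented for the sketch:

```python
def is_likely_accidental(ms_since_render, x, y, ad_w, ad_h,
                         min_ms=500, edge_px=10):
    """Toy heuristic: flag a tap as accidental if it comes within min_ms of
    the ad rendering, or lands within edge_px of the ad's border (where
    stray swipes most often land). Thresholds are illustrative only."""
    too_fast = ms_since_render < min_ms
    near_edge = (x < edge_px or y < edge_px or
                 x > ad_w - edge_px or y > ad_h - edge_px)
    return too_fast or near_edge
```

A tap flagged this way would simply be ignored: the user stays on the page and the advertiser isn't charged.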

Bad sites and apps

In 2015, we stopped showing ads on more than 25,000 mobile apps because the developers didn’t follow our policies. More than two-thirds of these violations were for practices like placing mobile ads very close to buttons, causing people to click the ad by accident. We also reject applications from sites and mobile apps that want to show Google ads but don’t follow our policies; in 2015 alone, we rejected more than 1.4 million such applications.

Putting you in control

We also give you tools to control the type of ads you see. You can always let us know when you believe an ad might be violating our policies.

Mute This Ad

Maybe you’ve just seen way too many car ads recently. “Mute This Ad” lets you click an “X” at the top of many of the ads we show, and Google will stop showing you that ad and others like it from that advertiser. You can also tell us why. The 4+ billion pieces of feedback we received in 2015 are helping us show better ads and shape our policies.

Ads Settings

In 2015, we rolled out a new design for our Ads Settings where you can manage your ads experience. You can update your interests to make the ads you see more relevant, or block specific advertisers altogether.

Looking ahead to 2016

We’re always updating our technology and our policies based on your feedback—and working to stay one step ahead of the fraudsters. In 2016, we’re planning updates like further restricting what can be advertised as effective for weight loss, and adding new protections against malware and bots. We want to make sure all the ads you see are helpful and welcome and we’ll keep fighting to make that a reality.

Posted by Sridhar Ramaswamy
SVP, Ads & Commerce

Working together to filter automated data-center traffic

Today the Trustworthy Accountability Group (TAG) announced a new pilot blacklist to protect advertisers across the industry. This blacklist comprises data-center IP addresses associated with non-human ad requests. We're happy to support this effort along with other industry leaders—Dstillery, Facebook, MediaMath, Quantcast, Rubicon Project, TubeMogul and Yahoo—and contribute our own data-center blacklist. As mentioned to Ad Age and in our recent call to action, we believe that if we work together we can raise the fraud-fighting bar for the whole industry.

Data-center traffic is one of many types of non-human or illegitimate ad traffic. The newly shared blacklist identifies web robots or “bots” that are being run in data centers but that avoid detection by the IAB/ABC International Spiders & Bots List. Well-behaved bots announce that they're bots as they surf the web by including a bot identifier in their declared User-Agent strings. The bots filtered by this new blacklist are different. They masquerade as human visitors by using User-Agent strings that are indistinguishable from those of typical web browsers.
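Conceptually, applying such a blacklist is a set-membership test: does the request's source IP fall inside any of the listed data-center ranges? A minimal sketch using Python's standard ipaddress module (the CIDR ranges below are documentation placeholders, not the actual TAG list):

```python
import ipaddress

# Placeholder CIDR ranges standing in for the shared data-center blacklist.
BLACKLIST = [ipaddress.ip_network(cidr)
             for cidr in ("198.51.100.0/24", "203.0.113.0/24")]


def is_datacenter_ip(addr):
    """True if addr falls inside any blacklisted data-center range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLACKLIST)


def filter_clicks(clicks):
    """Drop click events originating from blacklisted data-center IPs."""
    return [c for c in clicks if not is_datacenter_ip(c["ip"])]
```

A real implementation would use a compiled trie or radix tree rather than a linear scan, but the filtering decision is the same membership test.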

In this post, we take a closer look at a few examples of data-center traffic to show why it’s so important to filter this traffic across the industry.
Impact of the data-center blacklist
When observing the traffic generated by the IP addresses in the newly shared blacklist, we found significantly distorted click metrics. In May of 2015 on DoubleClick Campaign Manager alone, we found the blacklist filtered 8.9% of all clicks. Without filtering these clicks from campaign metrics, advertiser click-through rates would have been incorrect and for some advertisers this error would have been very large.
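To see why, consider illustrative numbers: if 8.9% of recorded clicks are invalid and impressions are held fixed, the measured click-through rate overstates the true one by a factor of 1/(1 − 0.089), roughly 9.8%:

```python
def ctr_inflation(invalid_click_share):
    """Factor by which measured CTR overstates true CTR when a given share
    of recorded clicks is invalid (impression counts held fixed)."""
    return 1.0 / (1.0 - invalid_click_share)


# Illustrative campaign: 1,000,000 impressions and 10,000 human clicks,
# with bot clicks making up 8.9% of all recorded clicks.
human_clicks = 10_000
total_clicks = human_clicks / (1.0 - 0.089)
impressions = 1_000_000
measured_ctr = total_clicks / impressions   # what the advertiser would see
true_ctr = human_clicks / impressions       # what actually happened
```

These are made-up campaign numbers chosen to match the 8.9% aggregate figure; individual advertisers saw anywhere from negligible to much larger distortions.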

Below is a plot that shows how much click-through rates in May would have been inflated across the most impacted of DoubleClick Campaign Manager’s larger advertisers.

Two examples of bad data-center traffic
There are two distinct types of invalid data-center traffic: where the intent is malicious and where the impact on advertisers is accidental. In this section we consider two interesting examples where we’ve observed traffic that was likely generated with malicious intent.

Publishers use many different strategies to increase the traffic to their sites. Unfortunately, some are willing to use any means necessary to do so. In our investigations we’ve seen instances where publishers have been running software tools in data centers to intentionally mislead advertisers with fake impressions and fake clicks.

First example
UrlSpirit is just one example of software that some unscrupulous publishers have been using to collaboratively drive automated traffic to their websites. Participating publishers install the UrlSpirit application on Windows machines and they each submit up to three URLs through the application’s interface. Submitted URLs are then distributed to other installed instances of the application, where Internet Explorer is used to automatically visit the list of target URLs. Publishers who have not installed the application can also leverage the network of installations by paying a fee.

At the end of May more than 82% of the UrlSpirit installations were being run on machines in data centers. There were more than 6,500 data-center installations of UrlSpirit, with each data-center installation running in a separate virtual machine. In aggregate, the data-center installations of UrlSpirit were generating a monthly rate of at least half a billion ad requests—an average of 2,500 fraudulent ad requests per installation per day.
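Those figures hang together as a quick back-of-the-envelope check: 6,500 installations at roughly 2,500 requests per day each works out to on the order of half a billion requests over a 30-day month:

```python
installations = 6_500
requests_per_install_per_day = 2_500
days_per_month = 30

# 6,500 installs x 2,500 requests/day x 30 days
monthly_requests = installations * requests_per_install_per_day * days_per_month
```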

Second example
HitLeap is another example of software that some publishers are using to collaboratively drive automated traffic to their websites. The software also runs on Windows machines, and each instance uses the Chromium Embedded Framework to automatically browse the websites of participating publishers—rather than using Internet Explorer.

Before publishers can use the network of installations to drive traffic to their websites, they need browsing minutes. Participating publishers earn browsing minutes by running the application on their computers. Alternatively, they can simply buy browsing minutes—with bundles starting at $9 for 10,000 minutes or up to 1,000,000 minutes for $625. 

Publishers can specify as many target URLs as they like. The number of visits they receive from the network of installations is a function of how long they want the network of bots to spend on their sites. For example, ten browsing minutes will get a publisher five visits if the publisher requests two-minute visit durations.
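That allocation is simple division: purchased browsing minutes divided by the requested per-visit duration gives the number of visits the network will deliver.

```python
def visits_from_minutes(browsing_minutes, visit_duration_minutes):
    """Visits a publisher receives: purchased browsing minutes divided by
    the per-visit duration they request (whole visits only)."""
    return browsing_minutes // visit_duration_minutes
```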

In mid-June, at least 4,800 HitLeap installations were being run in virtual machines in data centers, with a unique IP associated with each HitLeap installation. The data-center installations of HitLeap made up 16% of the total HitLeap network, which was substantially larger than the UrlSpirit network.

In aggregate the data-center installations of HitLeap were generating a monthly rate of at least a billion fraudulent ad requests—or an average of 1,600 ad requests per installation per day.

Not only were these publishers collectively responsible for billions of automated ad requests, but their websites were often extremely deceptive as well. For example, of the top ten webpages visited by HitLeap bots in June, nine included hidden ad slots—meaning that not only was the traffic fake, but the ads could not have been seen even by a real human visitor.

http://vedgre.com/7/gg.html is illustrative of these nine webpages with hidden ad slots. The page has no visible content other than a single 300×250px ad. That visible ad actually sits in a 300×250px iframe containing two ads, the second of which is hidden. There are also twenty-seven 0×0px hidden iframes on the page, each containing two ad slots. In total, the page carries fifty-five hidden ads and one visible ad. Finally, the ads served on http://vedgre.com/7/gg.html appear to advertisers as though they were served on legitimate websites like indiatimes.com, scotsman.com, autotrader.co.uk, allrecipes.com, dictionary.com and nypost.com, because the tags used on the page to request the ad creatives have been deliberately spoofed.
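Hidden slots of this kind can often be spotted mechanically. A rough sketch using Python's built-in HTML parser to count iframes declared with zero width and height (real detection would also have to handle CSS-based hiding, nested frames, and script-generated markup):

```python
from html.parser import HTMLParser


class HiddenIframeCounter(HTMLParser):
    """Count <iframe> tags declared with 0-pixel width and height attributes."""

    def __init__(self):
        super().__init__()
        self.hidden = 0
        self.total = 0

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        self.total += 1
        a = dict(attrs)
        if a.get("width") == "0" and a.get("height") == "0":
            self.hidden += 1


def count_hidden_iframes(html):
    """Return (hidden, total) iframe counts for an HTML document."""
    parser = HiddenIframeCounter()
    parser.feed(html)
    return parser.hidden, parser.total
```

A page whose hidden count vastly exceeds its visible inventory, as on the page above, is a strong fraud signal.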

An example of collateral damage
Unlike the traffic described above, there is also automated data-center traffic that impacts advertising campaigns but that hasn’t been generated for malicious purposes. An interesting example of this is an advertising competitive intelligence company that is generating a large volume of undeclared non-human traffic.

This company uses bots to scrape the web to find out which ad creatives are being served on which websites and at what scale. The company’s scrapers also click ad creatives to analyze the landing page destinations. To provide its clients with the most accurate possible intelligence, this company’s scrapers operate at extraordinary scale and they also do so without including bot identifiers in their User-Agent strings.

While the aim of this company is not to cause advertisers to pay for fake traffic, the company’s scrapers do waste advertiser spend. They not only generate non-human impressions; they also distort the metrics that advertisers use to evaluate campaign performance—in particular, click metrics. Looking at the data across DoubleClick Campaign Manager, this company’s scrapers were responsible for 65% of the automated data-center clicks recorded in the month of May.

Going forward
Google has always invested to prevent this and other types of invalid traffic from entering our ad platforms. By contributing our data-center blacklist to TAG, we hope to help others in the industry protect themselves. 

We’re excited by the collaborative spirit we’ve seen working with other industry leaders on this initiative. This is an important, early step toward tackling fraudulent and illegitimate inventory across the industry and we look forward to sharing more in the future. By pooling our collective efforts and working with industry bodies, we can create strong defenses against those looking to take advantage of our ecosystem. We look forward to working with the TAG Anti-fraud working group to turn this pilot program into an industry-wide tool.


Posted by Vegard Johnsen, Product Manager Google Ad Traffic Quality

Working together to filter automated data-center traffic

Today the Trustworthy Accountability Group (TAG) announced a new pilot blacklist to protect advertisers across the industry. This blacklist comprises data-center IP addresses associated with non-human ad requests. We're happy to support this effort along with other industry leaders—Dstillery, Facebook, MediaMath, Quantcast, Rubicon Project, TubeMogul and Yahoo—and contribute our own data-center blacklist. As mentioned to Ad Age and in our recent call to action, we believe that if we work together we can raise the fraud-fighting bar for the whole industry.

Data-center traffic is one of many types of non-human or illegitimate ad traffic. The newly shared blacklist identifies web robots or “bots” that are being run in data centers but that avoid detection by the IAB/ABC International Spiders & Bots List. Well-behaved bots announce that they're bots as they surf the web by including a bot identifier in their declared User-Agent strings. The bots filtered by this new blacklist are different. They masquerade as human visitors by using User-Agent strings that are indistinguishable from those of typical web browsers.

In this post, we take a closer look at a few examples of data-center traffic to show why it’s so important to filter this traffic across the industry.
Impact of the data-center blacklist
When observing the traffic generated by the IP addresses in the newly shared blacklist, we found significantly distorted click metrics. In May of 2015 on DoubleClick Campaign Manager alone, we found the blacklist filtered 8.9% of all clicks. Without filtering these clicks from campaign metrics, advertiser click-through rates would have been incorrect and for some advertisers this error would have been very large.

Below is a plot that shows how much click-through rates in May would have been inflated across the most impacted of DoubleClick Campaign Manager’s larger advertisers.

Two examples of bad data-center traffic
There are two distinct types of invalid data-center traffic: where the intent is malicious and where the impact on advertisers is accidental. In this section we consider two interesting examples where we’ve observed traffic that was likely generated with malicious intent.

Publishers use many different strategies to increase the traffic to their sites. Unfortunately, some are willing to use any means necessary to do so. In our investigations we’ve seen instances where publishers have been running software tools in data centers to intentionally mislead advertisers with fake impressions and fake clicks.

First example
UrlSpirit is just one example of software that some unscrupulous publishers have been using to collaboratively drive automated traffic to their websites. Participating publishers install the UrlSpirit application on Windows machines and they each submit up to three URLs through the application’s interface. Submitted URLs are then distributed to other installed instances of the application, where Internet Explorer is used to automatically visit the list of target URLs. Publishers who have not installed the application can also leverage the network of installations by paying a fee.

At the end of May more than 82% of the UrlSpirit installations were being run on machines in data centers. There were more than 6,500 data-center installations of UrlSpirit, with each data-center installation running in a separate virtual machine. In aggregate, the data-center installations of UrlSpirit were generating a monthly rate of at least half a billion ad requests— an average of 2,500 fraudulent ad requests per installation per day.

Second Example
HitLeap is another example of software that some publishers are using to collaboratively drive automated traffic to their websites. The software also runs on Windows machines, and each instance uses the Chromium Embedded Framework to automatically browse the websites of participating publishers—rather than using Internet Explorer.

Before publishers can use the network of installations to drive traffic to their websites, they need browsing minutes. Participating publishers earn browsing minutes by running the application on their computers. Alternatively, they can simply buy browsing minutes—with bundles starting at $9 for 10,000 minutes or up to 1,000,000 minutes for $625. 

Publishers can specify as many target URLs as they like. The number of visits they receive from the network of installations is a function of how long they want the network of bots to spend on their sites. For example, ten browsing minutes will get a publisher five visits if the publisher requests two-minute visit durations.

In mid-June, at least 4,800 HitLeap installations were being run in virtual machines in data centers, with a unique IP associated with each HitLeap installation. The data-center installations of HitLeap made up 16% of the total HitLeap network, which was substantially larger than the UrlSpirit network.

In aggregate the data-center installations of HitLeap were generating a monthly rate of at least a billion fraudulent ad requests—or an average of 1,600 ad requests per installation per day.

Not only were these publishers collectively responsible for billions of automated ad requests, but their websites were also often extremely deceptive. For example, of the top ten webpages visited by HitLeap bots in June, nine of these included hidden ad slots -- meaning that not only was the traffic fake, but the ads couldn’t have been seen even if they had been legitimate human visitors. 

http://vedgre.com/7/gg.html is illustrative of these nine webpages with hidden ad slots. The webpage has no visible content other than a single 300×250px ad. This visible ad is actually in a 300×250px iframe that includes two ads, the second of which is hidden. Additionally, there are also twenty-seven 0×0px hidden iframes on this page with each hidden iframe including two ad slots. In total there are fifty-five hidden ads on this page and one visible ad. Finally, the ads served on http://vedgre.com/7/gg.html appear to advertisers as though they have been served on legitimate websites like indiatimes.com, scotsman.com, autotrader.co.uk, allrecipes.com, dictionary.com and nypost.com, because the tags used on http://vedgre.com/7/gg.html to request the ad creatives have been deliberately spoofed.

An example of collateral damage
Unlike the traffic described above, there is also automated data-center traffic that impacts advertising campaigns but that hasn’t been generated for malicious purposes. An interesting example of this is an advertising competitive intelligence company that is generating a large volume of undeclared non-human traffic.

This company uses bots to scrape the web to find out which ad creatives are being served on which websites, and at what scale. The company's scrapers also click ad creatives to analyze the landing-page destinations. To provide its clients with the most accurate intelligence possible, this company's scrapers operate at extraordinary scale, and they do so without including bot identifiers in their User-Agent strings.

While the aim of this company is not to cause advertisers to pay for fake traffic, the company’s scrapers do waste advertiser spend. They not only generate non-human impressions; they also distort the metrics that advertisers use to evaluate campaign performance—in particular, click metrics. Looking at the data across DoubleClick Campaign Manager this company’s scrapers were responsible for 65% of the automated data-center clicks recorded in the month of May.

Going forward
Google has always invested in preventing this and other types of invalid traffic from entering our ad platforms. By contributing our data-center blacklist to TAG, we hope to help others in the industry protect themselves.
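
As a rough illustration of how such a blacklist might be applied, consider filtering ad requests whose source IP falls in a known data-center range. The CIDR ranges and function names below are made up for the sketch; TAG's actual list format may differ:

```python
import ipaddress

# Hypothetical excerpt of a data-center blacklist (documentation-example ranges only).
DATACENTER_BLACKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_datacenter_traffic(client_ip: str) -> bool:
    """Flag an ad request whose source IP falls in a known data-center range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in DATACENTER_BLACKLIST)

# Requests from blacklisted ranges are dropped before they reach the auction.
requests = ["203.0.113.7", "192.0.2.44"]
valid = [ip for ip in requests if not is_datacenter_traffic(ip)]
print(valid)  # ['192.0.2.44']
```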

We’re excited by the collaborative spirit we’ve seen working with other industry leaders on this initiative. This is an important, early step toward tackling fraudulent and illegitimate inventory across the industry and we look forward to sharing more in the future. By pooling our collective efforts and working with industry bodies, we can create strong defenses against those looking to take advantage of our ecosystem. We look forward to working with the TAG Anti-fraud working group to turn this pilot program into an industry-wide tool.


Posted by Vegard Johnsen, Product Manager Google Ad Traffic Quality

Native Ads come to DoubleClick

Last week at the DoubleClick Leadership Summit, we announced the availability of Native Ads for Apps on our DoubleClick platforms. In this post, we’ll dive into the details of this new ad format and what it means for our clients.

The mobile revolution has changed the way we engage with content. We check our phones literally hundreds of times a day: to catch up with friends and family, read an article, or watch a video while waiting in line. In these moments, we believe ads have the best chance to be effective when they are placed with respect to a user’s context.

At Google, helping advertisers connect with the right audience in the right moments has been our aim from the beginning. From search ads complementing Google search results to TrueView ads on YouTube, we’ve found that the less disruptive we can make ads, the more open consumers are to them. That’s why we’re adding access to YouTube’s TrueView format and Twitter’s Promoted Tweets on DoubleClick Bid Manager, our programmatic platform. And now, we’re excited to help advertisers connect with publishers to bring rich native ad experiences to apps with our native ads solution in DoubleClick. 

Introducing Native Ads on DoubleClick
Native ads fit in with the look and feel of publisher content, enabling better, more effective ad experiences for users. Context is incredibly important on mobile, and that's why over the next few weeks we're rolling out our native ad solution for apps to DoubleClick for Publishers clients globally.

Native ads for apps in DFP provide publishers with the full flexibility needed to create seamless ad experiences for their users. Instead of serving a static banner ad, DFP delivers ad components (headline, image, links, etc.) to a publisher's app, where they're rendered into a native ad. By providing the building blocks of an ad, our native solution allows an advertiser to work with their DFP partners to create ads that are seamless with content, can take advantage of mobile features like swipe gestures and 3D animation, and can be adjusted to create beautiful ads for any device or screen size.
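
Conceptually, the server sends components rather than a finished creative, and the app decides how to assemble them. A minimal sketch of that split, where the field names and rendering are illustrative rather than DFP's actual response schema:

```python
# Hypothetical component payload, as might be delivered for one native ad.
ad_components = {
    "headline": "Try Example App",
    "image_url": "https://cdn.example.com/ad.png",
    "call_to_action": "Install",
}

def render_native_ad(components: dict) -> str:
    """App-side rendering: the app assembles the components into its own
    layout, so the same payload can look native on any screen size."""
    return f"{components['headline']} [{components['call_to_action']}]"

print(render_native_ad(ad_components))  # Try Example App [Install]
```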

Setting up native ads for apps with your DFP partners will be easy. Publishers that enable native ads will be able to offer two of the most popular mobile formats, app install ads or content ads, or create fully custom native ads by including any additional fields for DFP to send to their app.

Of course, it's essential that native ads are clearly marked as advertising. Ads that trick users into clicking or are indistinguishable from content are bad for the whole ecosystem, including users, advertisers, and publishers.

Native experiences are essential on mobile
When users pick up their phones it’s critical that they’re presented with a seamless ad experience. With native ads in DFP, publishers can maintain a beautiful user experience in their apps while providing brands an opportunity to reach their audience on mobile. Advertisers should reach out to their publisher partners to find out how they can use native ads to connect with their customers and reach them when they’re most receptive.

If you want to learn more about native ads in DoubleClick, reach out to your account manager today. Also, visit the mobile solutions section of our website to see how DoubleClick can help you engage your audience on every screen.

Next to come in the DLS series: Google Preferred and Google Partner Select on DoubleClick.


Posted by Josh Cohen, Senior Product Manager 


Programmatic Selling just got better: Announcing Marketplace in DoubleClick Ad Exchange and Programmatic Guaranteed

As Neal Mohan announced last week at the DoubleClick Leadership Summit, Marketplace in DoubleClick Ad Exchange is now available to all customers globally, and we’re working to bring Programmatic Guaranteed to more publishers as soon as possible.

Today, the biggest brands and agencies are increasingly running premium digital campaigns across apps, videos, and native formats using programmatic technology, and we’re seeing this shift reflected across our platforms. Over the last year, the overall volume of programmatic transactions on our systems has grown 59%, while programmatic direct deals have jumped 2X. Today, eight of our top 25 publishers are selling at least 10% of their ad inventory through direct programmatic deals.

The growth of programmatic is just part of the story. Programmatic Direct deals, the channel of choice for many premium publishers, are also driving higher inventory prices. Preferred Deals and Private Auctions are generating CPMs that are double or triple what publishers see in the open auction. 

Introducing Marketplace in DoubleClick Ad Exchange
We see tremendous value in Programmatic Direct, but finding and connecting with all the potential advertisers looking for premium offers is a challenge. That's why we developed Marketplace, an easy-to-use interface for Ad Exchange buyers to discover, negotiate, and manage deals with the world's best apps, sites, and properties. For publishers, Marketplace is where their brand is showcased through a customizable publisher profile, and where their programmatic direct offers are discoverable by programmatic buyers globally.

Marketplace in DoubleClick Ad Exchange is just the beginning of how we see programmatic sales evolving. The programmatic buying trend shows no sign of slowing, and we believe even more premium deals will happen through programmatic channels. That's why, over the rest of this year, we'll be rolling out a brand new way to transact through DoubleClick: Programmatic Guaranteed.

Blending Direct with Programmatic Sales
Programmatic Guaranteed allows publishers to offer their reserved, premium inventory via a new programmatic channel in DoubleClick for Publishers, and provides brands an opportunity to buy reservations in a more efficient way. Publishers can lock in revenue, while giving advertisers guaranteed access to premium inventory with programmatic targeting and frequency management. It simplifies the workflow of a guaranteed deal, cutting the steps it takes to implement from 40 to 4. And the best part: it does all of this through our Real-Time Bidding infrastructure.

In our pilot testing with DoubleClick Bid Manager, Programmatic Guaranteed deals have been creating tremendous value. We've seen CPMs at 15 times open-auction prices, on par with upfront or reserved campaigns. But in the future, we see incredible new opportunities for Programmatic Guaranteed. Since our solution uses Real-Time Bidding instead of just automating line-item booking in the ad server, we can open up innovative new deal types that give publishers enhanced flexibility and truly blend direct and programmatic capabilities.

We're excited about the future of programmatic buying and selling and the possibilities it will bring for all of our partners. If you're interested in learning more about Marketplace in DoubleClick Ad Exchange, reach out to your account manager today, and stay tuned as we look to expand our pilot of Programmatic Guaranteed to more buyers.

This is the first announcement in our post-DLS series. Join us over the next week as we release more details on all our recent product announcements. Next up: Native Ads in DoubleClick.


Posted by Scott Spencer, Director of Product Management

A call to action: Stopping digital advertising fraud

A lot of ink has been spilled recently on the subject of digital advertising fraud—which is a great thing. Fraud is a real and serious problem, but some, we think, still hold a mental image of fraudsters as one-off bad actors sitting in a dark room, racking up clicks on the ads on their own sites to make a few extra bucks. The truth is far more troubling: the majority of ad fraud today is perpetrated by sophisticated organizations that devote vast resources to building and operating large-scale botnets on hijacked devices, reaping multi-million-dollar payouts [1,2].

Stopping these bad actors requires an industry-wide, long term commitment to identifying and filtering fake traffic from the ecosystem. This is not a task any one company can take on alone. We need everyone across the industry to take steps toward making digital advertising more secure and transparent. Here are some actions we’re taking to help move the entire industry forward. (We hope others join us.)

Describing threats in common, precise language
Many of the statistics and headline-grabbing disclosures in the market today do a great job of creating panic, but share very little detail to help anyone actually solve the problem.

Imagine if police officers looking for a bank robber could only describe the criminal as "suspicious". The robber would remain free for life. And yet, disappointingly, this is how advertising fraud is policed today. "Fraud" and "suspicious" are treated as synonymous and applied to everything from completely legitimate ad impressions to fake traffic generated by zombie PCs infected with malware. Before we can stop advertising fraud, everyone needs to start using common, precise language to disclose fraudulent activity.

The IAB introduced its Anti-Fraud Principles and Proposed Taxonomy last September, providing the industry with this common language, and we strongly support these standards. But these are early steps – as an industry we can't stop there. When fraud is identified, it should be shared in a clear, structured threat disclosure, mirroring how security researchers release security vulnerabilities. By increasing the amount of data we share in a transparent, helpful way, others in the industry will be able to corroborate any claims being made and remove the threat from their own systems, removing it from the ecosystem. Further, if a public disclosure could lead to further damage, then vulnerable parties should be notified in advance.

Ensuring bad actors can't hide: Supplier Identifiers
If you bought a designer scarf in a store only to find out it’s a knock-off with a fake label, you’d expect a refund. You’d also know which store to avoid in the future. The same should hold true for fraudulent inventory. When fraud is identified, it should also be possible to identify the seller or reseller who should take responsibility for the inventory. 

Today this doesn't hold true. As an illustration of the problem, we are currently finding significant volumes of inventory misrepresenting where the ads will actually appear, and in many instances there is no reliable, verifiable mechanism to identify who in the supply chain is responsible for this misrepresented inventory.

To address this problem, we propose that the buyer of any branded (non-blind) impression should be passed a chain of unique supplier identifiers: one for each and every reseller (exchange, network, sell-side platform) and one for the publisher. With this full chain of identifiers for each impression, buyers can establish which supply paths for inventory can be trusted and which cannot. If a buyer finds a potential issue, and it's clear where the problem lies in the supply path, then there should be an unambiguous process for refunds. It will also be easy to avoid that supply path in the future.
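
A buyer-side check over such a chain might look like the following sketch. The identifier format and the trusted set here are hypothetical; the proposal described above does not specify a concrete syntax:

```python
# Hypothetical supplier IDs this buyer has vetted and trusts.
TRUSTED_SUPPLIERS = {"pub-1001", "exch-20", "ssp-7"}

def trusted_supply_path(chain: list) -> bool:
    """An impression is acceptable only if every reseller and the publisher
    in its supplier-identifier chain is a known, trusted identity."""
    return all(supplier in TRUSTED_SUPPLIERS for supplier in chain)

print(trusted_supply_path(["pub-1001", "exch-20"]))           # True
print(trusted_supply_path(["pub-1001", "unknown-reseller"]))  # False
```

Because the whole chain travels with each impression, an untrusted link anywhere in the path is enough to reject it, which is exactly what makes a refund or blacklisting decision unambiguous.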

Ultimately the burden for ensuring the quality of online inventory starts with those who sell it. To this end, we submitted a proposal to create an industry-managed supplier identifier to the IAB Anti-Fraud Working Group in February, and we've heard others in the industry support this call for more transparency. We've come to take this type of guarantee for granted when we shop in a store – let's work together and make it a standard for digital advertising as well.

Cleaning up campaign metrics
Before investing your hard-earned money in a local business, you’d definitely review their financial reports to understand if it’s a good investment or not. In digital, campaign metrics are the record of truth. They help advertisers evaluate which inventory sources provide the greatest value and outline a roadmap of where ad spend should be invested. But if these metrics are polluted with fake and fraudulent activity, it’s impossible to know which inventory sources provide the best return on spend.

Now, imagine if you invested in that small business only to find out it was actually a fictional front created by an organized crime ring, complete with receipts and a cashier, to cover up their back office money laundering operation. Fraudsters work hard to disguise their bot traffic as being human by having them do things like go window shopping or plan a vacation to create a whole world of made-up conversions and interactions before directing them to their final destination.

As long as fake traffic still appears to be delivering value, advertisers' spend will continue flowing to the operators of fake traffic sources. Of course, our industry should push for a 100% fraud-free ecosystem. The reality, though, is that some fraud will likely always slip through. When it does, it's also our responsibility to keep it from skewing marketers' metrics. If we can keep reporting systems from giving credit to fake traffic, we remove the incentive for publishers to buy bad traffic from bad actors.
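
In spirit, that filtering step looks like the sketch below: events flagged as invalid are excluded before any metrics are computed, so fake traffic earns no credit. The event records and the `is_invalid` flag are illustrative, not a real reporting schema:

```python
# Hypothetical event log; is_invalid marks traffic caught by fraud filters.
events = [
    {"type": "impression", "is_invalid": False},
    {"type": "impression", "is_invalid": True},   # bot impression
    {"type": "click", "is_invalid": False},
    {"type": "click", "is_invalid": True},        # bot click: must not earn credit
]

def clean_metrics(events: list) -> dict:
    """Compute campaign metrics only from events that passed
    invalid-traffic filtering."""
    valid = [e for e in events if not e["is_invalid"]]
    return {
        "impressions": sum(e["type"] == "impression" for e in valid),
        "clicks": sum(e["type"] == "click" for e in valid),
    }

print(clean_metrics(events))  # {'impressions': 1, 'clicks': 1}
```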

As an industry, we owe it to our clients and ourselves to ensure that metrics are clean and accurate. Let’s work together to identify fraudulent traffic and invest in systems to filter it out of campaign metrics. 

A fraud-free ecosystem?
Advertising fraud is a real and serious problem, one that creates significant costs for advertisers, takes revenue from legitimate publishers, and enables the spread of malware to users, among other harms. To eliminate it, we must take action to remove the incentive for bad actors to create and sell fraudulent traffic. The steps I’ve outlined above seek to do this by cutting off their access to advertising spend and making it difficult for fraudsters to hide.

Over the coming months, we’ll be taking these steps and working with the industry to help others clean bad traffic from the ecosystem. 

Posted by Vegard Johnsen, Product Manager Google Ad Traffic Quality