Tag Archives: optimization

The cpu_features library

Originally posted by Guillaume Chatelet from the Google Compiler Research Team on the Google Open Source Blog

"Write Once, Run Anywhere." That was the promise of Java back in the 1990s. You could write your Java code on one platform, and it would run on any CPU implementing a Java Virtual Machine.


But for developers who need to squeeze every bit of performance out of their applications, that's not enough. Since the dawn of computing, performance-minded programmers have used insights about hardware to fine-tune their code.

Let's say you're working on code for which speed is paramount, perhaps a new video codec or a library to process tensors. There are individual instructions that will dramatically improve performance, like fused multiply-add, as well as entire instruction sets like SSE2 and AVX that can give the critical portions of your code a speed boost.

Here's the problem: there's no way to know a priori which instructions your CPU supports. Identifying the CPU manufacturer isn't sufficient. For instance, Intel's Haswell architecture supports the AVX2 instruction set, while Sandy Bridge doesn't. Some developers resort to desperate measures like reading /proc/cpuinfo to identify the CPU and then consulting hardcoded mappings of CPU IDs to instructions.

Enter cpu_features, a small, fast, and simple open source library to report CPU features at runtime. Written in C99 for maximum portability, it allocates no memory and is suitable for implementing fundamental functions and running in sandboxed environments.
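
For instance, feature-based dispatch with the library can look roughly like the sketch below. The header and field names follow cpu_features' public x86 API, but treat this as an illustration and check it against the version you install:

```c
// Minimal sketch: choose a code path at runtime based on detected CPU features.
// Assumes cpu_features' x86 header (cpuinfo_x86.h) and its X86Features fields.
#include <stdio.h>
#include "cpuinfo_x86.h"

int main(void) {
  const X86Features features = GetX86Info().features;

  if (features.avx2) {
    printf("AVX2 detected: dispatch to the AVX2-optimized kernel.\n");
  } else if (features.sse2) {
    printf("SSE2 detected: dispatch to the SSE2 kernel.\n");
  } else {
    printf("No SIMD extensions detected: use the portable fallback.\n");
  }
  return 0;
}
```

Because the check happens at runtime, a single binary can ship several code paths and pick the fastest one the CPU actually supports.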

The library currently supports x86, ARM/AArch64, and MIPS processors, and we'll be adding to it as the need arises. We also welcome contributions from others interested in making programs "write once, run fast everywhere."


Announcing the NYC Algorithms and Optimization Site



New York City is home to several Google algorithms research groups. We collaborate closely with the teams behind many Google products and work on a wide variety of algorithmic challenges, like optimizing infrastructure, protecting privacy, improving friend suggestions and much more.

Today, we’re excited to provide more insights into the research done in the Big Apple with the launch of the NYC Algorithms and Optimization Team page. The NYC Algorithms and Optimization Team comprises multiple overlapping research groups working on large-scale graph mining, large-scale optimization and market algorithms.

Large-scale Graph Mining
The Large-scale Graph Mining Group is tasked with building the most scalable library for graph algorithms and analysis and applying it to a multitude of Google products. We formalize data mining and machine learning challenges as graph algorithm problems and perform fundamental research in those fields, leading to publications in top venues.

Our projects include:
  • Large-scale Similarity Ranking: Our research in pairwise similarity ranking has produced a number of innovative methods, which we have published in top venues such as WWW, ICML, and VLDB, e.g., improving friend suggestion using ego-networks and computing similarity rankings in large-scale multi-categorical bipartite graphs.
  • Balanced Partitioning: Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. As our paper shows, we are able to achieve a 15-25% reduction in cut size compared to state-of-the-art algorithms in the literature.
  • Clustering and Connected Components: We have state-of-the-art implementations of many different algorithms including hierarchical clustering, overlapping clustering, local clustering, spectral clustering, and connected components. Our methods are 10-30x faster than the best previously studied algorithms and can scale to graphs with trillions of edges.
  • Public-private Graph Computation: Our research on novel models of graph computation based on a personal view of private data preserves the privacy of each user.

Large-scale Optimization
The Large-scale Optimization Group’s mission is to develop large-scale optimization techniques and use them to improve the efficiency and robustness of infrastructure at Google. We apply techniques from areas such as combinatorial optimization, online algorithms, and control theory to make Google’s massive computational infrastructure do more with less. We combine online and offline optimizations to achieve such goals as increasing throughput, decreasing latency, minimizing resource contention, maximizing the efficacy of caches, and eliminating unnecessary work in distributed systems.

Our research is used in critical infrastructure that supports core products:
  • Consistent Hashing: We designed memoryless balanced allocation algorithms to assign a dynamic set of clients to a dynamic set of servers such that the load on each server is bounded, and the allocation does not change by much for every update operation. This technique is currently implemented in Google Cloud Pub/Sub and externally in the open-source haproxy.
  • Distributed Optimization Based on Core-sets: Composable core-sets provide an effective method for solving optimization problems on massive datasets. This technique can be used for several problems including distributed balanced clustering and distributed submodular maximization.
  • Google Search Infrastructure Optimization: We partnered with the Google Search infrastructure team to build a distributed feedback control loop to govern the way queries are fanned out to machines. We also improved the efficacy of caching by increasing the homogeneity of the stream of queries seen by any single machine.

Market Algorithms
The Market Algorithms Group analyzes, designs, and delivers economically and computationally efficient marketplaces across Google. Our research serves to optimize display ads for DoubleClick’s reservation ads and exchange, as well as sponsored search and mobile ads.

In the past few years, we have explored a number of areas. For a summary of our research activities, take a look at the talks from our recent market algorithms workshop.

It is our hope that, with the help of this new Google NYC Algorithms and Optimization Team page, we can more effectively share our work and broaden our dialogue with the research and engineering community. Please visit the site to learn about our latest projects, publications, seminars, and research areas!

Federated Learning: Collaborative Machine Learning without Centralized Training Data



Standard machine learning approaches require centralizing the training data on one machine or in a datacenter. And Google has built one of the most secure and robust cloud infrastructures for processing this data to make our services better. Now for models trained from user interaction with mobile devices, we're introducing an additional approach: Federated Learning.

Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. This goes beyond the use of local models that make predictions on mobile devices (like the Mobile Vision API and On-Device Smart Reply) by bringing model training to the device as well.

It works like this: your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.
Your phone personalizes the model locally, based on your usage (A). Many users' updates are aggregated (B) to form a consensus change (C) to the shared model, after which the procedure is repeated.
Federated Learning allows for smarter models, lower latency, and less power consumption, all while ensuring privacy. And this approach has another immediate benefit: in addition to providing an update to the shared model, the improved model on your phone can also be used immediately, powering experiences personalized by the way you use your phone.

We're currently testing Federated Learning in Gboard on Android, the Google Keyboard. When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard’s query suggestion model.
To make Federated Learning possible, we had to overcome many algorithmic and technical challenges. In a typical machine learning system, an optimization algorithm like Stochastic Gradient Descent (SGD) runs on a large dataset partitioned homogeneously across servers in the cloud. Such highly iterative algorithms require low-latency, high-throughput connections to the training data. But in the Federated Learning setting, the data is distributed across millions of devices in a highly uneven fashion. In addition, these devices have significantly higher-latency, lower-throughput connections and are only intermittently available for training.

These bandwidth and latency limitations motivate our Federated Averaging algorithm, which can train deep networks using 10-100x less communication compared to a naively federated version of SGD. The key idea is to use the powerful processors in modern mobile devices to compute higher quality updates than simple gradient steps. Since it takes fewer iterations of high-quality updates to produce a good model, training can use much less communication. As upload speeds are typically much slower than download speeds, we also developed a novel way to reduce upload communication costs up to another 100x by compressing updates using random rotations and quantization. While these approaches are focused on training deep networks, we've also designed algorithms for high-dimensional sparse convex models which excel on problems like click-through-rate prediction.
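
As a rough illustration of the averaging step alone (a toy sketch, not Google's implementation; the model size, client count, and numbers below are invented), a server-side round of federated averaging combines the client deltas, weighting each one by how many local examples it was computed from:

```c
// Toy sketch of the server-side step of federated averaging: average client
// weight deltas, weighted by each client's number of local training examples.
// All sizes and values here are invented for illustration.
#include <stdio.h>

#define DIM 4          // toy model with 4 parameters
#define NUM_CLIENTS 3

int main(void) {
  // Each client reports a delta to the model plus its local example count.
  const double deltas[NUM_CLIENTS][DIM] = {
      { 0.10, -0.20, 0.05,  0.00},
      { 0.02, -0.05, 0.01,  0.03},
      {-0.04,  0.10, 0.00, -0.01},
  };
  const double num_examples[NUM_CLIENTS] = {120.0, 40.0, 200.0};

  double global_model[DIM] = {1.0, 0.5, -0.2, 0.0};
  double avg_delta[DIM] = {0.0};
  double total = 0.0;

  for (int c = 0; c < NUM_CLIENTS; ++c) total += num_examples[c];

  // Weighted average of the client updates.
  for (int c = 0; c < NUM_CLIENTS; ++c)
    for (int d = 0; d < DIM; ++d)
      avg_delta[d] += (num_examples[c] / total) * deltas[c][d];

  // Apply the consensus update to the shared model.
  for (int d = 0; d < DIM; ++d) {
    global_model[d] += avg_delta[d];
    printf("w[%d] = %.4f\n", d, global_model[d]);
  }
  return 0;
}
```

In the real system each client runs several local training steps before reporting a delta, which is what lets each communication round carry a higher-quality update than a single gradient step.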

Deploying this technology to millions of heterogeneous phones running Gboard requires a sophisticated technology stack. On-device training uses a miniature version of TensorFlow. Careful scheduling ensures training happens only when the device is idle, plugged in, and on a free wireless connection, so there is no impact on the phone's performance.
Your phone participates in Federated Learning only when it won't negatively impact your experience.
The system then needs to communicate and aggregate the model updates in a secure, efficient, scalable, and fault-tolerant way. It's only the combination of research with this infrastructure that makes the benefits of Federated Learning possible.

Federated learning works without the need to store user data in the cloud, but we're not stopping there. We've developed a Secure Aggregation protocol that uses cryptographic techniques so a coordinating server can only decrypt the average update if 100s or 1000s of users have participated — no individual phone's update can be inspected before averaging. It's the first protocol of its kind that is practical for deep-network-sized problems and real-world connectivity constraints. We designed Federated Averaging so the coordinating server only needs the average update, which allows Secure Aggregation to be used; however the protocol is general and can be applied to other problems as well. We're working hard on a production implementation of this protocol and expect to deploy it for Federated Learning applications in the near future.

Our work has only scratched the surface of what is possible. Federated Learning can't solve all machine learning problems (for example, learning to recognize different dog breeds by training on carefully labeled examples), and for many other models the necessary training data is already stored in the cloud (like training spam filters for Gmail). So Google will continue to advance the state-of-the-art for cloud-based ML, but we are also committed to ongoing research to expand the range of problems we can solve with Federated Learning. Beyond Gboard query suggestions, for example, we hope to improve the language models that power your keyboard based on what you actually type on your phone (which can have a style all its own) and photo rankings based on what kinds of photos people look at, share, or delete.

Applying Federated Learning requires machine learning practitioners to adopt new tools and a new way of thinking: model development, training, and evaluation with no direct access to or labeling of raw data, with communication cost as a limiting factor. We believe the user benefits of Federated Learning make tackling the technical challenges worthwhile, and are publishing our work with hopes of a widespread conversation within the machine learning community.

Acknowledgements
This post reflects the work of many people in Google Research, including Blaise Agüera y Arcas, Galen Andrew, Dave Bacon, Keith Bonawitz, Chris Brumme, Arlie Davis, Jac de Haan, Hubert Eichner, Wolfgang Grieskamp, Wei Huang, Vladimir Ivanov, Chloé Kiddon, Jakub Konečný, Nicholas Kong, Ben Kreuter, Alison Lentz, Stefano Mazzocchi, Sarvar Patel, Martin Pelikan, Aaron Segal, Karn Seth, Ananda Theertha Suresh, Iulia Turc, Felix Yu, and our partners in the Gboard team.

Consistent Hashing with Bounded Loads



Running a large-scale web service, such as content hosting, necessarily requires load balancing — distributing clients uniformly across multiple servers such that none get overloaded. Further, it is desirable to find an allocation that does not change very much over time in a dynamic environment in which both clients and servers can be added or removed at any time. In other words, we need the allocation of clients to servers to be consistent over time.

In collaboration with Mikkel Thorup, a visiting researcher from the University of Copenhagen, we developed a new efficient allocation algorithm for this problem with tight guarantees on the maximum load of each server, and studied it theoretically and empirically. We then worked with our Cloud team to implement it in Google Cloud Pub/Sub, a scalable event streaming service, and observed a substantial improvement in the uniformity of the load allocation (in terms of the maximum load assigned to servers) while maintaining the consistency and stability objectives. In August 2016 we described our algorithm in the paper “Consistent Hashing with Bounded Loads”, and shared it on ArXiv for potential use by the broader research community.

Three months later, Andrew Rodland from Vimeo informed us that he had found the paper, implemented it in haproxy (a widely-used piece of open source software), and used it for their load balancing project at Vimeo. The results were dramatic: applying these algorithmic ideas helped them decrease the cache bandwidth by a factor of almost 8, eliminating a scaling bottleneck. He recently summarized this story in a blog post detailing his use case. Needless to say, we were excited to learn that our theoretical research was not only put into application, but also that it was useful and open-sourced.

Background
While the concept of consistent hashing has been developed in the past to deal with load balancing in dynamic environments, a fundamental issue with all the previously developed schemes is that, in certain scenarios, they may result in sub-optimal load balancing on many servers.

Additionally, both clients and servers may be added or removed periodically, and with such changes, we do not want to move too many clients. Thus, while the dynamic allocation algorithm has to always ensure a proper load balancing, it should also aim to minimize the number of clients moved after each change to the system. Such allocation problems become even more challenging when we face hard constraints on the capacity of each server - that is, each server has a capacity that the load may not exceed. Typically, we want capacities close to the average loads.

In other words, we want to simultaneously achieve both uniformity and consistency in the resulting allocations. There is a vast amount of literature on solutions in the much simpler case where the set of servers is fixed and only the client set is updated, but in this post we discuss solutions that are relevant in the fully dynamic case where both clients and servers can be added and removed.

The Algorithm
We can think about the servers as bins and clients as balls to have a similar notation with well-studied balls-to-bins stochastic processes. The uniformity objective encourages all bins to have a load roughly equal to the average density (the number of balls divided by the number of bins). For some parameter ε, we set the capacity of each bin to either floor or ceiling of the average load times (1+ε). This extra capacity allows us to design an allocation algorithm that meets the consistency objective in addition to the uniformity property.

Imagine a given range of numbers overlaid on a circle. We apply a hash function to balls and a separate hash function to bins to obtain numbers in that range that correspond to positions on that circle. We then start allocating balls in a specific order independent of their hash values (let’s say based on their ID). Then each ball is moved clockwise and is assigned to the first bin with spare capacity.
Consider the example above where 6 balls and 3 bins are assigned using two separate hash functions to random locations on the circle. For the sake of this instance, assume the capacity of each bin is set to 2. We start allocating balls in the increasing order of their ID values. Ball number 1 moves clockwise, and goes to bin C. Ball number 2 goes to A. Balls 3 and 4 go to bin B. Ball number 5 goes to bin C. Then ball number 6 moves clockwise and hits bin B first. However bin B has capacity 2 and already contains balls 3 and 4. So ball 6 keeps moving to reach bin C but that bin is also full. Finally, ball 6 ends up in bin A that has a spare slot for it.
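
The allocation rule is simple enough to sketch in a few lines. Below is a toy illustration (not code from the paper or from haproxy; the hash function and constants are made up): each bin gets capacity ceil((1 + ε) × average load) — the post allows either the floor or the ceiling; the sketch uses the ceiling — and each ball, processed in ID order, walks clockwise from its hashed position to the first bin with spare capacity.

```c
// Toy sketch of consistent hashing with bounded loads: hash balls and bins
// onto a unit circle, cap each bin at ceil((1+eps) * average load), and send
// each ball (in ID order) to the first non-full bin encountered clockwise.
#include <math.h>
#include <stdio.h>

#define NUM_BINS 3
#define NUM_BALLS 6
#define EPSILON 0.5

// Illustrative hash: map an id (plus a seed) to a position in [0, 1).
static double circle_pos(unsigned id, unsigned seed) {
  unsigned x = (id + 1u) * 2654435761u ^ seed;
  return (double)(x % 100000u) / 100000.0;
}

int main(void) {
  double bin_pos[NUM_BINS], ball_pos[NUM_BALLS];
  int load[NUM_BINS] = {0};
  const int capacity =
      (int)ceil((1.0 + EPSILON) * (double)NUM_BALLS / NUM_BINS);

  for (int b = 0; b < NUM_BINS; ++b) bin_pos[b] = circle_pos((unsigned)b, 17u);
  for (int i = 0; i < NUM_BALLS; ++i) ball_pos[i] = circle_pos((unsigned)i, 91u);

  for (int i = 0; i < NUM_BALLS; ++i) {
    int chosen = -1;
    double best_dist = 2.0;  // anything > 1 acts as "infinity" on the circle
    for (int b = 0; b < NUM_BINS; ++b) {
      if (load[b] >= capacity) continue;  // bin is full: keep walking past it
      double d = bin_pos[b] - ball_pos[i];
      if (d < 0.0) d += 1.0;              // clockwise distance around the circle
      if (d < best_dist) { best_dist = d; chosen = b; }
    }
    load[chosen]++;
    printf("ball %d -> bin %d (load %d/%d)\n", i, chosen, load[chosen], capacity);
  }
  return 0;
}
```

Because total capacity always exceeds the number of balls, the clockwise walk is guaranteed to find a bin, and the (1 + ε) slack is the extra room the text above credits for making the consistency guarantee possible.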

Upon any update in the system (ball or bin insertion/deletion), the allocation is recomputed to keep the uniformity objective. The art of the analysis is to show that a small update (a small number of insertions and deletions) results in only minor changes in the state of the allocation, and therefore the consistency objective is met. In our paper we show that every ball removal or insertion in the system results in O(1/ε²) movements of other balls. The most important thing about this upper bound is that it is independent of the total number of balls or bins in the system, so if the number of balls or bins is doubled, the bound does not change. Having an upper bound independent of the number of balls or bins leaves room for scalability, as the consistency objective is not violated when we move to bigger instances. Simulations of the number of movements (relocations) per update, when an update occurs on a bin/server, are shown below.
The red curve shows the average number of movements and the blue bars indicate the variance for different values of ε (the x-axis). The dashed curve is the upper bound suggested by our theoretical results, which fits nicely as a prediction of the actual number of movements.

Furthermore, for any value of ε, we know the load of each bin is at most (1+ε) times the average load. Below we see the load distribution of bins for the values ε = 0.1, ε = 0.3, and ε = 0.9.
The distribution of loads for several values of ε. The load distribution is nearly uniform, covering the full range of loads from 0 to (1+ε) times the average, with many bins carrying a load equal to (1+ε) times the average.
As one can see there is a tradeoff — a lower ε helps with uniformity but not with consistency, while larger ε values help with consistency. A lower ε will ensure that many loads will be equal to the hard capacity limit of (1+ε) times the average, and the rest have a decaying distribution.

When providing content hosting services, one must be ready to face a variety of instances with different characteristics. This consistent hashing scheme is ideal for such scenarios as it performs well even for worst-case instances.

While our internal results are exciting, we are even more pleased that the broader community found our solution useful enough to open-source, allowing anyone to use this algorithm. If you are interested in further details of this research, please see the paper on ArXiv, and stay tuned for more research from the NYC Algorithms Team!

Acknowledgements:
We would like to thank Alex Totok, Matt Gruskin, Sergey Kondratyev and Haakon Ringberg from the Google Cloud Pub/Sub team, and of course Mikkel Thorup for his invaluable contributions to this paper.

How to optimize your AdSense ad placements for mobile users

This is the final guest post from AdSense publisher Brandon Gaille. Brandon has built his small business marketing blog, BrandonGaille.com, to over 2 million monthly visitors in less than three years. He’s featured as our guest blogger to share insights and tips from his personal blogging experience to help AdSense publishers grow earnings. If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 


Every year more people are using their phones and devices to browse web pages. In 2013, mobile made up only 17% of web traffic. In 2016, this number has risen to over 38%. Within the next couple of years, mobile traffic will easily surpass 50%.


Mobile's Share of Global Web Traffic


This is why you need to take time to optimize your AdSense ads for mobile traffic. Although you can easily grab a responsive AdSense ad unit, there are more ways to optimize your ad units for mobile. The responsive unit may be the easiest option, but I’ve found that the easy way does not always produce the best results. I’ve tested the responsive ad units on my blogs against manual optimization, and the results were staggering.


The manual optimization of my ads produced a 54% increase in my AdSense revenue.


Here’s what I learned from the tests I ran:


#1 A large mobile banner at the top of the page earned the most money on my site

The highest-producing location was below the title of a post and above the first paragraph. It’s important to know that AdSense amended their policy on ads above the fold on mobile devices, and you can no longer use the 300x250 ad above the fold on mobile.


#2 Hide the sidebar ads in tablets and mobile

The sidebar is going to be pushed down to the bottom of the post when it is viewed in mobile. This is essentially banishing any ads in the sidebar to no man’s land. Most premium WordPress themes will allow you to turn off ad spots in the sidebar. This will allow you to drop in an additional AdSense ad into the post to get maximum monetization from mobile.


#3 The best ad grouping was top, middle, and bottom

Out of all the mobile ad groupings, this one easily produced the most revenue for me. The grouping was made up of three 250x250 ads. The first ad was below the title and above the first paragraph. The second ad was placed after the 6th paragraph of the post. The final ad was placed at the end of the post.


In addition to mobile optimization, I applied four AdSense optimization strategies, which resulted in an overall revenue increase of close to 300%. Whether you are making $500/month or $5000/month, a 300% increase can make a huge impact on your yearly earnings.


Go here to read all of my “5 AdSense Optimization Strategies that Will Increase Your Earnings.”


Posted by Brandon Gaille

Brandon Gaille is an AdSense publisher. You can learn more about Brandon at BrandonGaille.com and listen to his popular blogging podcast, The Blog Millionaire.

If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 

Source: Inside AdSense


How to earn more money with AdSense by decreasing your bounce rate

This is the fourth of five guest posts from AdSense publisher Brandon Gaille. Brandon has built his small business marketing blog, BrandonGaille.com, to over 2 million monthly visitors in less than three years. He’s featured as our guest blogger to share insights and tips from his personal blogging experience to help AdSense publishers grow earnings. If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 


Google Analytics defines bounce rate as the percentage of single-page sessions - that is, visitors who left your site after viewing only a single page. When your bounce rate is high, it also means that your AdSense ads may not be seen by a large percentage of your audience.

Over the years, I've researched this topic many times over in an effort to constantly decrease the bounce rate of my sites and my clients’ sites. Through countless hours of A/B testing and deep analytics research, I was able to identify 25 tactics that consistently reduced the bounce rate.

The great thing about most of these tactics is that they usually only take a matter of minutes to incorporate, and you can start seeing results the next day.


#1 Do not use more than 7 sentences per paragraph

You never want to block too much text together. One really long paragraph can easily overwhelm your visitors and lead them to hitting the back button.

Most bloggers write their posts on a desktop or laptop computer. From a computer, the occasional 12 to 15 sentence paragraph does not look too intimidating. However, over 50% of my blog visitors are using their phones to read the posts on my site. On a phone, these long paragraphs will fill up the entire screen and add to your bounce rate.


I like to break up my paragraphs into different sizes. This can make the text of a post visually stimulating, which can turn scanners into readers.


Using an occasional single-sentence paragraph will speed up the flow of the article and add some nice white space.


#2 Keep your column width between 700 and 800 pixels


Many big-name bloggers have been considering ditching the sidebar. Although the sidebar does not get as many clicks as it once did, this is largely due to the increase in mobile traffic.


A post without a sidebar will have a column width well beyond 800 pixels. This is going to make your content look very long on a desktop computer. The ideal width for engagement is 700 pixels, which will allow between 80 and 90 characters per line.


Smashing Magazine did a study on the typographic design patterns in websites. When they looked at a segment of websites with the highest engagement, they found the majority of these sites had between 75 and 90 characters per line.

#3 Organize your content with headers and sub-headers


Based on reviewing heat maps of millions and millions of page views, I’ve found that visitors to blog posts are a mix of readers and scanners. To be precise, the results showed that 40% are readers and 60% are scanners. The readers start by reading the introduction paragraph, while the scanners scroll through the entire post. The scanners consistently stop scrolling to read each header and sub-header.


For the readers, most bloggers are pulling them into the post with a great introduction. However, the vast majority fail to create compelling headers. The easiest type of post to break into headers is the list post. For example, “13 Habits that Lead to Success.”


Each habit should be turned into a bold header and be able to stand alone as its own title. The goal here is to create thirteen compelling titles. Each title is designed to grab the reader’s attention and drive them into reading that section.


If you’ve enjoyed these three tips to decrease your bounce rate, go here to read all of the “25 Proven Ways to Decrease Your Bounce Rate.”


Posted by Brandon Gaille


Brandon Gaille is an AdSense publisher. You can learn more about Brandon at BrandonGaille.com and listen to his popular blogging podcast, The Blog Millionaire.

 If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 



Source: Inside AdSense


How to choose the right ad tools for your site

Welcome to the #SuccessStack, a new series of articles designed to help you:
  • Access Google’s large network of advertisers 
  • Grow your publishing business 
  • Earn more from the ads on your site
This first article can help you choose the right tools to sell and manage the ads on your site.




AdSense: Get started with easy access to Google’s network of advertisers
Who it’s for:
Publishers looking for a smart and easy-to-use tool to optimize their ad space and revenue.

What it does: AdSense makes it easy for you to place, manage and earn revenue from ads on your site. With AdSense, Google is your advertising sales team, bringing you ads from millions of advertisers using AdWords and other Google advertising programs.

AdSense includes simple and easy controls to help you get started with earning money from ads, but it also does a lot of work behind the scenes to help you make more money. It’s a bit like an automatic car -- it removes some of the manual adjustment, allowing you to cruise along with less effort. You still need regular “tune ups” to get optimal performance, but you won’t need to shift gears all the time.

DoubleClick Ad Exchange: Control who gets programmatic access to the ads on your site with advanced features
Who it’s for: Publishers who require more granular control over their inventory and who have the resources and expertise to manage ongoing optimizations. This product is suited to publishers with yield management expertise and those who need advanced features like Programmatic Direct.

What it does: DoubleClick Ad Exchange gives you real-time access to the largest pool of advertisers. This means that in addition to AdWords advertisers, you can also access major ad networks and agencies.

A major difference between DoubleClick Ad Exchange and AdSense is that AdSense does a lot of the technical settings and optimization work for you, such as automating the sale of all your ad space to the highest bidder. With DoubleClick Ad Exchange, you can control these adjustments yourself and control exactly how your inventory is sold. As an example, DoubleClick Ad Exchange allows you to choose which ad space is for public sale and which is reserved for private auctions. This increased amount of user input is necessary for you to get the best results from DoubleClick Ad Exchange. Another important distinction is that the AdSense demand is majority AdWords advertisers, whereas DoubleClick Ad Exchange pulls demand from multiple sources. You can see the full list of differences at our help center.

DoubleClick For Publishers: Scale your advertising business
Who it’s for: Publishers who are looking for a tool that has AdSense or Ad Exchange built in, along with lots of useful features to help them schedule, deliver, and measure their ad inventory regardless of how they sell it: to networks, programmatically, or through their own direct sales teams.

What it does:
DoubleClick for Publishers is a single platform that allows you to manage and deliver all of your web, mobile, and video advertising across all your sales channels. It doesn’t come with its own ads, but rather helps you scale your ads business by managing your ad sales across a variety of ad networks such as AdSense, ad exchanges like DoubleClick Ad Exchange, and direct advertising partners. You can get started with the small business version right away for free, or talk to us about integrating with the premium, paid version that is built for large organizations with sophisticated ad sales teams.

Both versions have a simple interface, lots of great tools, built-in revenue optimization, and Google-powered ad delivery to provide a simple, worry-free way to potentially increase the value of your ad impressions.

Ready to get started?
You can arrange a consultation with one of our experts who can help you choose the right solution for your business and set up AdSense, DoubleClick Ad Exchange, or DoubleClick for Publishers.



Posted by Jay Castro, from the AdSense team.

Source: Inside AdSense


How to create better blog titles that can drive more traffic to your ads

This is the third of five guest posts from AdSense publisher Brandon Gaille. Brandon has built his small business marketing blog, BrandonGaille.com, to over 2 million monthly visitors in less than three years. He’s featured as our guest blogger to share insights and tips from his personal blogging experience to help AdSense publishers grow earnings. If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 

Over the past three years, I’ve crafted titles for over 5,000 blog posts and have received over 58 million unique visitors to date. With that many titles and that much traffic, I’ve been able to identify which types of titles get the most traffic.


The title of your page or blog post will play one of the largest roles in how much traffic you receive. From my extensive experience, a really great title can improve a blog post’s discoverability and increase the number of social shares by over 300%.


The bottom line is… If you fail to write a compelling title that gets people to click, then your post is doomed to wallow in mediocrity.


Here are a few title optimization tactics that have proven to drive the most traffic.


#1 Place a number at the beginning of your title


If you have a list-formatted post, then you need to be using numbered titles every single time. Titles that begin with numbers have proven to drive traffic, largely because users read list posts more than any other type of blog post. A list post typically has anywhere from seven to forty key points, which are listed out numerically.
This makes it really easy for anyone to scan through the big takeaways and decide whether to dive deeper into the article. When people see the number 13 at the beginning of the title, they know they can scan through all 13 key points in a matter of seconds.


A numbered title paired with a list post will drive more clicks to your post, and list-style posts have one of the highest engagement rates. Posts with more clicks and higher engagement are often rewarded by becoming more discoverable to users.

Here are a couple of examples of numbered blog titles:


  • 11 Tools to Create Share-Worthy Content
  • 17 Incredible Social Media Statistics


I recommend crafting numbered blog post titles for more than half of your posts.

A Conductor study on headline preferences also backs up what I’ve found to be true on my blog.


#2 The odd number gets 20% more clicks than the even number


Although no one has figured out exactly why this happens, odd-numbered titles get more clicks than even-numbered titles. Here’s an example.
Odd-Numbered Title: 11 Keys to Earning More Money on AdSense
Even-Numbered Title: 12 Types of Ads that Convert

Before you hit publish on the blog post titled, “8 Crazy Ways to Double Your Ad Revenue,” take a moment to either add one more tip or remove the least valuable tip. This will allow you to capitalize on the extra twenty percent of clicks by having an odd numbered title.

Learn more about creating better blog titles from my blog and read all of the “17 Ways to Create Catchy Blog Titles That Drive Traffic.”


Posted by Brandon Gaille

Brandon Gaille is an AdSense publisher. You can learn more about Brandon at BrandonGaille.com and listen to his popular blogging podcast, The Blog Millionaire.

If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 


Source: Inside AdSense


Put your users first with the four S’s: Speed, Scroll, Style, Simple

We’re all consumers of web content. Yet as content creators it can be easy to forget what we need as users. But don’t worry, you’ve got this, and we’ve got you covered with just four S’s.

 If you’re new to AdSense, be sure to sign up today and start turning your #PassionIntoProfit. 


1. Speed 

We all know how frustrating it is when a page takes forever to load. We twiddle our thumbs and look from side to side. And after just three seconds, we bounce.

But somehow publishers aren’t responding to this primal need that we all know as users.

According to Google's research from the Mobile Speed Matters report, the average load time for mobile sites across the web is 19 seconds. This is a LONG time. Usain Bolt can run 200m in 19.9s - think of what your users can do with a tap and a swipe.

But how does this impact me? Well, the report also states that…

  • 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
  • Publishers whose mobile sites load in 5 seconds earn up to 2x more mobile ad revenue than those whose sites load in 19 seconds. 


By now we think you’re sold on speed. So what’s next?

2. Scroll

The magic scroll. It’s an infinite, endless, perfectly loaded stream of content. There’s no need to click, to wait for a page to load, to navigate to that tiny ‘next’ with your giant thumb. It’s all right here, content, just waiting for you to consume it.

There are, of course, a few caveats before developing an infinite scroll. Like almost everything online, this isn’t a one-size-fits-all solution.

Infinite scroll is great for ...
  • UGC publishers with constantly evolving content - think Tumblr, Facebook, Pinterest.
  • Sites with lengthy articles or tutorials. No one wants to click ‘more’ or ‘page 2’ anymore. It’s just too dang hard. 
  • Publishers using a slideshow with pagination. Consider a lazy loaded infinite scroll instead. Users love it. 
  • Publishers considering mobile first (aren’t we all?!).
Watch out for … 
  • Crawler errors and SEO impact; check out this article on creating a search-friendly infinite scroll.

3. Style 

Style should never be an afterthought. You and your users want to interact with something that looks good and feels good. 

There are two primary components to style: content style & ad style. 

First: Content Style 

Great websites are able to maintain a consistent style across pages and platforms. Consistency gives users a sense of familiarity when interacting with your content.
  • Choose a color scheme and stick to it 
  • Choose a layout and stick to it 
  • Choose a theme and stick to it 
We can’t stress this enough - stick to it. 

As the industry continues to migrate towards a mobile first perspective, consistency across device types and platforms becomes increasingly important. Responsive web design enables your site to adapt to various device sizes without changing the overall look and feel or compromising user experience. 

If you're up for the challenge, check out more on responsive design. 

Second: Ad Style 

In the internet of yesteryear it was nearly impossible to monetize without stripping a site of what made it beautiful. The good news? It’s 2016 and now you have the ability to make a profit and maintain your site’s style. 


When implementing ads think about what makes sense for you and your users.
Here's a sample of a native ad design.
Most importantly, use ads to complement the content of your site. Since content is king, it’s important to ensure that you give your users what they’re looking for in a format that’s easy to find and navigate; this includes the ads on your site.

Place ads at natural breaks or where the user’s attention may have waned. Not only will this improve user experience, but it may also encourage a higher CTR and increased audience engagement.

4. Simple

Keep it simple, folks. 

This rule underpins almost everything that is targeted towards consumers, but it is even more important for a mobile-first audience.

When it comes to consuming digital content, we’re a generation of hungry hippos. We want headlines, snippets, concise and clear information. We want minimalist design with streamlined content and easy navigation.

Tips on keeping it simple
  • Make it touch friendly. What’s easier than that?
  • Bullet points make your content easily consumable 
  • Be brief in sign-ups. If your site requires users to sign-up or sign-in, keep input requirements to a minimum or consider adding a Google sign-in option to speed up the process 

So there you have it - the four S’s of user experience: speed, scroll, style, and simple. If you’re new to AdSense, be sure to sign up today and start turning your #PassionIntoProfit.


Posted by: Sarah Hornsey, from the AdSense team


Source: Inside AdSense