Headphones optimized for your Aussie Google Assistant

Your Assistant is already available to help on phones, Google Home and more. But sometimes you need something a bit more personal, just for you, on your headphones. Like when you’re commuting on the train and want some time to yourself. Or reading at home and looking for some peace and quiet.


To help with those “in between” moments, together with Bose we’re announcing headphones that are optimized for the Assistant, starting with the Bose QC35 II. So now, you can keep up to date on your messages, music and more—using your eligible Android phone or iPhone.


To get started, connect your QC35 II headphones to your phone via Bluetooth, open your Google Assistant and follow the instructions. From there, your Assistant is just a button away—press and hold the Action button to quickly and easily talk to your Assistant.


  • Stay connected to what matters: Hear your incoming messages, events and more, automatically, right from your headphones. So if you’re listening to your favorite song and you get a text, your Assistant can read it to you, no extra steps.
  • Listen to news and more: Now it’s easy to keep up with news while you walk to the bus, hop on the train or go for a run. Just ask your Assistant to “play the news” and you’ll get a read-out of the current hot topics. You can choose from a variety of news sources, like ABC News, The Australian and more.
  • Keep in touch with friends: With your Assistant on headphones, you can make a call with just a few simple words—“Call dad”—take the call from your headphones and continue on your way. No stopping or dialing, just talking.


We’ve worked together with Bose to create a great Assistant experience on the QC35 II—whether you’re on a crowded street or squished on a train, Bose’s active noise cancellation will help eliminate unwanted sounds around you, so you’re able to hear your Assistant, your music and more. The Assistant on the QC35 II will be available in English to all Aussies as well as in the U.K., the U.S., Canada, Germany and France.

We’ll continue to add features, apps and more to your Assistant on headphones over the coming weeks.


Stable Channel Update for Desktop

The stable channel has been updated to 61.0.3163.100 for Windows, Mac and Linux, which will roll out over the coming days/weeks.

Security Fixes and Rewards


Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 3 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$7500][765433] High CVE-2017-5121: Out-of-bounds access in V8. Reported by Jordan Rabet, Microsoft Offensive Security Research and Microsoft ChakraCore team on 2017-09-14
[$3000][752423] High CVE-2017-5122: Out-of-bounds access in V8. Reported by Choongwoo Han of Naver Corporation on 2017-08-04


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

As usual, our ongoing internal security work was responsible for a wide range of fixes:
  • [767508] Various fixes from internal audits, fuzzing and other initiatives

Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.



A list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Krishna Govind
Google Chrome

Introducing limits on file access requests in Team Drives

Starting today, in situations where a user is sent a link to a file in a Team Drive that they don’t have access to, we’ll only send the Request Access notification to the creator of the file, or a limited group of individuals who had relevant interaction with the Team Drive. We will no longer always send it to all members of the Team Drive.

This change not only helps ensure that the members of your Team Drives aren’t receiving unnecessary emails, but also prevents unwanted oversharing of Team Drive content.

Please Note: We are continuing to add signals and improve the quality of our Request Access notifications so the members of your Team Drives can quickly gain access to content.

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Available to G Suite Enterprise, Business, Nonprofit, and Education editions only

Rollout pace:
Full rollout (1–3 days for feature visibility)

Impact:
All end users

Action:
Change management suggested/FYI

More Information on how to use Team Drives
Help Center: Share files with Team Drives


Search in the new Google Sites

Recently, we made it easier to surface content in the new Google Sites. Now we’re making it easier to find content on those sites. Going forward, users can simply click the magnifying glass in the top right corner of their screen and search across an entire site.


In addition, Google Cloud Search users will now see content from the new Google Sites in their Cloud Search results.


These improvements will allow Google Sites to better serve the needs of both site creators and viewers. Check out the Help Center for more details.

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Search bar in the new Google Sites

  • Available to all G Suite editions

New Google sites appear in Google Cloud Search results

  • Available to G Suite Business and Enterprise editions only

Rollout pace:
Search bar in the new Google Sites

  • Full rollout (1–3 days for feature visibility)

New Google sites appear in Google Cloud Search results

  • Extended rollout (potentially longer than 15 days for feature visibility)

Impact:
All end users

Action:
Change management suggested/FYI

More Information
Help Center: Preview and publish your site on the web



Set restrictions on what content people interact with and share on Google+

Google+ is a great tool for helping employees discover and engage with external content that can be relevant to work. From an administrative perspective, we’ve heard from you that having more controls around content and commenting rights for your users would be helpful. That’s why today, we’re making this possible by adding new content restriction settings in Google+.

To change Google+ sharing settings for a specific organizational unit (OU), you can go to Apps > G Suite > Google+ > Advanced Settings in the Admin console, where you can first select the appropriate OU.



Depending on your preferences and the needs of the people in that OU, you can then pick one of three options:

  • Public Mode - View/comment on internal and external content. This mode allows G Suite users to view/share/interact with content both inside and outside of the domain. This is similar to how Google+ operates today. This mode is best for specific OUs whose members (e.g. those in outward-facing roles like marketing and support) should be able to interact with customers and external partners via Google+.
  • Private Mode - View/comment on internal content only. This setting offers the most control over Google+ activity, as it restricts G Suite users to viewing/sharing/interacting with only people inside their domain. Please note, G Suite users can still view content outside of their domain if they get a direct link or joined/followed an external community/person/collection prior to being placed into this mode.
  • Hybrid Mode - View external content (e.g. industry news), but only engage with it internally. This setting offers moderate control over Google+ activity, as it allows G Suite users to view content outside their domain, but only share/interact with it internally.
The table below outlines what G Suite users can do under each setting:

  Setting         View external content   Share/interact externally   View/share internal content
  Public Mode     Yes                      Yes                          Yes
  Hybrid Mode     Yes                      No                           Yes
  Private Mode    No*                      No                           Yes

  *Except content reached via a direct link or via communities, people, or collections joined before the restriction was applied.

Since these settings can be customized at the OU level, you’ll have the opportunity to differentiate permissions. For example, you can use the Public setting only for OUs that need to interact with people outside of the domain, whether that be the support team, marketing team, or others in customer-facing roles.

For more information on the impact to user experiences at each level of this setting, please review this Help Center article.

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Available to all G Suite editions

Rollout pace:
Gradual rollout (up to 15 days for feature visibility)

Impact:
All end users

Action:
Admin action suggested/FYI

More Information
Help Center: Manage Google+ content sharing


How Machine Learning with TensorFlow Enabled Mobile Proof-Of-Purchase at Coca-Cola

In this guest editorial, Patrick Brandt of The Coca-Cola Company tells us how they're using AI and TensorFlow to achieve frictionless proof-of-purchase.

Coca-Cola's core loyalty program launched in 2006 as MyCokeRewards.com. The "MCR.com" platform included the creation of unique product codes for every Coca-Cola, Sprite, Fanta, and Powerade product sold in 20oz bottles and cardboard "fridge-packs" purchasable at grocery stores and other retail outlets. Users could enter these product codes at MyCokeRewards.com to participate in promotional campaigns.

Fast-forward to 2016: Coke's loyalty programs are still hugely popular with millions of product codes having been entered for promotions and sweepstakes. However, mobile browsing went from non-existent in 2006 to over 50% share by the end of 2016. The launch of Coke.com as a mobile-first web experience (replacing MCR.com) was a response to these changes in browsing behavior. Thumb-entering 14-character codes into a mobile device could be a difficult enough user experience to impact the success of our programs. We want to provide our mobile audience the best possible experience, and recent advances in artificial intelligence opened new opportunities.

The quest for frictionless proof-of-purchase

For years Coke attempted to use off-the-shelf optical character recognition (OCR) libraries and services to read product codes with little success. Our printing process typically uses low-resolution dot-matrix fonts with the cap or fridge-pack media running under the printhead at very high speeds. All of this translates into a low-fidelity string of characters that defeats off-the-shelf OCR offerings (and can sometimes be hard to read with the human eye as well). OCR is critical to simplifying the code-entry process for mobile users: they should be able to take a picture of a code and automatically have the purchase registered for a promotional entry. We needed a purpose-built OCR system to recognize our product codes.

Bottlecap and fridge-pack examples

Our research led us to a promising solution: Convolutional Neural Networks. CNNs are one of a family of "deep learning" neural networks that are at the heart of modern artificial intelligence products. Google has used CNNs to extract street address numbers from StreetView images. CNNs also perform remarkably well at recognizing handwritten digits. These number-recognition use-cases were a perfect proxy for the type of problem we were trying to solve: extracting strings from images that contain small character sets with lots of variance in the appearance of the characters.

CNNs with TensorFlow

In the past, developing deep neural networks like CNNs was a challenge because of the complexity of available training and inference libraries. TensorFlow, a machine learning framework that was open-sourced by Google in November 2015, is designed to simplify the development of deep neural networks.

TensorFlow provides high-level interfaces to different kinds of neuron layers and popular loss functions, which makes it easier to implement different CNN model architectures. The ability to rapidly iterate over different model architectures dramatically reduced the time required to build Coke's custom OCR solution because different models could be developed, trained, and tested in a matter of days. TensorFlow models are also portable: the framework supports model execution natively on mobile devices ("AI on the edge") or in servers hosted remotely in the cloud. This enables a "create once, run anywhere" approach for model execution across many different platforms, including web-based and mobile.
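To make that concrete, here is a minimal sketch of the kind of multi-position character classifier the article describes, written against TensorFlow's Keras API. This is an illustration, not Coca-Cola's production architecture: the 32-symbol character set, the input size and the layer widths are all assumptions.

    import tensorflow as tf

    NUM_POSITIONS = 14  # characters per product code (from the article)
    CHARSET_SIZE = 32   # assumed size of the code alphabet

    def build_code_reader(input_shape=(64, 256, 1)):
        """A small CNN with one softmax head per character position."""
        image = tf.keras.Input(shape=input_shape)
        x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(image)
        x = tf.keras.layers.MaxPooling2D()(x)
        x = tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
        x = tf.keras.layers.Flatten()(x)
        x = tf.keras.layers.Dense(256, activation="relu")(x)
        # One classification head per character position, each producing a
        # probability distribution over the character set.
        outputs = [
            tf.keras.layers.Dense(CHARSET_SIZE, activation="softmax",
                                  name=f"char_{i}")(x)
            for i in range(NUM_POSITIONS)
        ]
        model = tf.keras.Model(image, outputs)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        return model

Because every layer is a one-line call, swapping convolution depths or head structures in and out is exactly the kind of rapid architecture iteration described above.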

Machine learning: practice makes perfect

Any neural network is only as good as the data used to train it. We knew that we needed a large set of labeled product-code images to train a CNN that would achieve our performance goals. Our training set would be built in three phases:

  1. Pre-launch simulated images
  2. Pre-launch real-world images
  3. Images labeled by our users in production

The pre-launch training phase began by programmatically generating millions of simulated product-code images. These simulated images included variations in tilt, lighting, shadows, and blurriness. The prediction accuracy (i.e. how often all 14 characters were correctly predicted within the top-10 predictions) was at 50% against real-world images when the model was trained using only simulated images. This provided a baseline for transfer-learning: a model initially trained with simulated images was the foundation for a more accurate model that would be trained against real-world images.
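As a toy illustration of the simulated-image idea (the real generator, fonts and distortion parameters are not public), one could render random codes and distort them with an imaging library such as Pillow:

    import random
    from PIL import Image, ImageDraw, ImageFilter, ImageFont

    CHARSET = "ACDEFGHJKLMNPQRTUVWXY34679"  # assumed code alphabet

    def simulated_code_image(width=256, height=64):
        """Render a random 14-character code, then distort it."""
        code = "".join(random.choices(CHARSET, k=14))
        img = Image.new("L", (width, height), color=random.randint(180, 255))
        draw = ImageDraw.Draw(img)
        draw.text((10, 20), code, fill=random.randint(0, 80),
                  font=ImageFont.load_default())
        img = img.rotate(random.uniform(-8, 8))            # tilt
        img = img.filter(ImageFilter.GaussianBlur(
            radius=random.uniform(0.0, 1.5)))              # focus/blur issues
        return img, code  # training image plus its ground-truth label

Generating labels alongside images is what makes millions of training examples essentially free at this stage.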

The challenge now turned to enriching the simulated images with enough real-world images to hit our performance goals. We created a purpose-built training app for iOS and Android devices that "trainers" could use to take pictures of codes and label them; these labeled images were then transferred to cloud storage for training. We did a production run of several thousand product codes on bottle caps and fridge-packs and distributed these to multiple suppliers who used the app to create the initial real-world training set.

Even with an augmented and enriched training set, there is no substitute for images created by end-users in a variety of environmental conditions. We knew that scans would sometimes result in an inaccurate code prediction, so we needed to provide a user experience that would allow users to quickly correct these predictions. Two components are essential to delivering this experience: a product-code validation service that has been in use since the launch of our original loyalty platform in 2006 (to verify that a predicted code is an actual code), and a prediction algorithm that performs a regression to determine a per-character confidence at each of the 14 character positions. If a predicted code is invalid, the top prediction as well as the confidence levels for each character are returned to the user interface. Low-confidence characters are visually highlighted to guide the user to update characters that need attention.
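The shape of that correction loop can be sketched as follows, assuming the multi-head model above; validate_code stands in for Coke's private code-validation service and the 0.85 threshold is illustrative:

    import numpy as np

    def predict_with_confidence(model, image, charset):
        """Return the top code prediction plus a per-position confidence."""
        # Each of the 14 heads returns a (1, CHARSET_SIZE) distribution.
        head_probs = model.predict(image[np.newaxis, ...])
        chars, confidences = [], []
        for probs in head_probs:
            idx = int(np.argmax(probs[0]))
            chars.append(charset[idx])
            confidences.append(float(probs[0][idx]))
        return "".join(chars), confidences

    def scan(model, image, charset, validate_code, threshold=0.85):
        code, confs = predict_with_confidence(model, image, charset)
        if validate_code(code):  # hypothetical backend validation call
            return code, []      # valid code, nothing to highlight
        # Invalid: return the guess plus the low-confidence positions so
        # the UI can highlight the characters that likely need correction.
        flagged = [i for i, c in enumerate(confs) if c < threshold]
        return code, flagged

The corrected code a user submits can then be paired with the original image and fed back into the training pipeline, which is the active-learning loop described next.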

Error correction user interface lets users correct invalid predictions and generate useful training data

This user interface innovation enables an active learning process: a feedback loop allows the model to gradually improve by returning corrected predictions to the training pipeline. In this way, our users organically improve the accuracy of the character recognition model over time.

Product-code recognition pipeline

Optimizing for maximum performance

To meet user expectations around performance, we established a few ambitious requirements for the product-code OCR pipeline:

  • It had to be fast: we needed a one-second average processing time once the image of the product-code was sent into the OCR pipeline
  • It had to be accurate: our goal was to achieve 95% string recognition accuracy at launch with the guarantee that the model could be improved over time via active learning
  • It had to be small: the OCR pipeline needs to be small enough to be distributed directly to mobile apps and accommodate over-the-air updates as the model improves over time
  • It had to handle diverse product code media: dozens of different combinations of font types, bottlecaps, and cardboard fridge-pack media

We initially explored an architecture that used a single CNN for all product-code media. This approach created a model that was too large to be distributed to mobile apps, and the execution time was longer than desired. Our applied-AI partners at Quantiphi, Inc. began iterating on different model architectures, eventually landing on one that used multiple CNNs.

This new architecture reduced the model size dramatically without sacrificing accuracy, but it was still on the high end of what we needed in order to support over-the-air updates to mobile apps. We next used TensorFlow's prebuilt quantization module to reduce the model size by reducing the fidelity of the weights between connected neurons. Quantization reduced the model size by a factor of 4, but a dramatic reduction in model size occurred when Quantiphi had a breakthrough using a new approach called SqueezeNet.
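TensorFlow's quantization tooling has changed since this article was written; as a sketch of the same idea in today's API, post-training quantization through the TFLite converter stores weights at reduced precision and yields roughly the 4x size reduction described above (again assuming the model built earlier):

    import tensorflow as tf

    # Post-training quantization: re-encode float32 weights at lower
    # precision, shrinking the serialized model roughly 4x.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    # The resulting flat buffer is small enough to ship with a mobile app
    # and to replace via over-the-air updates as the model improves.
    with open("code_reader.tflite", "wb") as f:
        f.write(tflite_model)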

The SqueezeNet model was published by a team of researchers from UC Berkeley and Stanford in November of 2016. It uses a small but highly complex design to achieve accuracy levels on par with much larger models against popular benchmarks such as ImageNet. After re-architecting our character recognition models to use a SqueezeNet CNN, Quantiphi was able to reduce the model size of certain media types by a factor of 100. Since the SqueezeNet model was inherently smaller, a richer feature detection architecture could be constructed, achieving much higher accuracy at much smaller sizes compared to our first batch of models trained without SqueezeNet. We now have a highly accurate model that can be easily updated on remote devices; the recognition success rate of our final model before active learning was close to 96%, which translates into a 99.7% character recognition accuracy, or just 3 misses for every 1,000 character predictions (the two figures agree: 0.997^14 ≈ 0.96).
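The building block behind those size reductions is SqueezeNet's "fire" module: a 1x1 "squeeze" convolution that cuts the channel count, feeding parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated. A minimal Keras sketch (filter counts are illustrative, not the production values):

    import tensorflow as tf

    def fire_module(x, squeeze_filters=16, expand_filters=64):
        """SqueezeNet fire module: squeeze with a 1x1 conv, then expand with
        parallel 1x1 and 3x3 convs and concatenate the two outputs."""
        s = tf.keras.layers.Conv2D(squeeze_filters, 1, activation="relu")(x)
        e1 = tf.keras.layers.Conv2D(expand_filters, 1, activation="relu",
                                    padding="same")(s)
        e3 = tf.keras.layers.Conv2D(expand_filters, 3, activation="relu",
                                    padding="same")(s)
        return tf.keras.layers.Concatenate()([e1, e3])

Because most parameters sit in the cheap squeeze and 1x1 expand paths, stacking fire modules buys depth and feature richness at a fraction of the parameter cost of a conventional convolution stack.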

Valid product-code recognition examples with different types of occlusion, translation, and camera focus issues

Crossing boundaries with AI

Advances in artificial intelligence and the maturity of TensorFlow enabled us to finally achieve a long-sought proof-of-purchase capability. Since launching in late February 2017, our product code recognition platform has fueled more than a dozen promotions and resulted in over 180,000 scanned codes; it is now a core component for all of Coca-Cola North America's web-based promotions.

Moving to an AI-enabled product-code recognition platform has been valuable for two key reasons:

  • Frictionless proof-of-purchase was enabled in a timely fashion, corresponding to our overall move to a mobile-first marketing platform.
  • Coke saved millions of dollars by avoiding the requirement to update printers in our production lines to support higher-fidelity fonts that would work with existing off-the-shelf OCR software.

Our product-code recognition platform is the first execution of new AI-enabled capabilities at scale within Coca-Cola. We're now exploring AI applications across multiple lines of business, from new product development to ecommerce retail optimization.

Cooling off with #teampixel

We enjoyed all the fun in the sun with #teampixel this summer. From a whirlwind tour around the globe to getting one with nature, our Pixel photographers shared some stunning shots that gave us the chills (in a good way). Before we head into fall, we’re paying one last homage to the warmer months with a series spotlighting the cooler tones. Thanks for keeping us cool this summer, #teampixel. 😎

Shout out to @nemod96, whose photo is featured above and makes an appearance on our Instagram today. Tag your photos with #teampixel and you could be featured, too. 

How Google went all in on video meetings (and you can, too)

Editor’s note: this is the first article in a five-part series on Google Hangouts.

I’ve worked at Google for more than a decade and have seen the company expand across geographies—including to Stockholm where I have worked from day one. My coworkers and I build video conferencing technology to help global teams work better together.

It’s sometimes easy to forget what life was like before face-to-face video conferencing (VC) at work, but we struggled with many of the same issues that other companies deal with—cobbled together communication technologies, dropped calls, expensive solutions. Here’s a look at how we transitioned Google to be a cloud video meeting-first company.

2004 - 2007: Life before Hangouts

In the mid-2000s, Google underwent explosive growth. We grew from nearly 3,000 employees to more than 17,000 across 40 offices globally. Historically, we relied on traditional conference phone bridging and email to communicate across time zones, but phone calls don’t exactly inspire creativity and tone gets lost in translation with email threads.

We realized that the technology we used didn’t mirror how our teams actually like to work together. If I want to sort out a problem or present an idea, I’d rather be face-to-face with my team, not waiting idly on a conference bridge line.

Google decided to go all in on video meetings. We outsourced proprietary video conferencing (VC) technology and outfitted large meeting rooms with these devices. 

If I need to sort out a problem or present an idea, I’d rather be face-to-face with my team, not waiting idly on a conference bridge line.
Hangouts 1
A conference room in Google’s Zurich office in 2007, equipped with the outsourced VC technology.

While revolutionary, this VC technology was extremely costly. Each unit could cost upwards of $50,000, and that did not include support, licensing and network maintenance fees. To complicate matters, the units were powered by complex, on-prem infrastructure and required several support technicians. By 2007, nearly 2,400 rooms were equipped with the technology.

Then we broke it.

The system was built to host meetings for team members in the office, but didn't cater to people on the go. As more and more Googlers used video meetings, we reached maximum capacity on the technology’s infrastructure and experienced frequent dropped calls and poor audio/visual (AV) quality. I even remember one of the VC bridges catching on fire! We had to make a change.

2008 - 2013: Taking matters into our own hands

In 2008, we built our own VC solution that could keep up with the rate at which we were growing. We scaled with software and moved meetings to the cloud.

Our earliest “Hangouts” prototype was Gmail Video Chat, a way to connect with contacts directly in Gmail. Hours after releasing the service to the public, it had hundreds of thousands of users.

Gmail voice and video chat

The earliest software prototype for video conferencing at Google, Gmail Video Chat.

Hangouts 2

Arthur van der Geer tests out the earliest prototype for Hangouts, go/meet. 

While a good start, we knew we couldn’t scale group video conferencing within Gmail. We built our second iteration, which tied meeting rooms to unique URLs. We introduced it to Googlers in 2009 and the product took off.

During this journey, we also built our own infrastructure (WebRTC) so we no longer had to rely on third-party audio and video components. Our internal IT team created our own VC hardware prototypes; we used touchscreen computers and custom software with the first version of Hangouts and called it “Google Video Conferencing” (“GVC” for short).

First Google Video Conferencing Prototype | 2008

Google engineers test the first Google Video Conferencing hardware prototype in 2008.

With each of these elements, we had now built our earliest version of Hangouts. After a few years of testing—and widespread adoption by Googlers—we made the platform available externally to customers in 2014 (“Chromebox for Meetings”). In the first two weeks, we sold more than 2,000 units. By the end of the year, every Google conference room and company device had access to VC.

2014 - today: Transforming how businesses do business


Nearly a decade has passed since we built the first prototype. Face-to-face collaboration is ingrained in Google’s DNA now—more than 16,500 meeting rooms are VC-equipped at Google and our employees join Hangouts 240,000 times per day! That's equivalent to spending more than 10 years per day collaborating in video meetings. And now, more than 3 million businesses are using Hangouts to transform how they work, too.

We learned a lot about what it takes to successfully collaborate as a scaling business. If you’re looking to transition your meetings to the cloud with VC, here are a few things to keep in mind:

  1. Encourage video engagement from the start. Every good idea needs a champion. Be seen as an innovator by evangelizing video engagement in company meetings from the start. Your team will thank you for it.
  2. If you’re going to move to VC, make it available everywhere. We transformed our work culture to be video meeting-first because we made VC ubiquitous. Hangouts Meet brings you a consistent experience across web, mobile and conference rooms. If you’re going to make the switch, go all in and make it accessible to everyone.
  3. Focus on the benefits. Video meetings can help distributed teams feel more engaged and help employees collaborate whenever, and wherever, inspiration strikes. This means you’ll have more diverse perspectives, which makes for better-quality output.

What’s next? Impactful additions and improvements to Hangouts Meet will be announced soon. All the while, we’re continuing to research how teams work together and how we can evolve VC technology to reflect that collaboration. For example, we’re experimenting with making scheduling easier for teams thanks to the @meet AI bot in the early adopter version of Hangouts Chat.


Source: Google Cloud



Supercharge your call-only ads with ad extensions

When people want dedicated service or to get specific questions answered, they often pick up the phone to speak to a real person. Advertisers also drive more value from these direct conversations with customers: on average, calls convert three times better than web clicks.

Hundreds of thousands of advertisers are already using call-only ads to generate more phone calls from mobile search. We are now introducing upgrades to call-only ads, starting with the launch of ad extensions. For the first time, you’ll be able to show ad extensions with call-only ads to promote more relevant information about your products and services, and give people more reasons to choose your business. In early experiments, we’ve found that implementing new extensions to call-only ads can improve clickthrough rate by 10% on average.

Introducing ad extensions for call-only ads


The following extensions for call-only ads will begin rolling out to all advertisers starting today:
  • Location extensions - highlight information about your nearby business locations for customers who want to visit your store in person.
  • Callout extensions - promote unique offers and benefits, such as 24-hour call center service.
  • Structured snippets - provide more specific details about your products and services using predefined headers like “Destinations” and “Types”. For example, a rental car company might list various car classes like sedans, hybrids and SUVs.


Advertisers and agencies like Hertz, Vortex Industries and DexYP will be taking advantage of extensions to enhance their call-only ads with additional details and improve their visibility in search results.


“Calls help us effectively engage an increasingly mobile-first audience. They also drive better conversion rates compared to mobile text ads that take customers to our site. With new ad extensions for call-only ads, we hope to improve our CTR and call volume by taking up more real estate in search results, and showing customers additional relevant information like different car classes and ancillary product options.”
- Jeremy Venlet, Director, Digital Operations and Performance, Hertz

“Choosing the right partner for commercial door repair is an important decision for any business, which is why we offer extensive service and support over the phone to help guide our customers to book on-site appointments with our expertly trained technicians. Partnering with our digital agency, YPM, Inc., we've made call-only campaigns a big part of our digital strategy, since phone calls are so important to our business. Making ad extensions available with call-only ads is a huge step forward because they'll allow us to go beyond the standard description text in ad copy to showcase our unique value propositions, such as prompt and efficient service and a proven track record of delivering results for our customers.”
- Stacey Muto, Marketing Director, Vortex Industries, Inc.


“DexYP partners with hundreds of thousands of local businesses to sustain and grow their customer base through online marketing channels. For many of these clients, driving leads and sales from phone calls is their top performance goal. For those that utilize Search Engine Marketing, call-only campaigns have been instrumental to helping these businesses generate more calls from mobile search, where more and more consumers are looking to connect with local businesses each year. We're incredibly excited to further improve our clients’ campaigns by adding new ad extensions. For example, the enhancement with location extensions will help further promote businesses who can assist customers over the phone, or at a nearby store location. Providing more relevant details in call-only ads will help them stand out more in search results and give users more reasons to call.”
- Brandon Hulme, Sr. Product Manager, SEM Products and Platforms, DexYP

If you already have location, callout, or structured snippet extensions set up at the account level, you don’t need to take any extra steps: the extensions will automatically be eligible to appear with your call-only ads. You can also tailor the messaging on your extensions at the campaign level to help them work better with call-only ads. For example, highlight your fast call center service or offer a special discount when customers book an appointment over the phone for the same week.

To learn more about call-only ads, visit the Help Center and check out best practices for driving more calls to your business.

Source: Inside AdWords