
Adopting a Community-Oriented Approach to Open Source License Compliance

Today Google joins Red Hat, Facebook, and IBM alongside the Linux Kernel Community in increasing the predictability of open source license compliance and enforcement.

We are taking an approach to compliance enforcement that is consistent with the Principles of Community-Oriented GPL Enforcement. We hope that this will encourage greater collaboration on open source projects, and foster discussion on how we can all continue to work closely together.

You can learn more about today’s announcement in Red Hat’s press release and in our GPL Enforcement Statement.

By Chris DiBona, Director of Open Source

New lower prices for GPUs and preemptible Local SSDs



We’ve been seeing customers (like Shazam and Schlumberger) harness the scale of Google Cloud and the power of NVIDIA Tesla GPUs to innovate, accelerate and save money. Today we’re extending the benefits of GPUs by cutting the price of NVIDIA Tesla GPUs attached to on-demand Google Compute Engine virtual machines by up to 36 percent. In US regions, each K80 GPU attached to a VM is priced at $0.45 per hour while each P100 costs $1.46 per hour.

Lower-priced GPUs, together with Custom VM shapes and Sustained Usage Discounts (which provide up to an additional 30% off instance pricing), allow you to run highly parallelized compute tasks on GPUs with strong performance, all at a great price.

Our GPU virtual machines let you create the exact VM configuration, in both performance and cost, that your workload needs. Specifically, we enable you to create VM shapes with the right number of vCPUs, GPUs and memory for your specific application. Optionally, if you need fast disk performance with your GPUs, you can attach up to 3TB of Local SSD to any GPU-enabled VM. In addition, to help ensure our Cloud GPU customers receive bare-metal performance, the hardware is passed through directly to the virtual machine.

Scientists, artists and engineers need access to massively parallel computational power. Deep learning, physical simulation and molecular modeling can take hours instead of days on NVIDIA Tesla GPUs.

Regardless of the size of your workload, GCP can provide the right amount of computational power to help you get the job done.

As an added bonus, we’re also lowering the price of preemptible Local SSDs by almost 40 percent compared to on-demand Local SSDs. In the US this means $0.048 per GB-month.

We hope that the price reduction on NVIDIA Tesla GPUs and preemptible Local SSDs unlocks new opportunities and helps you solve more interesting business, engineering and scientific problems.

For more details, check out our documentation for GPUs. For more pricing information, take a look at the Compute Engine GPU pricing page or try out our pricing calculator. If you have questions or feedback, go to the Getting Help page.

Getting started with the power of GPU-enabled instances is easy—just start one up in the Google Cloud Platform Console. If you don’t have a GCP account yet, sign up today and get $300 in credits.

Introducing Dialogflow Enterprise Edition, a new way to build voice and text conversational apps



From chatbots to IoT devices, conversational apps provide a richer and more natural experience for users. Dialogflow (formerly API.AI) was created for exactly that purpose — to help developers build interfaces that offer engaging, personal interactions.

We’ve seen hundreds of thousands of developers use Dialogflow to create conversational apps for customer service, commerce, productivity, IoT devices and more. Developers have consistently asked us to add enterprise capabilities, which is why today we’re announcing the beta release of Dialogflow Enterprise Edition. The enterprise edition expands on all the benefits of Dialogflow, offering greater flexibility and support to meet the needs of large-scale businesses. We're also announcing speech integration within Dialogflow, enabling developers to build rich voice-based applications.

Here’s a little more on what Dialogflow offers:

  • Conversational interaction powered by machine learning: Dialogflow uses natural language processing to build conversational experiences faster and iterate more quickly. Provide a few examples of what a user might say and Dialogflow will build a unique model that can learn what actions to trigger and what data to extract so it provides the most relevant and precise responses to your users.
  • Build once and deploy everywhere: Use Dialogflow to build a conversational app and deploy it on your website, your app or 32 different platforms, including the Google Assistant and other popular messaging services. Dialogflow also supports multiple languages and multilingual experiences so you can reach users around the world.
  • Advanced fulfillment options: Fulfillment defines the corresponding action in response to whatever a user says, such as processing an order for a pizza or triggering the right answer to your user's question. Dialogflow allows you to connect to any webhook for fulfillment whether it's hosted in the public cloud or on-premises. Dialogflow’s integrated code editor allows you to code, test and implement these actions directly within Dialogflow's console.
  • Voice control with speech recognition: Starting today, Dialogflow enables your conversational app to respond to voice commands or voice conversations. It's available within a single API call, combining speech recognition with natural language understanding.
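
If you'd like a feel for how the intent matching described above looks from code, here's a minimal sketch using the Dialogflow v2 Java client. The project ID, session ID and sample utterance are placeholders, and the exact client surface may vary slightly with the library version you use:

```java
import com.google.cloud.dialogflow.v2.DetectIntentResponse;
import com.google.cloud.dialogflow.v2.QueryInput;
import com.google.cloud.dialogflow.v2.QueryResult;
import com.google.cloud.dialogflow.v2.SessionName;
import com.google.cloud.dialogflow.v2.SessionsClient;
import com.google.cloud.dialogflow.v2.TextInput;

public class DetectIntentExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical project and session identifiers; replace with your own.
    String projectId = "my-gcp-project";
    String sessionId = "test-session-1";

    try (SessionsClient sessionsClient = SessionsClient.create()) {
      // A session groups the back-and-forth turns of one conversation.
      SessionName session = SessionName.of(projectId, sessionId);

      // Send one user utterance; Dialogflow matches it to an intent.
      TextInput.Builder textInput =
          TextInput.newBuilder().setText("I'd like to order a large pizza").setLanguageCode("en-US");
      QueryInput queryInput = QueryInput.newBuilder().setText(textInput).build();

      DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
      QueryResult result = response.getQueryResult();

      // The matched intent, the extracted parameters and the agent's reply.
      System.out.println("Intent:   " + result.getIntent().getDisplayName());
      System.out.println("Params:   " + result.getParameters());
      System.out.println("Response: " + result.getFulfillmentText());
    }
  }
}
```

The same session-based detectIntent flow is what the new speech integration builds on, with audio supplied in place of the text input.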


Dialogflow Enterprise Edition also offers:

  • Google Cloud Platform Terms of Service: Dialogflow Enterprise Edition is covered by the Google Cloud Platform Terms of Service, including the Data Privacy and Security Terms. Enterprise Edition users are also eligible for Cloud Support packages, and the Enterprise Edition will soon provide SLAs with committed availability levels.
  • Flexibility and scale: Dialogflow Enterprise Edition offers higher default quotas so it’s easier to scale your app up or down based on user demand.
  • Unlimited pay-as-you-go voice support: While both the standard and enterprise editions now allow your conversational app to detect voice commands or respond to voice conversations, Dialogflow Enterprise Edition offers unlimited pay-as-you-go voice support.

Companies such as Uniqlo, PolicyBazaar and Strayer University have already used Dialogflow to design and deploy conversational experiences.

Creating new online shopping experiences for Uniqlo


UNIQLO is a modern Japanese retailer that operates nearly 1,900 stores worldwide. The company integrated a chatbot into its mobile app to provide quick, relevant answers to a range of customer questions, regardless of whether customers are shopping online or in-store. This makes the shopping experience easier and more enjoyable. Since deploying the chatbot, 40% of users have interacted with it on a weekly basis.
“Our shopping chatbot was developed using Dialogflow to offer a new type of shopping experience through a messaging interface, with responses continually being improved through machine learning. Going forward, we’re also looking to expand the functionality to include voice recognition and multiple languages.” 
Shinya Matsuyama, Director of Global Digital Commerce, Uniqlo

Changing the way we buy insurance with PolicyBazaar


PolicyBazaar is the leading insurance marketplace in India, founded in 2008 to educate consumers and make it easy to compare and purchase insurance products. The company today hosts over 80 million visitors yearly and records nearly 150,000 transactions a month.

Using Dialogflow Enterprise Edition, PolicyBazaar created and deployed a conversational assistant chatbot, PBee, to better serve its visitors and transform the way customers purchase insurance online. The company has been using the logging and training module to track top customer requests and improve fulfillment capabilities. In just a few months, PBee has grown to handle over 60% of customer queries over chat, resulting in faster fulfillment of requests from its users.

Since deploying the chatbot, the company has seen a five-fold increase in customers using their chat interface for auto insurance, and chat now contributes to 40% of the company's auto insurance sales.
“Dialogflow is by far the best platform for text-based conversational chatbots. With it, we derive all the benefits of machine learning without restrictions on the frontend. Through our chatbot, we are now closing over 13,000 sales totaling a premium of nearly $2 million (USD) every month and growing at a 30% month-over-month rate.”  
Ashish Gupta, CTO & CPO, Policybazaar.com

For more on the differences between the standard and the enterprise editions of Dialogflow, we recommend reading our documentation.

We look forward to seeing what you'll build during our public beta. To learn more about Dialogflow Enterprise Edition, visit our product page.

With Multi-Region support in Cloud Spanner, have your cake and eat it too



Today, we’re thrilled to announce the general availability of Cloud Spanner Multi-Region configurations. With this release, we’ve extended Cloud Spanner’s transactions and synchronous replication across regions and continents. That means no matter where your users may be, apps backed by Cloud Spanner can read and write up-to-date (strongly consistent) data globally and do so with minimal latency for end users. In other words, your app now has an accurate, consistent view of the data it needs to support users whether they’re around the corner or around the globe. Additionally, when running a Multi-Region instance, your database is able to survive a regional failure.

This release also delivers an industry-leading 99.999% availability SLA with no planned downtime. That’s 10x less downtime (< 5min / year) than database services with four nines of availability.

Cloud Spanner is the first and only enterprise-grade, globally distributed and strongly consistent database service built specifically for the cloud that combines the benefits and familiarity of relational database semantics with non-relational scale and performance. It now supports a wider range of application workloads, from a single node in a single region to massive instances that span regions and continents. At any scale, Cloud Spanner behaves the same, delivering a single database experience.
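
To make that "single database experience" concrete, here's a minimal sketch of a strongly consistent read using the Cloud Spanner Java client. The project, instance, database and Singers table names are hypothetical placeholders:

```java
import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.DatabaseId;
import com.google.cloud.spanner.ResultSet;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerOptions;
import com.google.cloud.spanner.Statement;

public class SpannerReadExample {
  public static void main(String[] args) {
    Spanner spanner = SpannerOptions.newBuilder().build().getService();
    try {
      // Hypothetical identifiers; substitute your own project, instance and database.
      DatabaseId db = DatabaseId.of("my-project", "my-instance", "my-database");
      DatabaseClient dbClient = spanner.getDatabaseClient(db);

      // A strongly consistent read: the same client code works whether the
      // instance is a single-region or a Multi-Region configuration.
      try (ResultSet resultSet =
          dbClient.singleUse().executeQuery(
              Statement.of("SELECT SingerId, FirstName, LastName FROM Singers"))) {
        while (resultSet.next()) {
          System.out.printf("%d %s %s%n",
              resultSet.getLong("SingerId"),
              resultSet.getString("FirstName"),
              resultSet.getString("LastName"));
        }
      }
    } finally {
      spanner.close();
    }
  }
}
```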


Since we announced the general availability of Cloud Spanner in May, customers, from startups to enterprises, have rethought what a database can do, and have been migrating their mission-critical production workloads to it. For example, Mixpanel, a business analytics service, moved their sharded MySQL database to Cloud Spanner to handle user-id lookups when processing events from their customers' end users' web browsers and mobile devices.

No more trade-offs


For years, developers and IT organizations were forced to make painful compromises between the horizontal scalability of non-relational databases and the transactions, structured schema and complex SQL queries offered by traditional relational databases. With the increase in volume, variety and velocity of data, companies had to layer additional technologies and scale-related workarounds to keep up. These compromises introduced immense complexity and only addressed the symptoms of the problem, not the actual problem.

This summer, we announced an alliance with marketing automation provider Marketo, Inc., which is migrating to GCP and Cloud Spanner. Companies around the world rely on Marketo to orchestrate, automate, and adapt their marketing campaigns via the Marketo Engagement Platform. To meet the demands of its customers today and tomorrow, Marketo needed to be able to process trillions of activities annually, creating an extreme-scale big data challenge. When it came time to scale its platform, Marketo did what many companies do: it migrated to a non-relational database stack. But if your data is inherently transactional, moving to a system without transactions makes keeping data ordered and readers consistent very hard.

"It was essential for us to have order sequence in our app logic, and with Cloud Spanner, it’s built in. When we started looking at GCP, we quickly identified Cloud Spanner as the solution, as it provided relational semantics and incredible scalability within a managed service. We hadn’t found a Cloud Spanner-like product in other clouds. We ran a successful POC and plan to move several massive services to Cloud Spanner. We look forward to Multi-Region configurations, as they give us the ability to expand globally and reduce latencies for customers on the other side of the world" 
— Manoj Goyal, Marketo Chief Product Officer

Mission-critical high availability


For global businesses, reliability is expected, but maintaining that reliability while also rapidly scaling can be a challenge. Evernote, a cross-platform app for individuals and teams to create, assemble, nurture and share ideas in any form, migrated to GCP last year. In the coming months, it will mark the next phase of its move to the cloud by migrating to a single Cloud Spanner instance to manage more than 8 billion pieces of its customers’ notes, replacing over 750 MySQL instances in the process. Cloud Spanner Multi-Region support gives Evernote the confidence it needs to make this bold move.
"At our size, problems such as scalability and reliability don't have a simple answer, Cloud Spanner is a transformational technology choice for us. It will give us a regionally distributed database storage layer for our customers’ data that can scale as we continue to grow. Our whole technology team is excited to bring this into production in the coming months."
Ben McCormack, Evernote Vice President of Operations

Strong consistency with scalability and high performance


Cloud Spanner delivers scalability and global strong consistency so apps can rely on an accurate and ordered view of their data around the world with low latency. Redknee, for example, provides enterprise software to mobile operators to help them charge their subscribers for their data, voice and texts. Its customers' network traffic currently runs through traditional database systems that are expensive to operate and come with processing capacity limitations.
“We want to move from our current on-prem per-customer deployment model to the cloud to improve performance and reliability, which is extremely important to us and our customers. With Cloud Spanner, we can process ten times more transactions per second (using a current benchmark of 55k transactions per second), allowing us to better serve customers, with a dramatically reduced total cost of ownership." 
— Danielle Royston, CEO, Redknee

Revolutionize the database admin and management experience


Standing up a globally consistent, scalable relational database instance is usually prohibitively complex. With Cloud Spanner, you can create an instance in just a few clicks and then scale it simply using the Google Cloud Console or programmatically. This simplicity revolutionizes database administration, freeing up time for activities that drive the business forward, and enabling new and unique end-user experiences.
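
As a rough illustration of what "programmatically" can look like, the sketch below creates a Multi-Region instance with the Java client's instance admin API. The project and instance names are placeholders, and "nam-eur-asia1" stands in for whichever instance configuration you choose:

```java
import com.google.api.gax.longrunning.OperationFuture;
import com.google.cloud.spanner.Instance;
import com.google.cloud.spanner.InstanceAdminClient;
import com.google.cloud.spanner.InstanceConfigId;
import com.google.cloud.spanner.InstanceId;
import com.google.cloud.spanner.InstanceInfo;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerOptions;
import com.google.spanner.admin.instance.v1.CreateInstanceMetadata;

public class CreateSpannerInstance {
  public static void main(String[] args) throws Exception {
    Spanner spanner = SpannerOptions.newBuilder().build().getService();
    try {
      InstanceAdminClient instanceAdmin = spanner.getInstanceAdminClient();

      // Hypothetical names; "nam-eur-asia1" is one of the Multi-Region configurations.
      OperationFuture<Instance, CreateInstanceMetadata> op =
          instanceAdmin.createInstance(
              InstanceInfo.newBuilder(InstanceId.of("my-project", "my-instance"))
                  .setInstanceConfigId(InstanceConfigId.of("my-project", "nam-eur-asia1"))
                  .setDisplayName("my instance")
                  .setNodeCount(3)
                  .build());

      Instance created = op.get();  // Blocks until the long-running operation completes.
      System.out.println("Created instance: " + created.getId());
    } finally {
      spanner.close();
    }
  }
}
```

Scaling up or down later is just an update of the node count on the same instance.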

A different way of thinking about databases


We believe Cloud Spanner is unique among databases and cloud database services, offering a global relational database, not just a feature to eventually copy or replicate data around the world. At Google, Spanner powers apps that process billions of transactions per day across many Google services. In fact, it has become the default database internally for apps of all sizes. We’re excited to see what your company can do with Cloud Spanner as your database foundation.

Want to learn more? Check out the many whitepapers discussing the technology behind Cloud Spanner. Then, when you’re ready to get started, follow our Quickstart guide to Cloud Spanner, or Kelsey Hightower’s post How to get started with Cloud Spanner in 5 minutes.

Introducing Certified Kubernetes (and Google Kubernetes Engine!)



When Google launched Kubernetes three years ago, we knew, based on our 10 years of experience with Borg, how useful it would be to developers. But even we couldn’t have predicted just how successful it would become. Kubernetes is one of the world’s highest-velocity open source projects, supported by a diverse community of contributors. It was designed at its heart to run anywhere, and dozens of vendors have created their own Kubernetes offerings.

It's critical to Kubernetes users that their applications run reliably across different Kubernetes environments, and that they can access the new features in a timely manner. To ensure a consistent developer experience across different Kubernetes offerings, we’ve been working with the Cloud Native Computing Foundation (CNCF) and the Kubernetes community to create the Certified Kubernetes Conformance Program. The Certified Kubernetes program officially launched today, and our Kubernetes service is among the first to be certified.

Choosing a Certified Kubernetes platform like ours and those from our partners brings both benefits and peace of mind, especially for organizations with hybrid deployments. With the greater compatibility of Certified Kubernetes, you get:
  • Smooth migrations between on-premises and cloud environments, and a greater ability to split a single workload across multiple environments 
  • Consistent upgrades
  • Access to community software and support resources
The CNCF hosts a complete list of Certified Kubernetes platforms and distributions. If you use a Kubernetes offering that's not on the list, encourage your provider to become certified as soon as possible!

Putting the K in GKE


One of the benefits of participating in the Certified Kubernetes Conformance Program is being able to use the name “Kubernetes” in your product. With that, we’re taking this opportunity to rename Container Engine to Kubernetes Engine. From the beginning, Container Engine’s acronym has been GKE in a nod to Kubernetes. Now, as a Certified Kubernetes offering, we can officially put the K in GKE.

While the Kubernetes Engine name is new, everything else about the service is unchanged—it’s still the same great managed environment for deploying containerized applications that you trust to run your production environments. To learn more about Kubernetes Engine, visit the product page, or the documentation for a wealth of quickstarts, tutorials and how-tos. And as always, if you’re just getting started with containers and Google Cloud Platform, be sure to sign up for a free trial.

Announcing Architecture Components 1.0 Stable

Posted by Lukas Bergstrom, Product Manager, Android Developer Frameworks Team

Android runs on billions of devices, from high-end phones to airplane seatbacks. The Android OS manages resources aggressively to perform well on this huge range of devices, and sometimes that can make building robust apps complicated. To make it easier, we launched a preview of Architecture Components at Google I/O to provide guidance on app architecture, with libraries for common tasks like lifecycle management and data persistence. Together, these foundational components make it possible to write modular apps with less boilerplate code, so developers can focus on innovating instead of reinventing the wheel - and we hope to keep building on this foundation in the future.

Today we're happy to announce that the Room and Lifecycle Architecture Components libraries have reached 1.0 stable. These APIs are ready for production apps and libraries, and are our recommendation for developers looking for help with app architecture and local storage (although they're recommended, not required). Lifecycles are now also integrated with the Support Library, so you can use them with standard classes like AppCompatActivity.

Although we're declaring them stable today, the beta components are already used in apps that, together, have billions of installs. Top developers, like Zappos, have been able to spend more time on what's important thanks to Architecture Components:

Prior to the release of Android Architecture Components we had our own ViewModel implementation. We used Loaders and Dependency Injection to persist our ViewModel through config changes. We recently switched to the Architecture Components ViewModel implementation and all that boilerplate went away. We found that we were able to spend more time on design, business logic and testing, and less on writing boilerplate or worrying about Android lifecycle issues.

We've also started to use LiveData, which hooks directly into the Activity lifecycle. We use it to retrieve and display network data and no longer have to concern ourselves with network call subscription management. - David Henry, Android Software Engineer, Zappos

Architecture Components provide a simple, flexible and practical approach that frees developers from some common problems so they can focus on building great experiences. This is based on core building blocks tied together by guidance on app architecture.

Lifecycles

Every Android developer has to deal with the operating system starting, stopping and destroying their Activities. That means managing the state of components - such as observables used to update UI - as you move through the lifecycle. Lifecycles enables the creation of lifecycle-aware components that manage their own lifecycles, reducing the possibility of leaks or crashes. The Lifecycle library is the foundation for other Architecture Components like LiveData.
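
As a small illustration, here's what a lifecycle-aware component might look like with the 1.0 annotation-based observer API; the location-listener scenario is hypothetical:

```java
import android.arch.lifecycle.Lifecycle;
import android.arch.lifecycle.LifecycleObserver;
import android.arch.lifecycle.OnLifecycleEvent;

// A hypothetical location listener that manages itself based on the owner's lifecycle.
public class MyLocationListener implements LifecycleObserver {

    @OnLifecycleEvent(Lifecycle.Event.ON_START)
    public void startListening() {
        // Connect to the location service only while the owner is at least STARTED.
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
    public void stopListening() {
        // Disconnect so no updates (or leaks) occur after the owner stops.
    }
}
```

Because AppCompatActivity is now a LifecycleOwner (via the Support Library integration mentioned above), wiring this up is a single getLifecycle().addObserver(new MyLocationListener()) call in onCreate().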

LiveData

LiveData is a lifecycle-aware observable that holds data and provides updates. Your UI code subscribes to changes and provides LiveData a reference to its Lifecycle. Because LiveData is lifecycle-aware, it provides updates when its Lifecycle is started or resumed, but stops providing updates when the LifecycleOwner is destroyed. LiveData is a simple way to build reactive UIs that are safer and more performant.
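
In code, the pattern is simply: hold a LiveData, observe it with the Activity (or Fragment) as the LifecycleOwner, and let the library manage subscription and teardown. A minimal sketch:

```java
import android.arch.lifecycle.MutableLiveData;
import android.arch.lifecycle.Observer;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;

public class GreetingActivity extends AppCompatActivity {

    // Holds a value and notifies active observers when it changes.
    private final MutableLiveData<String> greeting = new MutableLiveData<>();

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Updates are delivered only while this activity is started or resumed,
        // and the observer is removed automatically when the activity is destroyed.
        greeting.observe(this, new Observer<String>() {
            @Override
            public void onChanged(String value) {
                Log.d("LiveDataDemo", "greeting is now: " + value);
            }
        });

        greeting.setValue("Hello, Architecture Components!");
    }
}
```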

ViewModel

ViewModel separates ownership of view data and logic from lifecycle-bound entities like Activities and Fragments. A ViewModel is retained until its associated Activity or Fragment is disposed of forever - that means view data survives events like a Fragment being recreated due to rotation. ViewModels not only eliminate common lifecycle issues, they help build UIs that are more modular and easier to test.
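
A minimal sketch of the idea (the UserViewModel name and its field are made up): the ViewModel owns the data, and the UI just observes it.

```java
import android.arch.lifecycle.LiveData;
import android.arch.lifecycle.MutableLiveData;
import android.arch.lifecycle.ViewModel;

// Holds view data outside the Activity/Fragment so it survives rotation.
public class UserViewModel extends ViewModel {

    private final MutableLiveData<String> userName = new MutableLiveData<>();

    public LiveData<String> getUserName() {
        return userName;  // Expose as read-only LiveData to the UI.
    }

    public void setUserName(String name) {
        userName.setValue(name);
    }
}
```

In an Activity you retrieve it with ViewModelProviders.of(this).get(UserViewModel.class); rotating the device returns the same instance rather than recreating it.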

Room

Nearly all apps need to store data locally. While Android has bundled SQLite with the platform since version 1, using it directly can be painful. Room is a simple object-mapping layer that provides the full power of SQLite with less boilerplate. Features like compile-time query verification and built-in migration make it easier to build a robust persistence layer, while integration with LiveData lets Room provide database-backed, lifecycle-aware observables. Room blends simplicity, power and robustness for managing local storage, and we hope you give it a try.
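
Here's a compact sketch of the three annotated pieces Room works with. The User entity and query are placeholders, but the shape (entity, DAO, database) is the standard pattern, and the DAO query returns LiveData so the UI can observe the table directly:

```java
import android.arch.lifecycle.LiveData;
import android.arch.persistence.room.Dao;
import android.arch.persistence.room.Database;
import android.arch.persistence.room.Entity;
import android.arch.persistence.room.Insert;
import android.arch.persistence.room.PrimaryKey;
import android.arch.persistence.room.Query;
import android.arch.persistence.room.Room;
import android.arch.persistence.room.RoomDatabase;
import android.content.Context;

import java.util.List;

@Entity
public class User {
    @PrimaryKey
    public long id;
    public String name;
}

@Dao
interface UserDao {
    // Verified against the schema at compile time.
    @Query("SELECT * FROM User WHERE name LIKE :search")
    LiveData<List<User>> findByName(String search);

    @Insert
    void insertAll(User... users);
}

@Database(entities = {User.class}, version = 1)
abstract class AppDatabase extends RoomDatabase {
    abstract UserDao userDao();

    static AppDatabase create(Context context) {
        return Room.databaseBuilder(context, AppDatabase.class, "users.db").build();
    }
}
```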

Guide to App Architecture and more

Last but not least, we created a Guide to App Architecture that provides core principles applicable to all developers, and specific guidance on using Architecture Components together. Because we've heard from you that clear and consistent guidance is important, today we're updating developer documentation to point to Architecture Components where appropriate. We also have a rich set of videos, codelabs and sample apps available at the Architecture Components site, with more to come.

Watch this space

Although the first set of Architecture Components is now stable, we know there's more work to do. Over the last few months, we've listened to your feedback and made improvements. We also recently launched a new Architecture Component, PagedList, to alpha, in response to your feedback that handling large datasets with RecyclerView is too difficult. This is just the beginning - we have more major components under development that we're looking to announce in the upcoming months.

Our hope with Architecture Components is to free developers to focus on providing unique new experiences for mobile devices. We're glad we can finally announce them as stable for production use. We'd like to thank the community, which has given such great feedback along the way, and we look forward to continuing the discussion in the comments of this post. Finally, for those of you who've been waiting for this stable launch, get started today.

Google Cloud Dedicated Interconnect gets global routing, more locations, and is GA



We have major updates to Dedicated Interconnect, which helps enable fast private connections to Google Cloud Platform (GCP) from numerous facilities across the globe, so you can extend your on-premises network to your GCP Virtual Private Cloud (VPC) network. With faster private connections offered by Dedicated Interconnect, you can build applications that span on-premises infrastructure and GCP without compromising privacy or performance.

Dedicated Interconnect is now GA and ready for production-grade workloads, and covered by a service level agreement. Dedicated Interconnect can be configured to offer a 99.9% or a 99.99% uptime SLA. Please see the Dedicated Interconnect documentation for details on how to achieve these SLAs.

Going global with the help of Cloud Router


Dedicated Interconnect now supports global routing for Cloud Router, a new feature that allows subnets in GCP to be accessible from any on-premises network through the Google network. This feature adds a new flag in Cloud Router that allows the network to advertise all the subnets in a project. For example, a connection from your on-premises data center in Chicago to GCP’s Dedicated Interconnect location in Chicago now gives you access to all subnets running in all GCP regions around the globe, including those in the Americas, Asia and Europe. We believe this functionality is unique among leading cloud providers. This feature is generally available, and you can learn more about it in the Cloud Router documentation.
Using Cloud Router Global Routing to connect on-premises workloads via "Customer Peering Router" with GCP workloads in regions anywhere in the world.

Dedicated Interconnect is your new neighbor


Dedicated Interconnect is also available from four new locations: Mumbai, Munich, Montreal and Atlanta. This means you can connect to Google’s network from almost anywhere in the world. For a full list of locations, visit the Dedicated Interconnect locations page. Please note, in the graphic below, many locations (blue dots) offer service from more than one facility.
In addition to those four new Google locations, we’re also working with Equinix to offer Dedicated Interconnect access in multiple markets across the globe, ensuring that no matter where you are, there's a Dedicated Interconnect connection close to you.
"By providing direct access to Google Cloud Dedicated Interconnect, we are helping enterprises leverage Google’s network  the largest in the world and accelerate their hybrid cloud strategies globally. Dedicated Interconnect offered in collaboration with Equinix enables customers to easily build the cloud of their choice with dedicated, low-latency connections and SLAs that enterprise customers have come to expect from hybrid cloud architectures." 
Ryan Mallory, Vice President, Global Solutions Enablement, Equinix

Here at Google Cloud, we’re really excited about Dedicated Interconnect, including the 99.99% uptime SLA, four new locations, and Cloud Router Global Routing. Dedicated Interconnect will make it easier for more businesses to connect to Google Cloud, and we can’t wait to see the next generation of enterprise workloads that Dedicated Interconnect makes possible.

If you’d like to learn which connection option is right for you, more about pricing and a whole lot more, please take a look at the Interconnect product page.

Welcoming 25 mentor organizations for Google Code-in 2017

We’re thrilled to introduce 25 open source organizations that are participating in Google Code-in 2017. The contest, now in its eighth year, offers 13-17 year old pre-university students an opportunity to learn and practice their skills while contributing to open source projects.

Google Code-in officially starts for students on November 28. Students are encouraged to learn about the participating organizations ahead of time and can get started by clicking on the links below:

  • Apertium: rule-based machine translation platform
  • BRL-CAD: computer graphics, 2D and 3D geometry modeling and computer-aided design (CAD)
  • Catrobat: visual programming for creating mobile games and animations
  • CCExtractor: open source tools for subtitle generation
  • CloudCV: building platforms for reproducible AI research
  • coala: a unified interface for linting and fixing code, regardless of the programming languages used
  • Drupal: content management platform
  • FOSSASIA: developing communities across all ages and borders to form a better future with Open Technologies and ICT
  • Haiku: operating system specifically targeting personal computing
  • JBoss Community: a community of projects around JBoss Middleware
  • LibreHealth: aiming to bring open source healthcare IT to all of humanity
  • Liquid Galaxy: an interactive, panoramic and immersive visualization tool
  • MetaBrainz: builds community maintained databases
  • Mifos Initiative: transforming the delivery of financial services to the poor and the unbanked
  • MovingBlocks: a Minecraft-inspired open source game
  • OpenMRS: open source medical records system for the world
  • OpenWISP: build and manage low cost networks such as public wifi
  • OSGeo: building open source geospatial tools
  • Sugar Labs: learning platform and activities for elementary education
  • SCoRe: research lab seeking sustainable solutions for problems faced by developing countries
  • Systers: community for women involved in technical aspects of computing
  • Ubuntu: an open source operating system
  • Wikimedia: non-profit foundation dedicated to bringing free content to the world, operating Wikipedia
  • XWiki: a web platform for developing collaborative applications using the wiki paradigm
  • Zulip: powerful, threaded open source group chat with apps for every major platform

These mentor organizations are hard at work creating thousands of tasks for students to work on, including code, documentation, user interface, quality assurance, outreach, research and training tasks. The contest officially starts for students on Tuesday, November 28th at 9:00am PST.

You can learn more about Google Code-in on the contest site where you’ll find Contest Rules, Frequently Asked Questions and Important Dates. There you’ll also find flyers and other helpful information including the Getting Started Guide. Our discussion mailing list is a great way to talk with other students, mentors and organization administrators about the contest.

By Josh Simmons, Google Open Source

Smarter attribution for everyone

In May, we announced Google Attribution, a new free product to help marketers measure the impact of their marketing across devices and across channels. Advertisers participating in our early tests are seeing great results. Starting today, we’re expanding the Attribution beta to hundreds of advertisers.

We built Google Attribution to bring smarter performance measurement to all advertisers, and to solve the common problems with other attribution solutions.

Google Attribution is:
  • Easy to set up and use: While some attribution solutions can take months to set up, Google Attribution can access the marketing data you need from tools like AdWords and Google Analytics with just a few clicks.
  • Cross-device: Today’s marketers need measurement tools that don't lose track of the customer journey when people switch between devices. Google Attribution uses Google’s device graph to measure the cross-device customer journey and deliver insights into cross-device behavior, all while protecting individual user privacy.
  • Cross-channel: With your marketing spread out across so many channels (like search, display, and email), it can be difficult to determine how each channel is working and which ones are truly driving sales. Google Attribution brings together data across channels so you can get a more comprehensive view of your performance.
  • Easy to take action: Attribution insights are only valuable if you can use them to improve your marketing. Integrations with tools like AdWords make it easy to update your bids or move budget between channels based on the new, more accurate performance data.


Results from Google Attribution beta customers



Last April, we shared that for AdWords advertisers, data-driven attribution typically delivers more conversions than last-click attribution at a similar cost per conversion. This shows that data-driven attribution is a better way to measure and optimize the performance of search and shopping ads.

Today we’re pleased to share that early results from Google Attribution beta customers show that data-driven attribution helps marketers improve their performance across channels.

HelloFresh, a meal delivery service, grew conversions by 10% after adopting Google Attribution. By using data-driven attribution to measure across channels like search, display, and email, HelloFresh gets a more accurate measurement of the number of conversions each channel is driving. And because Google Attribution is integrated with AdWords, HelloFresh can easily use this more accurate conversion data to optimize its bidding.

"With Google Attribution, we have been able to automatically integrate cross-channel bidding throughout our AdWords search campaigns. This has resulted in a seamless change in optimization mindset as we are now able to see keyword and query performance more holistically rather than inadvertently focusing on only last-click events.
- Karl Villanueva Head of Paid Search & Display, HelloFresh

Pixers, an online marketplace, is also seeing positive results, including increased conversions. Google Attribution allows Pixers to more confidently evaluate the performance of their AdWords campaigns and adopt new features that improve performance.

"By using Google Attribution data we have finally eliminated guesswork from evaluating the performance of campaigns we're running, including shopping and re-marketing. The integration with AdWords also enabled us to gradually roll-out smart bidding strategies across increasing number of campaigns. The results have significantly exceeded expectations as we managed to cut the CPA while obtaining larger conversion volumes."
- Arkadiusz Kuna, SEM & Remarketing Manager at Pixers

Google Attribution can also help brands get a better understanding of their customers’ path to purchase. eDreams ODIGEO, an online travel company, knows that people don’t usually book flights or hotels after a single interaction with their brand. It often requires multiple interactions, with each touchpoint having a different impact.

“Some channels open the customer journey and bring new customers, whereas other channels are finishers and contribute to closing sales. Google Attribution is helping us to understand the added value of each interaction. It enhances our ability to have a holistic view of how different marketing activities contribute to success.”
- Manuel Bruscas, Director of Marketing Analytics & Insights, eDreams ODIGEO


Next steps



In the coming months we’ll invite more advertisers to use Google Attribution. If you’re interested in receiving a notification when the product is available for you, please sign up here.

Don’t forget, even before adopting Google Attribution, you can get started with smarter measurement for your AdWords campaigns. With attribution in AdWords you can move from last-click to a better attribution model, like data-driven attribution, that allows you to more accurately measure and optimize search and shopping ads.

New ways to manage sensitive data with the Data Loss Prevention API



If your organization has sensitive and regulated data, you know how much of a challenge it can be to keep it secure and private. The Data Loss Prevention (DLP) API, which went beta in March, can help you quickly find and protect over 50 types of sensitive data such as credit card numbers, names and national ID numbers. And today, we’re announcing several new ways to help protect sensitive data with the DLP API, including redaction, masking and tokenization.

These new data de-identification capabilities help you to work with sensitive information while reducing the risk of sensitive data being inadvertently revealed. If, like many enterprises, you follow the principle of least privilege or need-to-know access to data (only use or expose the minimum data required for an approved business process), the DLP API can help you enforce these principles in production applications and data workflows. And because it’s an API, the service can be pointed at virtually any data source or storage system. The DLP API offers native support and scale for scanning large datasets in Google Cloud Storage, Datastore and BigQuery.
Google Cloud DLP API enables our security solutions to scan and classify documents and images from multiple cloud data stores and email sources. This allows us to offer our customers critical security features, such as classification and redaction, which are important for managing data and mitigating risk. Google’s intelligent DLP service enables us to differentiate our offerings and grow our business by delivering high quality results to our customers.  
 Sateesh Narahari, VP of Products, Managed Methods
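
To give a sense of the classification side described above, here's a minimal sketch of inspecting a string for two built-in infoTypes with the DLP Java client. The project ID and sample text are placeholders, and the sketch is written against the later v2 client surface rather than the v2beta1 API that shipped with the beta:

```java
import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.ContentItem;
import com.google.privacy.dlp.v2.Finding;
import com.google.privacy.dlp.v2.InfoType;
import com.google.privacy.dlp.v2.InspectConfig;
import com.google.privacy.dlp.v2.InspectContentRequest;
import com.google.privacy.dlp.v2.InspectContentResponse;
import com.google.privacy.dlp.v2.ProjectName;

public class DlpInspectExample {
  public static void main(String[] args) throws Exception {
    try (DlpServiceClient dlp = DlpServiceClient.create()) {
      // Look for two of the built-in detectors; there are 50+ to choose from.
      InspectConfig inspectConfig =
          InspectConfig.newBuilder()
              .addInfoTypes(InfoType.newBuilder().setName("EMAIL_ADDRESS").build())
              .addInfoTypes(InfoType.newBuilder().setName("PHONE_NUMBER").build())
              .build();

      ContentItem item =
          ContentItem.newBuilder()
              .setValue("You can reach Alice at alice@example.com or (555) 253-0000.")
              .build();

      InspectContentRequest request =
          InspectContentRequest.newBuilder()
              .setParent(ProjectName.of("my-project").toString())
              .setInspectConfig(inspectConfig)
              .setItem(item)
              .build();

      InspectContentResponse response = dlp.inspectContent(request);
      for (Finding finding : response.getResult().getFindingsList()) {
        System.out.println(finding.getInfoType().getName() + " (" + finding.getLikelihood() + ")");
      }
    }
  }
}
```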

New de-identification tools in DLP API

De-identifying data removes identifying information from a dataset, making it more difficult to associate the remaining data with an individual and reducing the risk of exposure.
With the DLP API, you can classify and mask sensitive elements in both structured data and unstructured data.


The DLP API now supports a variety of new data transformation options:

Redaction and suppression 
Redaction and suppression remove entire values or entire records from a dataset. For example, if a support agent working in a customer support UI doesn’t need to see identifying details to troubleshoot the problem, you might decide to redact those values. Or, if you’re analyzing large population trends, you may decide to suppress records that contain unique demographics or rare attributes, since these distinguishing characteristics may pose a greater risk.
The DLP API identifies and redacts a name, social security number, telephone number and email address
Partial masking 
Partial masking obscures part of a sensitive attribute, for example, the last 7 digits of a US telephone number. In this example, a 10-digit phone number retains only the area code.
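
As a rough sketch of how partial masking is expressed through the API, the example below masks the trailing characters of any detected phone number using the Java client's CharacterMaskConfig. The project ID and sample text are placeholders, and the same DeidentifyConfig structure is where the other transformations described on this page (redaction, format-preserving tokens and so on) plug in:

```java
import com.google.cloud.dlp.v2.DlpServiceClient;
import com.google.privacy.dlp.v2.CharacterMaskConfig;
import com.google.privacy.dlp.v2.ContentItem;
import com.google.privacy.dlp.v2.DeidentifyConfig;
import com.google.privacy.dlp.v2.DeidentifyContentRequest;
import com.google.privacy.dlp.v2.DeidentifyContentResponse;
import com.google.privacy.dlp.v2.InfoType;
import com.google.privacy.dlp.v2.InfoTypeTransformations;
import com.google.privacy.dlp.v2.InfoTypeTransformations.InfoTypeTransformation;
import com.google.privacy.dlp.v2.InspectConfig;
import com.google.privacy.dlp.v2.PrimitiveTransformation;
import com.google.privacy.dlp.v2.ProjectName;

public class DlpMaskExample {
  public static void main(String[] args) throws Exception {
    try (DlpServiceClient dlp = DlpServiceClient.create()) {
      InfoType phoneNumber = InfoType.newBuilder().setName("PHONE_NUMBER").build();

      // Mask the last 7 characters of any detected phone number with '#'.
      CharacterMaskConfig maskConfig =
          CharacterMaskConfig.newBuilder()
              .setMaskingCharacter("#")
              .setNumberToMask(7)
              .setReverseOrder(true)  // Mask from the end, keeping the leading digits.
              .build();

      PrimitiveTransformation masking =
          PrimitiveTransformation.newBuilder().setCharacterMaskConfig(maskConfig).build();

      DeidentifyConfig deidentifyConfig =
          DeidentifyConfig.newBuilder()
              .setInfoTypeTransformations(
                  InfoTypeTransformations.newBuilder()
                      .addTransformations(
                          InfoTypeTransformation.newBuilder()
                              .addInfoTypes(phoneNumber)
                              .setPrimitiveTransformation(masking)))
              .build();

      DeidentifyContentRequest request =
          DeidentifyContentRequest.newBuilder()
              .setParent(ProjectName.of("my-project").toString())
              .setInspectConfig(InspectConfig.newBuilder().addInfoTypes(phoneNumber).build())
              .setDeidentifyConfig(deidentifyConfig)
              .setItem(ContentItem.newBuilder().setValue("My number is (206) 555-0123").build())
              .build();

      DeidentifyContentResponse response = dlp.deidentifyContent(request);
      // Prints the text with the last 7 characters of the detected number replaced by '#'.
      System.out.println(response.getItem().getValue());
    }
  }
}
```
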
Tokenization or secure hashing
Tokenization, also called secure hashing, is an algorithmic transformation that replaces a direct identifier with a pseudonym or token. This can be very useful in cases where you need to retain a record identifier or join data but don’t want to reveal the sensitive underlying elements. Tokens are key-based and can be configured to be reversible (using the same key) or non-reversible (by not retaining the key).

The DLP API supports the following token types:
  • Format-Preserving Encryption - a token of the same length and character set.




  • Secure, key-based hashes - a token that's a 32-byte hexadecimal string generated using a data encryption key.



Dynamic data masking
The DLP API can apply various de-identification and masking techniques in real time, which is sometimes referred to as "Dynamic Data Masking" (DDM). This can be useful if you don't want to alter your underlying data, but want to mask it when viewed by certain employees or users. For example, you could mask data when it's presented in a UI, but require special privileges or generate additional audit logs if someone needs to view the underlying personally identifiable information (PII). This way, users aren't exposed to the identifying data by default, but only when business needs dictate.
With the DLP API, you can prevent users from seeing sensitive data in real time

Bucketing, K-anonymity and L-Diversity
The DLP API offers even more methods that can help you transform and better understand your data. To learn more about bucketing, K-anonymity and L-Diversity techniques, check out the docs and how-to guides.

Get started with the DLP API

With these new transformation capabilities, the DLP API can help you classify and protect sensitive data no matter where it's stored. As with any tool designed to assist with data discovery and classification, there's no certainty that it will be 100% effective in meeting your business needs or obligations. To get started with the DLP API today, take a look at the quickstart guides.