
New Singapore GCP region – open now



The Singapore region is now open as asia-southeast1. This is our first Google Cloud Platform (GCP) region in Southeast Asia (and our third region in Asia), and it promises to significantly improve latency for GCP customers and end users in the area.

Customers are loving GCP in Southeast Asia; the total number of paid GCP customers in Singapore has increased by 100% over the last 12 months.

And the experience for GCP customers in Southeast Asia is better than ever too; performance testing shows 51% to 98% reductions in round-trip time (RTT) latency when serving customers in Singapore, Jakarta, Kuala Lumpur and Bangkok compared to using other GCP regions in Taiwan or Tokyo.

Customers with a global footprint like BBM Messenger, Carousell and Go-Jek have been looking forward to the launch of the Singapore region.
"We are excited to be able to deploy into the GCP Singapore region, as it will allow us to offer our services closer to BBM Messenger's key markets. Coupled with Google's global load balancers and extensive global network, we expect to be able to provide a low latency, high-speed experience for our users globally. During our POCs, we found that GCP outperformed most vendors on key metrics such as disk I/O and network performance on like-for-like benchmarks. With sustained usage discounts and continuous support from Google's PSO and account team, we are excited to make GCP the foundation for the next generation of BBM consumer services." — Matthew Talbot, CEO of Creative Media Works, the company that runs BBM Messenger Consumer globally
"As one of the largest and fastest growing mobile classifieds marketplaces in the world, Carousell needed a platform that was agile enough for a startup, but could scale quickly as we expand. We found all these qualities in the Google Cloud Platform (GCP), which gives us a level of control over our systems and environment that we didn't find elsewhere, along with access to cutting-edge technologies. We're thrilled that GCP is launching in Singapore, and look forward to being inspired by the way Google does things at scale." — Jordan Dea-Mattson, Vice President Engineering, Carousell

"We are extremely pleased with the performance of GCP, and we are excited about the opportunities opening in Indonesia and other markets, and making use of the Singapore Cloud Region. The outcomes we’ve achieved in scaling, stability and other areas have proven how fantastic it is to have Google and GCP among our key service partners." — Ajey Gore, CTO, Go-Jek
We’ve launched Singapore with two zones and the following services:
In addition, you can combine any of the services you deploy in Singapore with other GCP services around the world such as DLP, Spanner and BigQuery.

Singapore Multi-Tier Cloud Security certification

Google Cloud is pleased to announce that, having completed the required assessment, it has been recommended by an approved certification body for Level 3 certification under Singapore's Multi-Tier Cloud Security (MTCS) standard (SS 584:2015+C1:2016). Formal approval of Google Cloud's certification is expected in the coming months. With this certification, organizations that require compliance with the strictest levels of the MTCS standard can confidently adopt Google Cloud services and host their data on Google Cloud's infrastructure.

Next steps

If you’re looking for help to understand how to deploy GCP, please contact local partners Sakura Sky, CloudCover, Cloud Comrade and Powerupcloud.

For more details on the Singapore region, please visit our Singapore region portal, where you'll get access to free resources, whitepapers, an on-demand video series called "Cloud On-Air" and more to help you get started on GCP. Our locations page provides updates on other regions coming online soon. Give us a shout to request early access to new regions and help us prioritize what we build next.

Getting started with Shared VPC



Large organizations with multiple cloud projects value the ability to share physical resources, while maintaining logical separation between groups or departments. At Google Cloud Next '17, we announced Shared VPC, which allows you to configure and centrally manage one or more virtual networks across multiple projects in your Organization, the top level Cloud Identity Access Management (Cloud IAM) resource in the Google Cloud Platform (GCP) cloud resource hierarchy.

With Shared VPC, you can centrally manage the creation of routes, firewalls, subnet IP ranges, VPN connections, etc. for the entire organization, and at the same time allow developers to own billing, quotas, IAM permissions and autonomously operate their development projects. Shared VPC is now generally available, so let’s look at how it works and how best to configure it.

How does Shared VPC work?

We implemented Shared VPC entirely in the management control plane, transparent to the data plane of the virtual network. In the control plane, the centrally managed project is enabled as a host project, allowing it to contain one or more shared virtual networks. After configuring the necessary Cloud IAM permissions, you can then create virtual machines in shared virtual networks by linking one or more service projects to the host project. The advantage of sharing virtual networks in this way is being able to control access to critical network resources such as firewalls and centrally manage them with less overhead.
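As a sketch, the host/service relationship can be configured with a couple of CLI calls. The project IDs below are hypothetical, and the commands are shown under the `gcloud beta compute xpn` command group, where the feature surfaced at launch; the exact command names may differ in later releases.

```shell
# Hypothetical project IDs -- substitute your own.
# Enable the centrally managed project as a Shared VPC (XPN) host project.
gcloud beta compute xpn enable host-project-id

# Link a service project to the host so its VMs can attach to the
# shared virtual networks.
gcloud beta compute xpn associated-projects add service-project-id \
    --host-project host-project-id

# Inspect which service projects are currently attached to the host.
gcloud beta compute xpn list-associated-resources host-project-id
```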

Further, with shared virtual networks, virtual machines benefit from the same network throughput caps and VM-to-VM latency as when they're not on shared networks. This is also the case for VM-to-VPN and load balancer-to-VM communication.

To illustrate, consider a single externally facing web application server that uses services such as personalization, recommendation and analytics, all internally available, but built by different development teams.

Example topology of a Shared VPC setup.

Let’s look at the recommended patterns when designing such a virtual network in your organization.

Shared VPC administrator role

The network administrator of the shared host project should also have the XPN administrator role in the organization. This allows a single central group to configure new service projects that attach to the shared VPC host project, while also allowing them to set up individual subnetworks in the shared network and configure IP ranges, for use by administrators of specific service projects. Typically, these administrators would have the InstanceAdmin role on the service project.

Subnetworks USE permission

When connecting a service project to the shared network, we recommend you grant the service project administrators the compute.subnetworks.use permission (through the NetworkUser role) on one or more subnetworks per region, such that each subnetwork is used by a single service project.

This will help ensure cleaner separation of usage of subnetworks by different teams in your organization. In the future, you may choose to associate specific network policies for each subnetwork based on which service project is using it.
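As a sketch, granting that permission might look like the following, assuming a hypothetical subnetwork, region and admin group; the subnet-level `add-iam-policy-binding` subcommand was in beta at the time of writing.

```shell
# Hypothetical subnet, region and group. Grant a service project's
# admins the NetworkUser role on exactly one subnetwork, scoping the
# VMs they create to that IP range.
gcloud beta compute networks subnets add-iam-policy-binding shared-subnet-a \
    --region us-central1 \
    --member "group:team-a-admins@example.com" \
    --role "roles/compute.networkUser"
```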

Subnetwork IP ranges

When configuring subnetwork IP ranges in the same or different regions, allow sufficient IP space between subnetworks for future growth. GCP allows you to expand an existing subnetwork without affecting IP addresses owned by existing VMs in the virtual network and with zero downtime.
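For example, an existing range can be widened in place. This sketch assumes a hypothetical subnetwork named shared-subnet-a whose primary range is being expanded from a /24 to a /20:

```shell
# Widen the subnetwork's primary range to a /20. Existing VM addresses
# are unaffected because the old range is a subset of the new one, and
# the operation requires no downtime.
gcloud compute networks subnets expand-ip-range shared-subnet-a \
    --region us-central1 \
    --prefix-length 20
```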

Shared VPC and folders

When using folders to manage projects in your organization, place all host and service projects for a given Shared VPC setup within the same folder: the host project's parent folder should sit in the parent hierarchy of every service project, so that it contains all the projects in the setup. When associating service projects with a host project, ensure that these projects will not later move to other folders while still linked to the host project.


Control external access

In order to control and restrict which VMs can have public IPs and thus access to the internet, you can now set up an organization policy that disables external IP access for VMs. Do this only for projects that should have only internal access, e.g. the personalization, recommendation and analytics services in the example above.
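A sketch of such a policy, assuming a hypothetical project ID; the `compute.vmExternalIpAccess` list constraint is applied via a policy file:

```shell
# Deny external IPs on every VM in an internal-only project.
cat > no-external-ip.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF

# Hypothetical project ID -- e.g. the recommendation service above.
gcloud beta resource-manager org-policies set-policy no-external-ip.yaml \
    --project recommendation-project
```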

As you can see, Shared VPC is a powerful tool that can make GCP more flexible and manageable for your organization. To learn more about Shared VPC, check out the documentation.

Spinnaker 1.0: a continuous delivery platform for cloud



At Google we deploy a lot of code: tens of thousands of deployments a day, to thousands of services, seven of which have more than a billion users each around the globe. Along the way we've learned some best practices about how to deploy software at velocity -- things like automated releases, immutable infrastructure, gradual rollouts and fast rollbacks.

Back in 2014, we started working with the Netflix team that created Spinnaker, and saw in it a release management platform that embodied many of our first principles for safe, frequent and reliable releases. Excited by its potential, we partnered with Netflix to bring Spinnaker to the public, and they open-sourced it in November 2015. Since then, the Spinnaker community has grown to include dozens of organizations including Microsoft, Oracle, Target, Veritas, Schibsted, Armory and Kenzan, to name a few.

Today we’re happy to announce the release of Spinnaker 1.0, an open-source multi-cloud continuous delivery platform used in production at companies like Netflix, Waze, Target, and Cloudera, plus a new open-source command line interface (CLI) tool called halyard that makes it easy to deploy Spinnaker itself. Read on to learn what Spinnaker can do for your own software development processes.

Why Spinnaker?

Let’s look at a few of the features and new updates that make Spinnaker a great release management solution for enterprises:

Open-source, multi-cloud deployments
Here at Google Cloud Platform (GCP), we believe in an open cloud. Spinnaker, including its rich UI dashboard, is 100% open-source. You can install it locally, on-prem, or to any cloud platform, running either on a virtual machine (VM) or Kubernetes.

Spinnaker streamlines the deployment process by decoupling your release pipeline from your target cloud provider, reducing the complexity of moving from one platform to another or deploying the same application to multiple clouds.

It has built-in support for Google Compute Engine, Google Container Engine, Google App Engine, AWS EC2, Microsoft Azure, Kubernetes and OpenStack, with more platforms added by the community every year; support for Oracle Bare Metal and DC/OS is coming soon.

Whether you’re releasing to multiple clouds or preventing vendor lock-in, Spinnaker helps you deploy your application based on what’s best for your business.

Automated releases
In Spinnaker, deployments are orchestrated using custom release pipelines, the stages of which can consist of almost anything you want -- integration or system tests, spinning a server group up or down, manual approvals, waiting a period of time, or running a custom script or Jenkins job.

Spinnaker integrates seamlessly with your existing continuous integration (CI) workflows. You can trigger pipelines from git, Jenkins, Travis CI, Docker registries, on a cron-like schedule, or even other pipelines.

Best-practice deployment strategies
Out-of-the-box, Spinnaker supports sophisticated deployment strategies like release canaries, multiple staging environments, red/black (a.k.a. blue/green) deployments, traffic splitting and easy rollbacks.

This is enabled in part by Spinnaker’s use of immutable infrastructure in the cloud, where changes to your application trigger a redeployment of your entire server fleet. Compare this to the traditional approach of configuring updates to running machines, which results in slower, riskier rollouts and hard-to-debug configuration-drift issues.

With Spinnaker, you simply choose the deployment strategy you want to use for each environment, e.g. red/black for staging, rolling red/black for production, and it orchestrates the dozens of steps necessary under-the-hood. You don’t have to write your own deployment tool or maintain a complex web of Jenkins scripts to have enterprise-grade rollouts.

Role-based authorizations and permissions
Large companies often adopt Spinnaker across multiple product areas managed by a central DevOps team. For admins that need role-based access control for a project or account, Spinnaker supports multiple authentication and authorization options, including OAuth, SAML, LDAP, X.509 certs, GitHub teams, Azure groups or Google Groups.

You can also apply permissions to manual judgements, a Spinnaker stage which requires a person’s approval before proceeding with the pipeline, ensuring that a release can’t happen without the right people signing off.

Simplified installation and management with halyard
With the release of Spinnaker 1.0, we’re also announcing the launch of a new CLI tool, halyard, that helps admins more easily install, configure and upgrade a production-ready instance of Spinnaker.

Prior to halyard and Spinnaker 1.0, admins had to manage each of the microservices that make up Spinnaker individually. Starting with 1.0, all new Spinnaker releases are individually versioned and follow semantic versioning. With halyard, upgrading to the latest Spinnaker release is as simple as running a CLI command.
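A sketch of that upgrade flow with halyard; the version number below is illustrative only.

```shell
# List the Spinnaker releases halyard knows about.
hal version list

# Pin the release to deploy (version shown is illustrative).
hal config version edit --version 1.0.0

# Install, or upgrade the running deployment, to match the pinned version.
hal deploy apply
```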

Getting started

Try out Spinnaker and make your deployments fast, safe, and, dare we say, boring.

For more info on Spinnaker, visit the new spinnaker.io website and learn how to get started.

Or if you’re ready to try Spinnaker right now, click here to install and run Spinnaker with Google’s click-to-deploy option in the Cloud Launcher Marketplace.

For questions, feedback, or to engage more with the Spinnaker community, you can find us on the Spinnaker Slack channel, submit issues to the Spinnaker GitHub repository, or ask questions on Stack Overflow using the “spinnaker” tag.


Compute Engine updates bring Skylake GA, extended memory and more VM flexibility



We’re pleased to announce several updates to Google Compute Engine that give you more powerful and flexible instances. Google Cloud is the first and only public cloud to deliver Intel’s next-generation Xeon server processor (codenamed Skylake), and starting today, it’s generally available (GA). In addition, we’ve made several other enhancements to Compute Engine:
  • Increased total amount of memory per instance by removing memory caps
  • Increased variety of machine shapes
  • Simple process to select a baseline processor type
  • Availability of 64-core processors in all regions
  • Broadwell CPUs available in all regions
These improvements help you get the performance from Compute Engine that you need, in the configuration you want.

Skylake is generally available

With up to 64 vCPUs and 455GB of RAM, Skylake-based instances support a wide range of compute-intensive workloads, including scientific modeling, genomic research, 3D rendering, data analytics and engineering simulations. Since we first launched Skylake for Compute Engine in February, GCP customers have run millions of hours of compute on Skylake VMs, seeing increased performance for a variety of applications.

With this GA release, you can create new VMs with Skylake across Compute Engine's complete family of VM instance types: standard, highmem, highcpu and Custom Machine Types, as well as Preemptible VMs. You can provision Skylake VMs using the Cloud Console, the gcloud command line tool or our APIs. Skylake is available in three GCP regions: Western US, Western Europe and Eastern Asia Pacific. Customer demand for Skylake has been very strong; we have more capacity arriving every day, and support for additional regions and zones is coming in the near future.

To help you experience Skylake, we're offering Skylake VMs at no additional cost for a limited time. After a 60-day promotional period, Skylake VMs will be priced at a 6-10% premium depending on the specific machine configuration. Given the significant performance increase over previous generations of Intel processors, this continues our record of providing a leading price-performance cloud computing platform.

CPU platform selector

Google Cloud Platform (GCP) regions and zones are equipped with a diverse set of Intel Xeon-based host machines, with CPUs including Sandy Bridge, Ivy Bridge, Haswell, Broadwell and now Skylake microarchitectures. In addition to fundamental systems features like clock speed and memory access time, these CPU platforms also support unique features like AVX-2 and AVX-512.

Now, with our Minimum CPU Platform feature, you can select a specific CPU platform for the VMs you create in a given zone, and Compute Engine will always schedule your VM to that CPU family or a later one. You can assign a minimum CPU platform to a VM from the Cloud Console, the Google Cloud SDK or the API, with full flexibility to choose the CPU features that work best for your applications.

Enabling this enhanced flexibility also allows us to now offer Broadwell CPU support in every region, as well as the ability to create VMs up to 64 vCPUs in size.
In the gcloud command line tool, use the instances create subcommand, followed by the --min-cpu-platform flag to specify a minimum CPU platform.

For example, the following command creates an n1-standard-1 instance with the Intel Broadwell (or later) CPU platform.

gcloud beta compute instances create example-instance \
    --machine-type n1-standard-1 --min-cpu-platform "Intel Broadwell"

To see which CPUs are available in different GCP zones, check our Available Regions and Zones page. For complete instructions for using --min-cpu-platform, please refer to our documentation.

Extended memory, where you want it

Compute Engine Custom Machine Types allow you to create virtual machines with the vCPU and memory ratios that fit your application needs. Now, with extended memory, we've removed the memory-per-vCPU restriction (previously capped at 6.5GB per vCPU), up to a maximum of 455GB of memory per VM instance. This is great news for applications like in-memory databases (e.g. Memcached and Redis), high-performance relational databases (e.g. Microsoft SQL Server) and NoSQL databases (e.g. MongoDB) that benefit from flexible memory configurations to achieve optimum price-performance. To learn more about pricing for extended memory, please take a look at our pricing page.

You can create a VM with extended memory using the Cloud Console, Cloud SDK or APIs.


For example, this command creates a 2 vCPU, 15GB memory instance, including 2GB of extended memory beyond the standard 6.5GB-per-vCPU limit:

gcloud beta compute instances create example-instance \
    --custom-cpu 2 --custom-memory 15 --custom-extensions

Complete instructions for using extended memory are available in our documentation.

Get started today

The minimum CPU platform selector, extended memory up to 455GB, availability of 64-core machines, Broadwell processors in all regions and the GA of Skylake processors are now all available for you and your applications. If you're new to GCP, you can try all of this out when you sign up for the $300 free trial. We'd love to hear about the amazing things you do with these Compute Engine enhancements in the comments below.

Oregon region (us-west1) adds third zone, Cloud SQL and Regional Managed Instance Groups



Last summer we launched the Oregon region (us-west1) with two zones and a number of Google Cloud Platform (GCP) services. The region quickly became popular with developers looking to place applications close to users along the west coast of North America.
Today we're opening a third zone in Oregon (us-west1-c) and adding two services: Cloud SQL and Regional Managed Instance Groups (MIGs). Cloud SQL is a fully managed service supporting PostgreSQL (beta) and MySQL relational databases in the cloud. Regional MIGs make it easy to improve application availability by spreading virtual machine instances across three zones.
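Creating a regional MIG takes a single command. This sketch assumes a hypothetical instance template and group name:

```shell
# Hypothetical names. A regional managed instance group distributes
# its instances across the region's zones (here, the three us-west1
# zones) for higher availability.
gcloud beta compute instance-groups managed create web-mig \
    --region us-west1 \
    --template web-template \
    --size 3
```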

All three zones in Oregon (us-west1) contain the following services:
  • Compute Engine
  • Container Engine
  • Dataflow
  • Dataproc
  • Datalab

As with all GCP zones, the following services are available to support compute workloads:
In addition to Oregon, we'll soon be opening new regions in North America in Montreal and California. Our locations page provides the latest updates to GCP regions, zones and the services available in each. Give us a shout to request early access to new regions and help us prioritize what we build for you next.



Solving the enterprise attribution challenge


On Tuesday, we announced Google Attribution, a new free product to help marketers measure the impact of their marketing across devices and across channels. Our goal is to help every business, large or small, solve the attribution challenge and better understand if their marketing is working. To meet the needs of our largest advertisers, we’re also introducing an updated version of our enterprise attribution product, Google Attribution 360.

Google Attribution 360

Just like with the free product, Attribution 360 is easy to set up, works across channels and across devices, and makes taking action easy. Both products also offer data-driven attribution, which uses machine learning to determine how much credit to assign to each step in the consumer journey. In addition, Attribution 360 is designed to be highly customizable and can measure ads from DoubleClick Campaign Manager. This means that you can get a view of your marketing performance that matches up with how you view your business. The new version of Attribution 360 is currently in beta, and will launch more broadly later this year.
Here’s how Attribution 360 is designed to solve the enterprise attribution challenge:
Fast setup
Attribution 360 offers seamless integrations with Google Analytics, DoubleClick Campaign Manager, DoubleClick Bid Manager, and DoubleClick Search. You’ll get all your marketing event data in Attribution 360 with no need for retagging and no data loss between systems. You simply link your accounts and reports will usually be available within 48 hours.
“The setup process for Attribution 360 reduced the time to first data from 3 months to just a matter of weeks. Using Google Analytics data was so much easier, we already had our GA tags onsite and validated. It just made life so much easier.” - Eric Bernhard, Marketing Innovation Manager at Dixons
Flexible data
Attribution 360 has a rich set of features to simplify the challenge of importing and managing your external data sources. You can ensure that your data is complete and correct with enhanced preview capabilities, in-product data quality reporting, and the ability to reprocess your data if you make changes to your setup.
Measures TV
The TV Attribution feature within Attribution 360 helps businesses integrate digital and broadcast data to understand their cross-channel performance. Good news: TV Attribution is now included in Attribution 360 at no extra cost and is available directly in the Attribution 360 UI.
Easy to take action
Of course the insights you get are only valuable if you can put them into action. Here are two ways Attribution 360 makes it easy:
  • The in-product Digital Optimizer lets you explore a variety of optimization scenarios to inform future marketing investments and make your media more effective and efficient.
  • Programmatic connectors send results directly to bidding platforms so your media buys use the most accurate attribution data.
Here’s how one of our customers, Confused.com, uses Attribution 360 to improve their search advertising.

Confused.com increases paid search conversions by 28% with Google Attribution 360

Launched in 2001, Confused.com was the first insurance comparison site in the United Kingdom. This 100% e-commerce company helps people save money on car insurance and related services.
Paid search is a critical part of Confused.com’s acquisition strategy. CEO Martin Coriat challenged his marketing team to improve paid search with data-driven insights.
To more deeply understand how people really interact with Confused.com’s marketing messages, the team implemented Attribution 360. Data-driven attribution insights showed each keyword’s role in the customer journey and the associated value to Confused.com. As suspected, data-driven attribution gave Confused.com proof of over-investment on some lower-funnel keywords.
Attribution 360 also revealed opportunities to invest in untapped upper-funnel keywords. Using these insights, the team was able to take immediate action in re-allocating spending to help drive up quote requests by 28% at a lower cost per acquisition.

"With careful data analysis and insights from Attribution 360, we've increased our quote volume and lowered our overall cost per acquisition. We're now able to re-invest what we've saved back into paid search and put real pressure on our competitors." - Sophia Glennon, PPC Manager at Confused.com

You can read the full Confused.com case study here.

We look forward to sharing more updates on Attribution and Attribution 360 as we continue to invest in features and expand availability to more marketers.

Powering ads and analytics innovations with machine learning

This post originally appeared on the Inside AdWords blog.

Good morning, San Francisco! As the city starts to wake up, my team and I are gearing up to welcome over a thousand marketers from around the world to Google Marketing Next, our annual event where we unveil the latest innovations for ads, analytics and DoubleClick.

A big theme you’ll hear about today is machine learning. This technology is critical to helping marketers analyze countless signals in real time and reach consumers with more useful ads at the right moments. Machine learning is also key to measuring the consumer journeys that now span multiple devices and channels across both the digital and physical worlds.

It's a growing and important trend for marketers today, and will continue to shape how you build for success in the future.

Below is a sneak preview of a few of the announcements I’ll be making. There are many more that I can’t wait to share with you. Be sure to tune in at 9:00 a.m. PT/12:00 p.m. ET.


Hello Google Attribution, goodbye last-click

Today, we're announcing Google Attribution, a new product to answer the question that has challenged marketers for ages, “Is my marketing working?” For the first time, Google Attribution makes it possible for every marketer to measure the impact of their marketing across devices and across channels -- all in one place, and at no additional cost.

With today’s complex customer journey, your business might have a dozen interactions with a single person - across display, video, search, social, and on your site or app. And all these moments take place on multiple devices, making them even harder to measure.

Marketers have been trying to make attribution work for years, but existing solutions just don't cut it. Most attribution tools:

  • Are hard to set up
  • Lose track of the customer journey when people move between devices
  • Aren’t integrated with ad tools, making it difficult to take action
As a result, many marketers are stuck using last-click attribution, which misses the impact of most marketing touchpoints. With Google Attribution, we’ll help you understand how all of your marketing efforts work together and deliver the insights you need to make them work better.

Here’s how it works:
Integrations with AdWords, Google Analytics and DoubleClick Search make it easy to bring together data from all your marketing channels. The end result is a complete view of your performance.
Google Attribution also makes it easy to switch to data-driven attribution. Data-driven attribution uses machine learning to determine how much credit to assign to each step in the consumer journey -- from the first time they engage with your brand for early research down to the final click before purchase. It analyzes your account's unique conversion patterns, comparing the paths of customers who convert to those who don’t, so you get results that accurately represent your business.

Finally, you can take fast action to optimize your ads with Google Attribution because it integrates with ads tools like AdWords and DoubleClick Search. The results are immediately available for reporting, updating bids or moving budget between channels.
"Given today's multi-device landscape, cross-channel measurement and attribution is indispensable for HelloFresh to have a 360º panorama of our customer journey and gives us the best data to make the best decisions." - Karl Villanueva, Head of Paid Search & Display, HelloFresh
Google Attribution is now in beta and will roll out to more advertisers over the coming months.

Mobile-local innovations drive more consumers to stores

Mobile has blurred the line between the digital and physical worlds. While most purchases still happen in-store, people are increasingly turning to their smartphones to do research beforehand -- especially on Google.com and Google Maps.
To help consumers decide where to go, marketers are using innovations like Promoted Places and local inventory ads to showcase special offers and what’s in-stock at nearby stores. Now, you can also make it easy for them to find a store from your YouTube video ads using location extensions.

We introduced store visits measurement back in 2014 to help marketers gain more insight about consumer journeys that start online and end in a store. In under three years, advertisers globally have measured over 5 billion store visits using AdWords.

Only Google has the advanced machine learning and mapping technology to help you accurately measure store visits at scale and use these insights to deliver better local ad experiences. Our recent upgrade to deep learning models enables us to train on larger data sets and measure more store visits in challenging scenarios with greater confidence. This includes visits that happen in multi-story malls or dense cities like Tokyo, Japan and São Paulo, Brazil where many business locations are situated close together. Store visits measurement is already available for Search, Shopping and Display campaigns. And soon this technology will be available for YouTube TrueView campaigns to help you measure the impact of video ads on foot traffic to your stores.

Still, measuring store visits is just one part of the equation. You also need insights into how your online ads drive sales for your business. You need to know: are my online ads ringing my cash register? In the coming months, we’ll be rolling out store sales measurement at the device and campaign levels. This will allow you to measure in-store revenue in addition to the store visits delivered by your Search and Shopping ads.

If you collect email information at the point of sale for your loyalty program, you can import store transactions directly into AdWords yourself or through a third-party data partner. And even if your business doesn’t have a large loyalty program, you can still measure store sales by taking advantage of Google’s third-party partnerships, which capture approximately 70% of credit and debit card transactions in the United States. There is no time-consuming setup or costly integrations required on your end. You also don’t need to share any customer information. After you opt in, we can automatically report on your store sales in AdWords.

Both solutions match transactions back to Google ads in a secure and privacy-safe way, and only report on aggregated and anonymized store sales to protect your customer data.

Virgin Holidays discovered that when it factors in store sales, its search campaigns generate double the profit compared to looking at online KPIs alone. A customer purchasing in-store after clicking on a search ad is also three times more profitable than an online conversion. Says James Libor, Performance Marketing and Technology Manager, “Store sales measurement gives us a more accurate view of the impact our digital investment has on in-store results, especially through mobile. This has empowered us to invest more budget in Search to better support this critical part of the consumer journey.”


Machine learning delivers more powerful audience insights to search ads

People are often searching with the intent to buy. That’s why we’re bringing in-market audiences to Search to help you reach users who are ready to purchase the products and services you offer. For example, if you’re a car dealership, you can increase your reach among users who have already searched for “SUVs with best gas mileage” and “spacious SUVs”. In-market audiences use the power of machine learning to better understand purchase intent, analyzing trillions of search queries and activity across millions of websites to help figure out when people are close to buying and surface ads that will be more relevant and interesting to them.

This is an important moment for marketers. The convergence of mobile, data and machine learning will unlock new opportunities, and I’m excited to be on this journey with all of you.
Please join us at 9:00 a.m. PT/12:00 p.m. ET to see the entire keynote at Google Marketing Next, and all the other innovations we’re planning to announce for ads, analytics and DoubleClick.

Firebase Analytics Gets New Features and a Familiar New Name

Can it be just a year since we announced the expansion of Firebase to become Google's integrated app developer platform at I/O 2016? That Firebase launch came complete with brand new app analytics reporting and features, developed in conjunction with the Google Analytics team.

Now, at I/O 2017, we're delighted to announce some exciting new features and integrations that will help take our app analytics to the next level. But first, we’d like to highlight a bit of housekeeping. As of today, we are retiring the name Firebase Analytics. Going forward, all app analytics reports will fall under the Google Analytics brand.

This latest generation of app analytics has always, and will continue to be, available in both the Firebase console and in Google Analytics. We think that unifying app analytics under the Google Analytics banner will better communicate that our users are getting the same great app data in both places. In Firebase and related documentation, you'll see app analytics referred to as Google Analytics for Firebase. Read on to the end of this post for more details about this change.

One other note: the launches highlighted below apply to our latest generation of app analytics; you need to be using the Firebase SDK to get these new features.

Now let’s take a look at what’s new.

Integration with AdMob
App analytics is now fully integrated with AdMob. Revenue, impression and click data from AdMob can now be connected with the rest of your event data collected by the Firebase SDK, all of it available in the latest Google Analytics app reports and/or in the Firebase console.

For app companies, this means that ad revenue can be factored into analytics data, so Analytics reports can capture each app’s performance. The integration combines AdMob data with Analytics data at the event level to produce brand new metrics, and to facilitate deep dives into existing metrics. You can answer questions like:
  • What is the true lifetime value for a given segment, factoring in both ad revenue and purchase revenue?
  • How do rewarded ads impact user engagement and LTV?
  • On which screens are users being exposed to advertising the most or the least?
With this change, you can now have a complete picture of the most important metrics for your business ― all in one place.
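As a toy illustration of the kind of blended metric this integration enables (the segment names, event data and revenue figures below are hypothetical, and this is a plain aggregation sketch, not Firebase SDK code), combining ad revenue and purchase revenue into a single per-segment LTV might look like:

```python
# Hypothetical data: per-event revenue in cents, tagged by user segment
# and revenue type. A toy blended-LTV aggregation, not Firebase SDK code.
from collections import defaultdict

events = [
    # (segment, revenue_type, amount_in_cents)
    ("rewarded_ad_viewers", "ad_revenue", 12),
    ("rewarded_ad_viewers", "purchase", 499),
    ("non_viewers", "purchase", 199),
    ("rewarded_ad_viewers", "ad_revenue", 489),
]
users_per_segment = {"rewarded_ad_viewers": 2, "non_viewers": 1}

totals = defaultdict(int)
for segment, _, cents in events:
    totals[segment] += cents  # ad revenue and purchase revenue combined

# Blended lifetime value per user, in cents
ltv = {seg: totals[seg] // users_per_segment[seg] for seg in totals}
print(ltv)  # {'rewarded_ad_viewers': 500, 'non_viewers': 199}
```

The point of the sketch: once ad revenue and purchase revenue land in the same event stream, a single pass over that stream answers questions that previously required joining two separate systems.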

Custom parameter reporting
"What's the average amount of time users spend in my game before they make their first purchase?" Many of you have asked us for the ability to report on specific data points like this that are important to your business.

Custom parameter reporting is here to make that possible. You can now register up to 50 custom event parameters and see their details in your Analytics reports.
  • If you supply numeric parameters you’ll see a graph of the average and the sum of that parameter.
  • If you supply textual parameters you’ll see a breakdown of the most popular values.
As with the rest of your Analytics reports, you can also apply Audience and User Property filters to your custom parameter reports to identify trends among different segments of your userbase.

To start using custom parameter reporting for one of your events, look for it in the detail report for that event. You'll see instructions for setting things up there.
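To make the two report types concrete, here is a small sketch of the aggregations described above: a sum and average for a numeric parameter, and a popularity breakdown for a textual one. The event names and values are made up for illustration; this mimics the report output, not actual SDK behavior.

```python
# Toy event log with a numeric custom parameter (time_spent) and a
# textual one (difficulty). All data here is hypothetical.
from collections import Counter

events = [
    {"name": "level_complete", "params": {"time_spent": 30, "difficulty": "easy"}},
    {"name": "level_complete", "params": {"time_spent": 50, "difficulty": "hard"}},
    {"name": "level_complete", "params": {"time_spent": 40, "difficulty": "easy"}},
]

# Numeric parameters: report the sum and the average
times = [e["params"]["time_spent"] for e in events]
numeric_report = {"sum": sum(times), "average": sum(times) / len(times)}

# Textual parameters: report a breakdown of the most popular values
text_report = Counter(e["params"]["difficulty"] for e in events).most_common()

print(numeric_report)  # {'sum': 120, 'average': 40.0}
print(text_report)     # [('easy', 2), ('hard', 1)]
```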

Integration with DoubleClick and third-parties – Now in Beta
We're also pleased to announce a new integration with both DoubleClick Campaign Manager and DoubleClick Bid Manager. Firebase-tracked install (first open) and post-install events can now easily be imported back into DoubleClick as conversions.

This is a boost for app marketers who want a clearer view of the effect their display and video marketing has on customer app behavior. Advertisers can make better decisions (for all kinds of ads, programmatic included) as they integrate app analytics seamlessly with their buying, targeting and optimization choices in DoubleClick.

We also know that some of you use advertising platforms beyond AdWords and DoubleClick, so we continue to invest in integrating more third-party networks into our system (we’re now at 50 networks and growing). The goal: to allow app data from all your networks to come together in Google Analytics, so you can make even better advertising choices using all the data you collect. Learn more.

Real-time analytics for everyone
Google Analytics pioneered real-time reporting, so we know how important it is for our customers to have access to data as it happens. That’s why we’re so excited by the real-time capabilities we’ve introduced into our latest app reports. To refresh an announcement we made in March: StreamView and DebugView are now available to the general public. These features let you see how real-world users are interacting with your app right now.

StreamView visualizes events as they flow into our app reporting to give you a sense of how people around the world are using your app, right down to the city level. Then Snapshot lets you zoom in on a randomly selected individual user’s stream of events. And DebugView uses real-time reporting to help you improve your implementation, making it easy to confirm you’re measuring what you want, the way you want. DebugView is a terrific tool for app builders that shows you events, parameters and user properties for any individual development device. It can also highlight any events that contain invalid parameters.

Same product, familiar new name
As mentioned above, we're rebranding Firebase Analytics to make it plain that it's our recommended app analytics solution, and is fully a part of the Google Analytics family.

Our latest reports represent a new approach to app analytics, which we believe better reflects the way that users interact with apps. This means that these reports have different concepts and functionality when compared to the original app analytics reports in Google Analytics.

If you're used to using the original app analytics reports in Google Analytics, don’t worry: they're not going anywhere. But we recommend considering implementing the Firebase SDK with your next app update so you can start getting the latest features for app analytics.

Good data is one thing everyone can agree on: developers and marketers, global firms and fresh new start-ups. We've always been committed to app-centric reports, because analytics and data are the essential beginning to any long-term app strategy. We hope that these new features will give you more of what you need to build a successful future for your own apps.

Google Analytics is Enhancing Support for AMP

Over the past year, developers have adopted the Accelerated Mobile Pages (AMP) technology to build faster-loading pages for all types of sites, ranging from news to recipes to e-commerce. Billions of AMP pages have been published to date and Google Analytics continues its commitment to supporting our customers who have adopted AMP.

However, we have heard feedback from Google Analytics customers about challenges in understanding the full customer journey, because site visitors are identified inconsistently across AMP and non-AMP pages. So we're announcing today that we are rolling out an enhancement that will give you an even more accurate understanding of how people are engaging with your business across the AMP and non-AMP pages of your website.

How will this work?
This change brings consistent user identification across AMP and non-AMP pages served from your domain, improving user analysis going forward by unifying your users across the two page formats. It does not affect AMP pages served from the Google AMP Cache or any other AMP cache.

When will this happen?
We expect these improvements to be complete, across all Google Analytics accounts, over the next few weeks.

Are there any other implications of this change?
As we unify your AMP and non-AMP users when they visit your site in the future, you may see changes in your user and session counts, including changes to related metrics. User and session counts will go down over time as we recognize that two formerly distinct IDs are in fact the same user; however, at the time this change commences, the metric New Users may rise temporarily as IDs are reset.

In addition, metrics like time on site, page views per session, and bounce rate will rise, since visits that include both AMP and non-AMP pageviews are no longer split into multiple sessions. This transitional effect will continue until all your users who have viewed AMP pages in the past are unified (this can take a short or long time, depending on how quickly your users return to your site or app).
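A hypothetical before-and-after sketch (toy data, not Analytics internals) shows why user counts fall while pages per session rise once the two IDs are recognized as one person:

```python
# Before unification: one visitor counted under two client IDs, with the
# AMP pageviews and the non-AMP pageview split into separate sessions.
# All IDs and page names below are made up for illustration.
before = {
    "amp_client_id": ["amp/article-1", "amp/article-2"],
    "web_client_id": ["checkout"],
}
# After unification: the same pageviews under a single client ID.
after = {"unified_client_id": ["amp/article-1", "amp/article-2", "checkout"]}

def pages_per_session(sessions):
    """Average number of pageviews per session."""
    return sum(len(pages) for pages in sessions.values()) / len(sessions)

print(len(before), pages_per_session(before))  # 2 users, 1.5 pages/session
print(len(after), pages_per_session(after))    # 1 user, 3.0 pages/session
```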

Is there anything I need to do to get this update?
No action is required on your part; these changes will be rolled out automatically.

Will there be changes to unify users who view my pages both on my domain and in other contexts?
Some AMP pages are not visited directly on the domain where the content is originally hosted, but instead via AMP caches or in platform experiences. However, we decided to focus on fixing the publisher-domain case first, as this was the fastest way we could add value for our clients.

We are committed to ensuring the best-quality data for user journey analysis across AMP and non-AMP pages alike, and this change makes that easy for AMP pages served on your domain. We hope you enjoy these improvements, and as always, happy analyzing!

Introducing Google Cloud IoT Core: for securely connecting and managing IoT devices at scale



Today we're announcing a new fully-managed Google Cloud Platform (GCP) service called Google Cloud IoT Core. Cloud IoT Core makes it easy for you to securely connect your globally distributed devices to GCP, centrally manage them and build rich applications by integrating with our data analytics services. Furthermore, all data ingestion, scalability, availability and performance needs are automatically managed for you in GCP style.

When used as part of a broader Google Cloud IoT solution, Cloud IoT Core gives you access to new operational insights that can help your business react to, and optimize for, change in real time. This advantage has value across multiple industries; for example:
  • Utilities can monitor, analyze and predict consumer energy usage in real time
  • Transportation and logistics firms can proactively stage the right vehicles/vessels/aircraft in the right places at the right times
  • Oil and gas and manufacturing companies can enable intelligent scheduling of equipment maintenance to maximize production and minimize downtime

So, why is this the right time for Cloud IoT Core?


About all the things


Many enterprises that rely on industrial devices such as sensors, conveyor belts, farming equipment, medical equipment and pumps (particularly globally distributed ones) are struggling to monitor and manage those devices for several reasons:
  • Operational cost and complexity: The overhead of managing the deployment, maintenance and upgrades for exponentially more devices is stifling. And even with a custom solution in place, the resource investments required for necessary IT infrastructure are significant.
  • Patchwork security: Ensuring world-class, end-to-end security for globally distributed devices is out of reach or at least not a core competency for most organizations.
  • Data fragmentation: Despite the fact that machine-generated data is now an important data source for making good business decisions, the massive amount of data generated by these devices is often stored in silos with a short expiration date, and hence never reaches downstream analytic systems (or decision makers).
Cloud IoT Core is designed to help resolve these problems by removing risk, complexity and data silos from the device monitoring and management process. Instead, it offers you the ability to more securely connect and manage all your devices as a single global system. Through a single pane of glass you can ingest data generated by all those devices into a responsive data pipeline and, when combined with other Cloud IoT services, analyze and react to that data in real time.

Key features and benefits


Several key Cloud IoT Core features help you meet these goals, including:

  • Fast and easy setup and management: Cloud IoT Core lets you connect up to millions of globally dispersed devices into a single system, with smooth and even data ingestion ensured under any conditions. Devices are registered to your service quickly and easily via the industry-standard MQTT protocol. For Android Things-based devices, firmware updates can be automatic.
  • Security out-of-the-box: Secure all device data via industry-standard security protocols. (Combine Cloud IoT Core with Android Things for device operating-system security, as well.) Apply Google Cloud IAM roles to devices to control user access in a fine-grained way.
  • Native integration with analytic services: Ingest all your IoT data so you can manage it as a single system and then easily connect it to our native analytic services (including Google Cloud Dataflow, Google BigQuery and Google Cloud Machine Learning Engine) and partner BI solutions (such as Looker, Qlik, Tableau and Zoomdata). Pinpoint potential problems and uncover solutions using interactive data visualizations, or build rich machine-learning models that reflect how your business works.
  • Auto-managed infrastructure: All this in the form of a fully-managed, pay-as-you-go GCP service, with no infrastructure for you to deploy, scale or manage.
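As a sketch of what MQTT-based registration looks like from the device side, a device composes a client ID and telemetry topic and connects to the service's MQTT bridge. The project, region, registry and device names below are made up, and the path formats follow Cloud IoT Core's documented MQTT bridge conventions as we understand them; treat this as an illustration rather than production code.

```python
# Sketch (hypothetical IDs): composing the MQTT connection parameters a
# device would use with Cloud IoT Core's MQTT bridge.
PROJECT, REGION = "my-gcp-project", "asia-east1"
REGISTRY, DEVICE = "sensor-fleet", "pump-0042"

# The MQTT client ID encodes the device's full resource path.
client_id = (
    f"projects/{PROJECT}/locations/{REGION}"
    f"/registries/{REGISTRY}/devices/{DEVICE}"
)

# Telemetry is published to a per-device events topic.
telemetry_topic = f"/devices/{DEVICE}/events"

# With an MQTT client library (e.g. paho-mqtt), the device would connect
# to the bridge over TLS using this client_id, authenticate with a
# short-lived JWT as the password, and publish readings to telemetry_topic.
print(client_id)
print(telemetry_topic)
```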
"With Google Cloud IoT Core, we have been able to connect large fleets of bicycles to the cloud and quickly build a smart transportation fleet management tool that provides operators with a real-time view of bicycle utilization, distribution and performance metrics, and it forecasts demand for our customers."
 — Jose L. Ugia, VP Engineering, Noa Technologies

Next steps

Cloud IoT Core is currently available as a private beta, and we’re launching with these hardware and software partners:

Cloud IoT Device Partners
Cloud IoT Application Partners

When generally available, Cloud IoT Core will serve as an important, foundational tool for hardware partners and customers alike, offering scalability, flexibility and efficiency for a growing set of IoT use cases. In the meantime, we look forward to your feedback!