Tag Archives: Storage & Databases

How to do serverless pixel tracking with GCP



Whether they’re opening a newsletter or visiting a shopping cart page, how users interact with web content is of great interest to publishers. One way to understand user behavior is with pixels: small 1x1 transparent images embedded into the web property. When loaded, the pixel calls a web server, which records the request parameters passed in the URL so they can be processed later.

Adding a pixel is easy, but hosting it and processing the request can be challenging for various reasons:
  • You need to set up, manage and monitor your pixel servers
  • Users are usually global, which means that you need pixel servers around the world
  • User visits are spiky, so pixel servers must scale up to sustain the load and scale down to limit the spend
Google Cloud Platform (GCP) services such as Container Engine and managed autoscaled instance groups can help with those challenges. But at Google Cloud, we think companies should avoid managing infrastructure whenever possible.

For example, we recently worked with GCP partner and professional services firm DoiT International to build a pixel tracking platform that relieves the administrator from setting up or managing any servers. Instead, this serverless pixel tracking solution leverages managed GCP services, including:
  • Google Cloud Storage: A global or regional object store that offers several classes such as Standard, Nearline and Coldline, with different prices and SLAs depending on your needs. In our case, we used Standard, which offers low, millisecond latency
  • Google HTTP(S) Load Balancer: A global anycast IP load balancer service that can scale to millions of QPS with integrated logging. It can also be used with Cloud CDN to avoid unnecessary requests to Google Cloud Storage by caching pixels closer to users at Google's network edge
  • BigQuery: Google's fully managed, petabyte-scale, low-cost enterprise data warehouse for analytics
  • Stackdriver Logging: A logging system that allows you to store, search, analyze, monitor and alert on log data and events from GCP and Amazon Web Services (AWS). It supports Google load balancers and can export data to Cloud Storage, BigQuery or Pub/Sub
Tracking pixels with these services works as follows:
  1. A client calls a pixel URL that's served directly by Cloud Storage.
  2. A Google Cloud Load Balancer in front of Cloud Storage records the request to Stackdriver Logging, whether there was a cache hit or not.
  3. Stackdriver Logging exports every request to BigQuery as it comes in; BigQuery then acts as a storage and querying engine for ad-hoc analytics that help business analysts better understand their users (see the sketch below).
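
As a rough illustration of step 3, here's a minimal sketch of querying the exported logs with the BigQuery Python client. The project, dataset and table names (and the exact log schema) are assumptions; they depend on how your Stackdriver Logging export is configured.

# A minimal sketch (hypothetical project/dataset/table names): count pixel
# hits per requested URL from load-balancer logs exported to BigQuery.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  httpRequest.requestUrl AS pixel_url,
  COUNT(*) AS hits
FROM `your-project.pixel_logs.requests`
GROUP BY pixel_url
ORDER BY hits DESC
LIMIT 10
"""

# Iterating the query job waits for the query to finish and yields rows.
for row in client.query(query):
    print(row.pixel_url, row.hits)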


All those services are fully managed and do not require you to set up any instances or VMs. You can learn more about this solution in the accompanying tutorial.
Going forward, we plan to build more serverless solutions on top of GCP managed offerings. Let us know in the comments if there’s a solution that you’d like us to build!

Cloud Spanner is now production-ready; let the migrations begin!



Cloud Spanner, the world’s first horizontally scalable and strongly consistent relational database service, is now generally available for your mission-critical OLTP applications.

We’ve carefully designed Cloud Spanner to meet customer requirements for enterprise databases — including ANSI 2011 SQL support, ACID transactions, 99.999% availability and strong consistency — without compromising latency. As a combined software/hardware solution that includes atomic clocks and GPS receivers across Google’s global network, Cloud Spanner offers additional accuracy, reliability and performance in the form of a fully managed cloud database service. Thanks to this unique combination of qualities, Cloud Spanner is already delivering long-term value to customers with mission-critical applications in the cloud, including customer authentication systems, business-transaction and inventory-management systems, and high-volume media systems that require low latency and high throughput. For example, Snap uses Cloud Spanner to power part of its search infrastructure.

Looking toward migration


In preparation for general availability, we’ve been working closely with our partners to make adoption as smooth and easy as possible. Today, we're also announcing our initial data-integration partners: Alooma, Informatica and Xplenty.

Now that these partners are in the early stages of Cloud Spanner “lift-and-shift” migration projects for customers, we asked a couple of them to pass along some of their insights about the customer value of Cloud Spanner, as well as any advice about planning for a successful migration:

From Alooma:

"Cloud Spanner is a game-changer because it offers horizontally scalable, strongly consistent, highly available OLTP infrastructure in the cloud for the first time. To accelerate migrations, we recommend that customers replicate their data continuously between the source OLTP database and Cloud Spanner, thereby maintaining both infrastructures in the same state — this allows them to migrate their workloads gradually in a predictable manner."
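
To make that concrete, here's a minimal sketch of what the Cloud Spanner side of such a write path can look like with the Python client. The instance, database, table and column names are hypothetical, and the continuous replication itself is handled by partner tooling such as Alooma's.

# A minimal sketch, assuming a Spanner instance 'prod-instance', a database
# 'orders-db' and a table orders(order_id, customer_id, total); all names
# are placeholders.
from google.cloud import spanner

client = spanner.Client()
database = client.instance('prod-instance').database('orders-db')

def record_order(transaction):
    # Everything inside run_in_transaction commits atomically, or not at all.
    transaction.insert(
        table='orders',
        columns=('order_id', 'customer_id', 'total'),
        values=[('o-1001', 'c-42', 99.95)],
    )

# Cloud Spanner retries the function automatically on transient aborts.
database.run_in_transaction(record_order)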

From Informatica:
“Informatica customers are stretching the limits of latency and data volumes, and need innovative enterprise-scale capabilities to help them outperform their competition. We are excited about Cloud Spanner because it provides a completely new way for our mutual customers to disrupt their markets. For integration, migration and other use cases, we are partnering with Google to help them ingest data into Cloud Spanner and integrate a variety of heterogeneous batch, real-time, and streaming data in a highly scalable, performant and secure way.”

From Xplenty:
"Cloud Spanner is one of those cloud-based technologies for which businesses have been waiting: With its horizontal scalability and ACID compliance, it’s ideal for those who seek the lower TCO of a fully managed cloud-based service without sacrificing the features of a legacy, on-premises database. In our experience with customers migrating to Cloud Spanner, important considerations include accounting for data types, embedded code and schema definitions, as well as understanding Cloud Spanner’s security model to efficiently migrate your current security and access-control implementation."

Next steps


We encourage you to dive into a no-cost trial to experience first-hand the value of a relational database service that offers strong consistency, mission-critical availability and global scale (contact us about multi-regional instances) with no workarounds — and with no infrastructure for you to deploy, scale or manage. (Read more about Spanner’s evolution inside Google in this new paper presented at the SIGMOD ‘17 conference today.) If you like what you see, a growing partner ecosystem is standing by for migration help, and to add further value to Cloud Spanner use cases via data analytics and visualization tooling.

Compute Engine machine types with up to 64 vCPUs now ready for your production workloads



Today, we're happy to announce general availability for our largest virtual machine shapes, including both predefined and custom machine types, with up to 64 virtual CPUs and 416 GB of memory.


64 vCPU machine types are available on our Haswell, Broadwell and Skylake (currently in Alpha) generation Intel processor host machines.
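
As a hedged sketch, creating one of these machines programmatically with the Compute Engine API could look like the following; the project, zone, instance name and image are placeholders. `n1-standard-64` pairs 64 vCPUs with 240 GB of memory, while a custom machine type such as `custom-64-425984` reaches the full 416 GB.

# A minimal sketch using the google-api-python-client library; all names
# below are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

project = 'your-project'
zone = 'us-central1-b'

config = {
    'name': 'big-vm',
    # 64 vCPUs with 240 GB of RAM; swap in 'custom-64-425984' for 416 GB.
    'machineType': 'zones/%s/machineTypes/n1-standard-64' % zone,
    'disks': [{
        'boot': True,
        'autoDelete': True,
        'initializeParams': {
            'sourceImage': 'projects/debian-cloud/global/images/family/debian-8',
        },
    }],
    'networkInterfaces': [{
        'network': 'global/networks/default',
        'accessConfigs': [{'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}],
    }],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print(operation['name'])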

Tim Kelton, co-founder and Cloud Architect of Descartes Labs, an early adopter of our 64 vCPU machine types, had this to say:
"Recently we used the 64 vCPU instances during the building of both our global composite imagery layers and GeoVisual Search. In both cases, our parallel processing jobs needed tens of thousands of CPU hours to complete the task. The new 64 vCPU instances allow us to work across more satellite imagery scenes simultaneously on a single instance, dramatically speeding up our total processing times."
The new 64-vCPU machines are available for use today. If you're new to GCP and want to give these larger virtual machines a try, it’s easy to get started with our $300 credit for 12 months.

Google Cloud Natural Language API launches new features and Cloud Spanner graduates to GA



Today at Google Cloud Next London we're excited to announce product news that will help customers innovate and transform their businesses faster via the cloud: first, that Google Cloud Natural Language API is adding support for new languages and entity sentiment analysis, and second, that Google Cloud Spanner is graduating to general availability (GA).

Cloud Natural Language API beta


Since we launched Cloud Natural Language API, a fully managed service for extracting meaning from text via machine learning, we’ve seen customers such as Evernote and Ocado enhance their businesses in fascinating ways. For example, they use Cloud Natural Language API to analyze customer feedback and sentiment, extract key entities and metadata from unstructured text such as emails or web articles, and enable novel features (such as deriving action items from meeting notes).

These use cases, among many others, highlighted the need to expand language support and improve the quality of our base NLU technology. We've incorporated this feedback into the product and are pleased to announce the following new capabilities, now in beta:

  • Expanded language support: Entity, document sentiment and syntax analysis now support Chinese (Simplified and Traditional), French, German, Italian, Korean and Portuguese, in addition to the existing support for English, Spanish and Japanese.
  • Entity sentiment analysis: A new method identifies the entities in a block of text and also determines the sentiment for each of those entities, not just for the whole document or sentence. Entity sentiment analysis is currently available for English only. For more information, see Analyzing Entity Sentiment.
  • Improved quality for sentiment and entity analysis: As part of our continuous effort to improve the quality of our base models, this release also includes improved models for sentiment and entity analysis.

Early-access users of this new functionality, such as Wootric, are already using the expanded language support and the new entity sentiment analysis feature to better understand customer sentiment around brands and products. For example, given customer feedback such as “the phone is expensive but has great battery life,” users can now determine that the sentiment for “phone” is negative while the sentiment for “battery life” is positive.
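
Here's a minimal sketch of that workflow with the Cloud Natural Language Python client; note that the exact client surface varies a little between library versions.

# A minimal sketch: run entity sentiment analysis on the example sentence
# above with the google-cloud-language client library.
from google.cloud import language

client = language.LanguageServiceClient()

document = language.types.Document(
    content='The phone is expensive but has great battery life.',
    type=language.enums.Document.Type.PLAIN_TEXT,
)

# analyze_entity_sentiment returns each entity with its own sentiment.
response = client.analyze_entity_sentiment(document=document)
for entity in response.entities:
    # score < 0 means negative sentiment, > 0 positive; magnitude is strength.
    print(entity.name, entity.sentiment.score, entity.sentiment.magnitude)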

As the API becomes more widely adopted, we're looking forward to seeing more interesting and useful applications of it.

Cloud Spanner enters GA

Announced in March at Google Cloud Next ‘17, Cloud Spanner is the world’s first fully managed, horizontally scalable relational database service for mission-critical online transaction processing (OLTP) applications. Cloud Spanner is specifically designed to meet customer requirements for strong consistency, high availability and global scale, qualities that make it unique as a service.

During the beta period, we were thrilled to see customers unlock new use cases in the cloud with Cloud Spanner, including:

  • Powering mission-critical applications like customer authentication and provisioning for multi-national businesses
  • Building consistent systems for business transactions and inventory management in the financial services and retail industries
  • Supporting incredibly high-volume systems that need low latency and high throughput in the advertising and media industries

As with all our other services, GCP handles all the performance, scalability and availability needs automatically in a pay-as-you-go way.

On May 16, Cloud Spanner will reach a further milestone by becoming generally available. Currently we're offering regional instances, with multi-regional instances coming later this year. We've been Spanner users ourselves for more than five years, supporting a variety of mission-critical global apps, and we can’t wait to see which workloads you bring to the cloud and which new ones you build next!

Google Cloud Storage introduces Cloud Pub/Sub notifications



Google Cloud Storage has always been a high-performance and cost-effective place to store data objects. Now it’s also easy to build workflows that are triggered when objects are created or deleted, or when their metadata changes.

Suppose you want to take some action every time a change occurs in one of your Cloud Storage buckets. You might want to automatically update sales projections every day when sales uploads its new daily totals. You might need to remove a resource from a search index when an object is deleted. Or perhaps you want to update the thumbnail when someone makes a change to an image. The ability to respond to changes in a Cloud Storage bucket gives you increased responsiveness, control and flexibility.

Cloud Pub/Sub Support


We’re pleased to announce that Cloud Storage can now send change notifications to a Google Cloud Pub/Sub topic. Cloud Pub/Sub is a powerful messaging platform that allows you to build fast, reliable and more secure messaging solutions. Cloud Pub/Sub support brings many new capabilities to Cloud Storage notifications, such as pulling from subscriptions instead of requiring users to configure webhooks, delivering copies of each message to many subscribers, and filtering messages by event type or prefix.
You can start sending Cloud Storage notifications to Cloud Pub/Sub by reading our getting started guide. Once you’ve enabled the Cloud Pub/Sub API and downloaded the latest version of the Cloud SDK, you can set up notification triggers from your Cloud Storage bucket to your Cloud Pub/Sub topic with the following command:

$> gsutil notification create -f json -t your-topic gs://your-bucket

From that point on, any changes to the contents of your Cloud Storage bucket trigger a message to your Cloud Pub/Sub topic. You can then create Cloud Pub/Sub subscriptions on that topic and pull messages from those subscriptions in your programs, like in this example Python app.
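
For instance, a minimal Python subscriber might look like this; the project and subscription names are placeholders, and the subscription must already exist on your notification topic.

# A minimal sketch: pull Cloud Storage notifications from a Pub/Sub
# subscription with the google-cloud-pubsub client library.
import time

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path('your-project', 'your-subscription')

def callback(message):
    # Cloud Storage sets attributes describing the event and the object.
    print('Event type:', message.attributes.get('eventType'))
    print('Object:', message.attributes.get('objectId'))
    message.ack()  # acknowledge so the message is not redelivered

subscriber.subscribe(subscription_path, callback=callback)

# Messages are handled on background threads; keep the main thread alive.
while True:
    time.sleep(60)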

Cloud Functions

Cloud Pub/Sub is a powerful and flexible way to respond to changes in a bucket. However, for some tasks you may prefer the simplicity of deploying a small, serverless function that just describes the action you want to take in response to a change. For that, Google Cloud Functions supports Cloud Storage triggers.

Cloud Functions is a quick way to deploy cloud-based scripts in response to a wide variety of events, for example an HTTP request to a certain URL, or a new object in a Cloud Storage bucket.

Once you get started with Google Cloud Functions, you can learn about setting up a Cloud Storage trigger for your function. It’s as simple as adding a “--trigger-bucket” parameter to your deploy command:

$> gcloud beta functions deploy helloWorld --stage-bucket cloud-functions --trigger-bucket your-bucket

It’s fun to think about what’s possible when Cloud Storage objects aren’t just static entities, but can trigger a wide variety of tasks. We hope you’re as excited as we are!

Google Cloud Platform expands to Mars



Google Cloud Platform (GCP) is committed to meeting our customers’ needs—no matter where they are. Amidst our growing list of new regions, today we're pleased to announce our expansion to Mars. In addition to supporting some of the most demanding disaster-recovery and data-sovereignty needs of our Earth-based customers, we’re looking to the future cloud infrastructure needed for the exploration and ultimate colonization of the Red Planet.
Visit Mars with Google Street View
Mars has long captured the imagination as the most hospitable planet for future colonization, and expanding to Mars has been a top priority for Google. By opening a dedicated extraterrestrial cloud region, we're bringing the power of Google’s compute, network, and storage to the rest of the solar system, unlocking a plethora of possibilities for astronomy research, exploration of Martian natural resources and interplanetary life sciences. This region will also serve as an important node in an extensive network throughout the solar system.

Our first interplanetary data center—affectionately nicknamed “Ziggy Stardust”—will open in 2018. Our Mars exploration started as a 20% project with the Google Planets team, which mapped Mars and other bodies in space and found a suitable location in Gale Crater, near the landing site of NASA’s Curiosity rover.
Explore more of Mars in Google Maps
In order to ease the transition for our Earthling customers, Google Cloud Storage (GCS) is launching a new Earth-Mars Multi-Regional location. Users can store planet-redundant data across Earth and Mars, which means even if Earth experiences another asteroid strike like the one that wiped out the dinosaurs, your cat videos, selfies and other data will still be safe. Of course, we'll also store all public domain scientific data, history and arts free of charge so that the next global catastrophe doesn't send humanity back into the dark ages.

Customers can choose to store data exclusively in the new Mars region, outside of any controlled jurisdictions on Earth, ensuring that they're both compliant with and benefit from the terms of the Outer Space Treaty. The ability to store and process data on Mars enables low-latency data analysis pipelines and consumer apps to serve the expected influx of Mars explorers and colonists. How exciting would it be to stream movies of potatoes growing right from the craters and dunes of our new frontier?

One of our early access customers says “This will be a game changer for us. With GCS, we can store all the data collected from our rovers right on Mars and run big data analytics to query exabyte-scale datasets all in a matter of seconds. Our dream of colonizing Mars by 2020 can now become a reality.”
Walk inside our new data center in Google Street View
The Martian data center will become Google’s greenest facility yet by taking full advantage of its new location. The cold weather enables natural, unpowered cooling throughout the year, while the thin atmosphere and high winds allow the entire facility to be redundantly powered by entirely renewable sources.

But why stop at Mars? We're taking a moonshot at N+42 redundancy with galaxy-scale computing. While GCP is optimized for faster-than-light data coordination for databases, the Google Planets team is already hard at work mapping the rest of our solar system for future data center locations. Stay tuned and join our journey! We can’t wait to see the problems you solve and the breakthroughs you achieve.

P.S. Check out Curiosity’s journey across the Red Planet on Mars Street View.


Solution guide: Archive your cold data to Google Cloud Storage with Komprise



More than 56% of enterprises have more than half a petabyte of inactive data, but this “cold” data often lives on expensive primary storage platforms. Google Cloud Storage provides an opportunity to store this data cost-effectively and achieve significant savings, but storage and IT admins often face the challenge of identifying cold data and moving it non-disruptively.

Komprise, a Google Cloud technology partner, provides software that analyzes data across NFS and SMB/CIFS storage to identify inactive/cold data, and moves the data transparently to Cloud Storage, which can help to cut costs significantly. Working with Komprise, we’ve prepared a full tutorial guide that describes how customers can understand data usage and growth in their storage environment, get a customized ROI analysis and move this data to Cloud Storage based on specific policies.
Cloud Storage provides excellent options for customers looking to store infrequently accessed data at low cost using the Nearline or Coldline storage classes. If and when access to this data is needed, there are no access-time penalties; the data is available almost immediately. In addition, built-in object-level lifecycle management in Cloud Storage reduces the burden on admins by enabling policy-based movement of data across storage classes, as sketched below. With Komprise, customers can bring lifecycle management to their on-premises primary storage platforms and seamlessly move this data to the cloud. Komprise deploys in under 15 minutes, works across NFS, SMB/CIFS and object storage without any storage agents, adapts to file-system and network loads to run non-intrusively in the background, and scales out on demand.
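
For example, here's a minimal sketch of setting a lifecycle policy with the Cloud Storage Python client; the bucket name is a placeholder, and the exact client surface may vary between library versions.

# A minimal sketch: move objects older than a year to the cheaper Coldline
# storage class using built-in object lifecycle management.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('your-archive-bucket')

bucket.lifecycle_rules = [{
    'action': {'type': 'SetStorageClass', 'storageClass': 'COLDLINE'},
    'condition': {'age': 365},  # days since object creation
}]
bucket.patch()  # persist the new lifecycle configuration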

Teams can get started through this self-service tutorial or watch this on-demand webinar featuring Komprise COO Krishna Subramanian and Google Cloud Storage Product Manager Ben Chong. As always, don’t hesitate to reach out to us to explore which enterprise workloads make the most sense for your cloud initiatives.

Solution guide: backing up Windows files using CloudBerry Backup with Google Cloud Storage



Modern businesses increasingly depend on their data as a foundation for their operations. The more critical the reliance on that data, the more important it is to ensure it's protected with backups. Unfortunately, even with regular backups, you're still susceptible to data loss from a local disaster or human error. That's why many companies entrust their data to geographically distributed cloud storage providers like Google Cloud Platform (GCP). And when they do, they want convenient cloud backup automation tools that offer flexible backup options and quick on-demand restores.

One such tool is CloudBerry Backup (CBB), which has the following capabilities:

  • Creating incremental data copies with low impact on production workloads
  • Data encryption along all transfer paths
  • Flexible retention policy, allowing you to balance the volume of data stored and storage space used
  • Ability to carry out hybrid restores with the use of local and cloud storage resources

CBB includes a broad range of features out of the box, allowing you to address most of your cloud backup needs, and is designed to have low impact on production servers and applications.

CBB has a low-footprint backup client that you install on the desired server. After you provision a Google Cloud Storage bucket, attach it to CBB and create a backup plan to immediately start protecting your files in the cloud.

To simplify your cloud backup onboarding, check out the step-by-step tutorial on how to use CloudBerry Backup with Google Cloud Storage and easily restore any files.

Cloud SQL for PostgreSQL: Managed PostgreSQL for your mobile and geospatial applications in Google Cloud



At Google Cloud Next ‘17, we announced support for PostgreSQL as part of Google Cloud SQL, our managed database service. With its extensibility, strong standards compliance and support from a vibrant open-source community, Postgres is the database of choice for many developers, especially for powering geospatial and mobile applications. Cloud SQL already supports MySQL, and now, PostgreSQL users can also let Google take care of mundane database administration tasks like applying patches and managing backups and storage capacity, and focus on developing great applications.

Feature highlights

Storage and data protection
  • Flexible backups: Schedule automatic daily backups or run them on-demand.
  • Automatic storage increase: Enable automatic storage increase and Cloud SQL will add storage capacity whenever you approach your limit.

Connections
  • Open standards: We embrace the PostgreSQL wire protocol (the standard connection protocol for PostgreSQL databases) and SSL, so you can access your database from nearly any application, running anywhere.
  • Security features: Our Cloud SQL Proxy creates a local socket and uses OAuth to help establish a secure connection with your application or PostgreSQL tool. It automatically creates the SSL certificate and makes more secure connections easier for both dynamic and static IP addresses.

Extensibility
  • Geospatial support: Easily enable the popular PostGIS extension for geospatial objects in Postgres.
  • Custom instance sizes: Create your Postgres instances with the optimal amount of CPU and memory for your workloads.


Create Cloud SQL for PostgreSQL instances customized to your needs.
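
Putting the connection and extensibility pieces together, here's a minimal sketch of connecting through the Cloud SQL Proxy with psycopg2 and enabling PostGIS; the host, database name and credentials are placeholders.

# A minimal sketch: the Cloud SQL Proxy listens locally and forwards traffic
# to your instance over a secure connection.
import psycopg2

conn = psycopg2.connect(
    host='127.0.0.1',        # the proxy's local endpoint
    port=5432,
    dbname='geo',
    user='postgres',
    password='your-password',
)

# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute('CREATE EXTENSION IF NOT EXISTS postgis;')
    cur.execute('SELECT PostGIS_Version();')
    print(cur.fetchone())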


More features coming soon

We’re continuing to improve Cloud SQL for PostgreSQL during beta. Watch for the following:

  • Automatic failover for high availability
  • Read replicas
  • Additional extensions
  • Precise restores with point-in-time recovery
  • Compliance certification as part of Google’s Cloud Platform BAA

Case study: Descartes Labs delves into Earth’s resources with Cloud SQL for PostgreSQL

Using deep learning to make sense of vast amounts of image data from Google Earth Engine, NASA and other satellites, Descartes Labs delivers invaluable insights about natural resources and human population. They provide timely and accurate forecasts on such things as the growth and health of crops, urban development, the spread of forest fires and the availability of safe drinking water across the globe.

Cloud SQL for PostgreSQL integrates seamlessly with the open-source components that make up Descartes Labs’ environment. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers and developers to detect changes, map trends and quantify differences on the Earth's surface. With ready-to-use data sets and an API, Earth Engine data is core to Descartes Labs’ product. Combining this with NASA data and the popular OpenStreetMap data, Descartes Labs takes full advantage of the open source community.

Descartes Labs’ first application tracks corn crops based on a 13-year historical backtest. It predicts the U.S. corn yield faster and more accurately than the U.S. Department of Agriculture.

Descartes adopted Cloud SQL for PostgreSQL early on because it allowed them to focus on developing applications rather than on mundane database management tasks. “Cloud SQL gives us more time to work on products that provide value to our customers,” said Tim Kelton, Descartes Labs Co-founder and Cloud Architect. “Our individual teams, who are building micro services, can quickly provision a database on Cloud SQL. They don't need to bother compiling Geos, Proj4, GDAL, and Lib2xml to leverage PostGIS. And when PostGIS isn’t needed, our teams use PostgreSQL without extensions or MySQL, also supported by Cloud SQL.”

According to Descartes Labs, Google Cloud Platform (GCP) is like having a virtual supercomputer on demand, without all the usual space, power, cooling and networking issues. Cloud SQL for PostgreSQL is a key piece of the architecture that backs the company’s satellite image analysis applications.
In developing their newest application, GeoVisual Search, the team benefited greatly from automatic storage increases in Cloud SQL for PostgreSQL. “Ever tried to estimate how a compressed 54GB XML file will expand in PostGIS?” Tim Kelton asked. “It’s not easy. We enabled Cloud SQL’s automatic storage increase, which allows the disk to start at 10GB and, in our case, automatically expanded to 387GB. With this feature, we don’t waste money or time by under- or over-allocating disk capacity as we would on a VM.”
Because the team was able to focus on data models rather than on database management, development of the GeoVisual Search application proceeded smoothly. Descartes’ customers can now find the geospatial equivalent of a needle in a haystack: specific objects of interest in map images.

The screenshot below shows a search through two billion map tiles to find wind turbines.
Tim’s parting advice for startups evaluating cloud solutions: “Make sure the solution you choose gives you the freedom to experiment, lets your team focus on product development rather than IT management and aligns with your company’s budget.”

See what GCP can do for you


Sign up for a $300 credit to try Cloud SQL and the rest of GCP. Start with inexpensive micro instances for testing and development. When you’re ready, you can easily scale them up to serve performance-intensive applications. As a bonus, everyone gets the 100% sustained use discount during beta, regardless of usage.

Our partner ecosystem can help you get started with Cloud SQL for PostgreSQL. To streamline data transfer, reach out to Alooma, Informatica, Segment, Stitch, Talend and Xplenty. For help with visualizing analytics data, try ChartIO, iCharts, Looker, Metabase and Zoomdata.
"PostgreSQL is one of Segment’s most popular database targets for our Warehouses product. Analysts and administrators appreciate its rich set of OLAP features and the portability they’re ensured by it being open source. In an increasingly “serverless” world, Google’s Cloud SQL for PostgreSQL offering allows our customers to eschew costly management and operations of their PostgreSQL instance in favor of effortless setup, and the NoOps cost and scaling model that GCP is known for across their product line."   Chris Sperandio, Product Lead, Segment
"At Xplenty, we see steady growth of prospects and customers seeking to establish their data and analytics infrastructure on Google Cloud Platform. Data integration is always a key challenge, and we're excited to support both Google Cloud Spanner and Cloud SQL for PostgreSQL both as data sources as well as targets, to continue helping companies integrate and prepare their data for analytics. With the robustness of Cloud Spanner and the popularity of PostgreSQL, Google continues to innovate and prove it is a world leader in cloud computing."   Saggi Neumann, CTO, Xplenty

No matter how far we take Cloud SQL, we still feel like we’re just getting started. We hope you’ll come along for the ride.


Google Cloud Platform: your Next home in the cloud



San Francisco: Today at Google Cloud Next ‘17, we’re thrilled to announce new Google Cloud Platform (GCP) products, technologies and services that will help you imagine, build and run the next generation of cloud applications on our platform.

Bring your code to App Engine, we’ll handle the rest

In 2008, we launched Google App Engine, a pioneering serverless runtime environment that lets developers build web apps, APIs and mobile backends at Google scale and speed. For nearly 10 years, some of the most innovative companies have built applications on App Engine that serve their users all over the world. Today, we’re excited to announce the general availability of a major expansion of App Engine, centered around openness and developer choice, that keeps App Engine’s original promise to developers: bring your code, we’ll handle the rest.

App Engine now supports Node.js, Ruby, Java 8, Python 2.7 or 3.5, Go 1.8, plus PHP 7.1 and .NET Core, both in beta, all backed by App Engine’s 99.95% SLA. Our managed runtimes make it easy to start with your favorite languages and use the open source libraries and packages of your choice. Need something different than what’s out of the box? Break the glass and go beyond our managed runtimes by supplying your own Docker container, which makes it simple to run any language, library or framework on App Engine.

The future of cloud is open: take your app to go by having App Engine generate a Docker container containing your app and deploying it to any container-based environment, on or off GCP. App Engine gives developers an open platform while still providing a fully managed environment where developers focus only on code and on their users.


Cloud Functions public beta at your service

Up one level from fully managed applications, we’re launching Google Cloud Functions into public beta. Cloud Functions is a completely serverless environment to build and connect cloud services without having to manage infrastructure. It’s the smallest unit of compute offered by GCP and is able to spin up a single function and spin it back down instantly. Because of this, billing occurs only while the function is executing, metered to the nearest one hundred milliseconds.

Cloud Functions is a great way to build lightweight backends, and to extend the functionality of existing services. For example, Cloud Functions can respond to file changes in Google Cloud Storage or incoming Google Cloud Pub/Sub messages, perform lightweight data processing/ETL jobs or provide a layer of logic to respond to webhooks emitted by any event on the internet. Developers can securely invoke Cloud Functions directly over HTTP right out of the box without the need for any add-on services.

Cloud Functions is also a great option for mobile developers using Firebase, allowing them to build backends integrated with the Firebase platform. Cloud Functions for Firebase handles events emitted from the Firebase Realtime Database, Firebase Authentication and Firebase Analytics.

Growing the Google BigQuery universe: introducing BigQuery Data Transfer Service

Since our earliest days, our customers have turned to Google to promote their advertising messages around the world, at a scale that was previously unimaginable. Today, those same customers want to use BigQuery, our powerful data analytics service, to better understand how users interact with those campaigns. To that end, we’ve developed deeper integration between Google's broader product family and GCP with the public beta of the BigQuery Data Transfer Service, which automates data movement from select Google applications directly into BigQuery. With the BigQuery Data Transfer Service, marketing and business analysts can easily export data from AdWords, DoubleClick and YouTube directly into BigQuery, making it available for immediate analysis and visualization using the extensive set of tools in the BigQuery ecosystem.

Slashing data preparation time with Google Cloud Dataprep

Our goal is to make it easy to import data into BigQuery, while keeping it secure. Google Cloud Dataprep is a new serverless, browser-based service that can dramatically cut the time it takes to prepare data for analysis, which represents about 80% of the work that data scientists do. It intelligently connects to your data source, identifies data types, detects anomalies and suggests data transformations. Data scientists can then visualize their data schemas until they're happy with the proposed data transformations. Dataprep then creates a data pipeline in Google Cloud Dataflow, cleans the data and exports it to BigQuery or other destinations. In other words, you can now prepare structured and unstructured data for analysis with clicks, not code. For more information on Dataprep, apply to be part of the private beta. You'll also find more news about our latest database and data analytics capabilities here and here.

Hello, (more) world

Not only are we working hard on bringing you new products and capabilities, we also want your users to access them quickly and securely, wherever they may be. That’s why we’re announcing three new Google Cloud Platform regions: California, Montreal and the Netherlands. These will bring the total number of Google Cloud regions up from six today to more than 17 locations in the future. The new regions will deliver lower latency for customers in adjacent geographic areas, increased scalability and more disaster-recovery options. Like other Google Cloud regions, the new regions will feature a minimum of three zones, benefit from Google’s global, private fibre network and offer a complement of GCP services.

Supercharging our infrastructure . . .

Customers run demanding workloads on GCP, and we're constantly striving to improve the performance of our VMs. For instance, we were honored to be the first public cloud provider to run Intel Skylake, a custom Xeon chip that delivers significant enhancements for compute-heavy workloads and a larger range of VM memory and CPU options.

We’re also doubling the number of vCPUs you can run in an instance from 32 to 64 and now offering up to 416GB of memory, which customers have asked us for as they move large enterprise applications to Google Cloud. Meanwhile, we recently began offering GPUs, which provide substantial performance improvements to parallel workloads like training machine learning models.

To continually unlock new energy sources, Schlumberger collects large quantities of data to build detailed subsurface earth models based on acoustic measurements, and GCP compute infrastructure has the unique characteristics that match Schlumberger's needs to turn this data into insights. High performance scientific computing is integral to its business, so GCP's flexibility is critical.

Schlumberger can mix and match GPUs and CPUs and dynamically create different shapes and types of virtual machines, choosing memory and storage options on demand.

"We are now leveraging the strengths offered by cloud computation stacks to bring our data processing to the next level. Ashok Belani, Executive Vice President Technology, Schlumberger

. . . without supercharging our prices

We aim to keep costs low. Today we announced Committed Use Discounts that provide up to 57% off list price on Google Compute Engine, in exchange for a one- or three-year purchase commitment. Committed Use Discounts are based on the total amount of CPU and RAM you purchase, and give you the flexibility to use different instance and machine types; they apply automatically, even if you change instance types (or sizes). There are no upfront costs with Committed Use Discounts, and they are billed monthly. What’s more, we automatically apply Sustained Use Discounts to any additional usage above a commitment.

We're also dropping prices for Compute Engine. The specific cuts vary by region. Customers in the United States will see a 5% price drop; customers in Europe will see a 4.9% drop and customers using our Tokyo region an 8% drop.

Then there’s our improved Free Tier. First, we’ve extended the free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and on your own schedule. Second, we’re introducing new Always Free products: non-expiring usage limits that you can use to test and develop applications at no cost. New additions include Compute Engine, Cloud Pub/Sub, Google Cloud Storage and Cloud Functions, bringing the number of Always Free products up to 15 and broadening the horizons for developers getting started on GCP. Visit the Google Cloud Platform Free Tier page today for further details, terms, eligibility and to sign up.

We'll be diving into all of these product announcements in much more detail in the coming days, so stay tuned!