
Skylake processors now available in seven regions



Earlier this year, the Intel Xeon Scalable server processor (codenamed Skylake) became generally available on Google Compute Engine, providing you with the most powerful and technically advanced processors in the cloud. Paired with Compute Engine, Skylake gives you finer-grained control over your VMs, the ability to select your host CPU platform for any of our predefined and custom machine types, and new machine types that extend to 96 vCPUs and 1.4TB of memory per instance.

And now, we offer Skylake in our South Carolina, Mumbai and Singapore regions, joining Iowa, Oregon, Belgium and Taiwan, and bringing the total number of GCP regions with Skylake to seven globally. We’re also lowering the cost of Skylake VMs by 6-10 percent, depending on your specific machine configuration. With this price drop, we’re making it easier for you to choose the best platform for your applications. Just select the number of cores and amount of RAM you need and get all the computational power that Skylake on Compute Engine makes possible.
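
If you script your infrastructure rather than use the console, requesting Skylake is a single extra field on the instance body. Here's a minimal sketch using the Compute Engine API Python client; the project ID, zone, image and machine shape below are placeholder assumptions, not recommendations:

# Minimal sketch: create a custom machine type VM and request Skylake
# as the minimum host CPU platform. Project, zone, image and machine
# shape are placeholder assumptions.
import googleapiclient.discovery

compute = googleapiclient.discovery.build('compute', 'v1')

project = 'my-project'   # assumed project ID
zone = 'us-east1-b'      # a zone in the South Carolina region

body = {
    'name': 'skylake-demo',
    # Custom machine type: 8 vCPUs and 32 GB (32768 MB) of memory.
    'machineType': 'zones/%s/machineTypes/custom-8-32768' % zone,
    # Ask for Skylake (or newer) as the host CPU platform.
    'minCpuPlatform': 'Intel Skylake',
    'disks': [{
        'boot': True,
        'autoDelete': True,
        'initializeParams': {
            'sourceImage': 'projects/debian-cloud/global/images/family/debian-9',
        },
    }],
    'networkInterfaces': [{'network': 'global/networks/default'}],
}

operation = compute.instances().insert(project=project, zone=zone, body=body).execute()
print('Started instance insert:', operation['name'])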

Already, what you’ve done with this additional computational power has been amazing! In the last six months, thousands of Compute Engine customers have used Skylake to run their applications faster, achieve better performance, and use new instruction sets like AVX-512 to optimize their applications. Here’s what a few customers have to say about taking advantage of Compute Engine’s Skylake processors.

Alpha Vertex develops cognitive systems that provide advanced analytical capabilities to the financial community. Using Compute Engine 64-core machine types with Skylake for its ML systems allowed the company to cut training times for its machine learning models.
“Using Google Cloud Platform, our Machine Learning (ML) training times have improved by 15 percent. We were able to build a Kubernetes cluster of 150 64-core Skylake processors in 15 minutes.” 
Michael Bishop, CTO, Alpha Vertex Inc

Milk VFX runs thousands of preemptible cores using one of the larger machine types (n1-highcpu-96) to create innovative and complex sequences for high-end television and feature films. At this scale, better performance means decreasing runtimes by days. With 96-vCPU instances and preemptible machines, they were able to reduce the number of nodes they needed and decrease their costs.
“By using Skylake with larger core machines we've been able to process more data faster, enabling our artists to be more productive, creative and cost effective. With preemptible machines we've cut the cost even more, so much so that we're already seeing savings made in such a short timeframe. More importantly, for the past 12 weeks since we started rendering all 3D on the GCP, we have met our deadlines without any late nights or weekend work and everyone is really happy.” 
Dave Goodbourn, Head of Systems, Milk Visual Effects

QuantConnect breaks down barriers to algorithmic trading by providing market data and a cluster computer so any engineer can quickly design an algorithmic trading system. They're constantly seeking the latest infrastructure innovations on Compute Engine.
“Our work at QuantConnect is constantly pushing the boundaries of cloud computing. When we learned of the addition of Skylake processors to the Google compute platform, we quickly joined as one of the early beta testers and converted infrastructure to harness it. The Skylake vCPUs improved our web-compiler speeds by 10 to 15 percent, making a measurable improvement to our user coding experience and increasing user satisfaction overall.” 
Jared Broad, Founder, QuantConnect

We're committed to making all our infrastructure innovations accessible to all Compute Engine customers. To start using Skylake processors in Compute Engine today, sign up for a new GCP account and get $300 in free trial credits to use on Skylake-powered VMs.

Deploying Memcached on Kubernetes Engine: tutorial



Memcached is one of the most popular open source, multi-purpose caching systems. It usually serves as a temporary store for frequently used data to speed up web applications and lighten database loads. We recently published a tutorial that shows how to deploy a cluster of distributed Memcached servers on Kubernetes Engine using Kubernetes and Helm.
Memcached has two main design goals:

  • Simplicity: Memcached functions like a large hash table and offers a simple API to store and retrieve arbitrarily shaped objects by key. 
  • Speed: Memcached holds cache data exclusively in random-access memory (RAM), making data access extremely fast.
Memcached is a distributed system that allows its hash table’s capacity to scale horizontally across a pool of servers. Each Memcached server operates in complete isolation and is unaware of the other servers in the pool. Therefore, the routing and load balancing between the servers must be done at the client level.

The tutorial explains how to effectively deploy Memcached servers to Kubernetes Engine, and describes how Memcached clients can discover the server endpoints and set up load balancing.
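
To make that client-side routing concrete, here's a small sketch (not code from the tutorial) that resolves a headless Kubernetes service to the individual Memcached pod IPs and shards keys across them with the pymemcache library; the service name mycache-memcached is an assumption:

# Hypothetical sketch of client-side routing across Memcached pods.
# Assumes a headless Kubernetes service named 'mycache-memcached' whose
# DNS name resolves to one A record per Memcached pod.
import socket
from pymemcache.client.hash import HashClient

_, _, pod_ips = socket.gethostbyname_ex(
    'mycache-memcached.default.svc.cluster.local')

# HashClient hashes each key to one of the servers, giving client-side
# routing and load balancing across the pool.
client = HashClient([(ip, 11211) for ip in pod_ips])

client.set('greeting', 'hello from Kubernetes Engine')
print(client.get('greeting'))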

The tutorial also explains how to improve the system by enabling connection pooling with Mcrouter, a powerful open source Memcached proxy. Advanced optimization techniques are also discussed to reduce latency between the proxies and Memcached clients.

Check out the step-by-step tutorial for all the details on this solution. We hope this will inspire you to deploy caching servers to speed up your applications!

Introducing our new developer YouTube Series: “Build Out”

Posted by Reto Meier & Colt McAnlis, Developer Advocates

Ever found yourself trying to figure out the right way to combine mobile, cloud, and web technologies, only to be lost in the myriad of available offerings? It can be challenging to know the best way to combine all the options to build products that solve problems for your users.

That's why we created Build Out, a new YouTube series where real engineers face off building fake products.

Each month we (Reto Meier and Colt McAnlis) will present competing architectures to help show how Google's developer products can be combined to solve challenging problems for your users. Each solution incorporates a wide range of technologies, including Google Cloud, Android, Firebase, and TensorFlow (just to name a few).

Since we're engineers at heart, we enjoy a challenge—so each solution goes well past minimum viable product, and explores some of the more advanced possibilities available to solve the problem creatively.

Now, here's the interesting part. When we're done presenting, you get to decide which of us solved the problem better, by posting a comment to the video on YouTube. If you've already got a better solution—or think you know one—tell us about it in the comments, or respond with your own Build Out video to show us how it's done!

Episode #1: The Smart Garden.

In which we explore designs for gardens that care for themselves. Each design must be fully autonomous, learn from experience, and scale from backyard up to large-scale commercial gardens.

You can get the full technical details on each Smart Garden solution in this Medium article, including alternative approaches and best practices.

You can also listen to the Build Out Rewound Podcast, to hear us discuss our choices.

Introducing Dialogflow Enterprise Edition, a new way to build voice and text conversational apps



From chatbots to IoT devices, conversational apps provide a richer and more natural experience for users. Dialogflow (formerly API.AI) was created for exactly that purpose — to help developers build interfaces that offer engaging, personal interactions.

We’ve seen hundreds of thousands of developers use Dialogflow to create conversational apps for customer service, commerce, productivity, IoT devices and more. Developers have consistently asked us to add enterprise capabilities, which is why today we’re announcing the beta release of Dialogflow Enterprise Edition. The enterprise edition expands on all the benefits of Dialogflow, offering greater flexibility and support to meet the needs of large-scale businesses. In addition, we're also announcing speech integration within Dialogflow, enabling developers to build rich voice-based applications.

Here’s a little more on what Dialogflow offers:

  • Conversational interaction powered by machine learning: Dialogflow uses natural language processing to build conversational experiences faster and iterate more quickly. Provide a few examples of what a user might say and Dialogflow will build a unique model that can learn what actions to trigger and what data to extract so it provides the most relevant and precise responses to your users.
  • Build once and deploy everywhere: Use Dialogflow to build a conversational app and deploy it on your website, your app or 32 different platforms, including the Google Assistant and other popular messaging services. Dialogflow also supports multiple languages and multilingual experiences so you can reach users around the world.
  • Advanced fulfillment options: Fulfillment defines the corresponding action in response to whatever a user says, such as processing an order for a pizza or triggering the right answer to your user's question. Dialogflow allows you to connect to any webhook for fulfillment whether it's hosted in the public cloud or on-premises. Dialogflow’s integrated code editor allows you to code, test and implement these actions directly within Dialogflow's console.
  • Voice control with speech recognition: Starting today, Dialogflow enables your conversational app to respond to voice commands or voice conversations. It's available within a single API call, combining speech recognition with natural language understanding (see the sketch below). 
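
As a rough illustration of that single call, the sketch below uses the Dialogflow Python client to send a short audio clip to detect_intent and read back both the transcript and the matched response; the project ID, session ID and audio file are placeholder assumptions:

# Hypothetical sketch: one detect_intent call that performs speech
# recognition and natural language understanding together.
# Project ID, session ID and audio file are placeholders.
import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path('my-project', 'my-session')

audio_config = dialogflow.types.InputAudioConfig(
    audio_encoding=dialogflow.enums.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    language_code='en-US',
    sample_rate_hertz=16000)
query_input = dialogflow.types.QueryInput(audio_config=audio_config)

with open('query.wav', 'rb') as audio_file:
    input_audio = audio_file.read()

response = session_client.detect_intent(
    session=session, query_input=query_input, input_audio=input_audio)

print('Heard:', response.query_result.query_text)
print('Reply:', response.query_result.fulfillment_text)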


Dialogflow Enterprise Edition also offers:

  • Google Cloud Platform Terms of Service: Dialogflow Enterprise Edition is covered by the Google Cloud Platform Terms of Service, including the Data Privacy and Security Terms. Enterprise Edition users are also eligible for Cloud Support packages, and the Enterprise Edition will soon provide SLAs with committed availability levels.
  • Flexibility and scale: Dialogflow Enterprise Edition offers higher default quotas so it’s easier to scale your app up or down based on user demand.
  • Unlimited pay-as-you-go voice support: While both the standard and enterprise editions now allow your conversational app to detect voice commands or respond to voice conversations, Dialogflow Enterprise Edition offers unlimited pay-as-you-go voice support.

Companies such as Uniqlo, PolicyBazaar and Strayer University have already used Dialogflow to design and deploy conversational experiences.

Creating new online shopping experiences for Uniqlo


UNIQLO is a modern Japanese retailer that operates nearly 1,900 stores worldwide. The company integrated a chatbot into its mobile app to provide quick, relevant answers to a range of customer questions, regardless of whether customers are shopping online or in-store. This makes the shopping experience easier and more enjoyable. Since deploying the chatbot, 40% of users have interacted with it on a weekly basis.
“Our shopping chatbot was developed using Dialogflow to offer a new type of shopping experience through a messaging interface, with responses continually being improved through machine learning. Going forward, we’re also looking to expand the functionality to include voice recognition and multiple languages.” 
Shinya Matsuyama, Director of Global Digital Commerce, Uniqlo

Changing the way we buy insurance with PolicyBazaar


PolicyBazaar is the leading insurance marketplace in India, founded in 2008 to educate consumers, enable easy comparisons and make purchasing insurance products simple. The company today hosts over 80 million visitors yearly and records nearly 150,000 transactions a month.

Using Dialogflow Enterprise Edition, PolicyBazaar created and deployed a conversational chatbot assistant, PBee, to better serve its visitors and transform the way customers purchase insurance online. The company has been using the logging and training module to track top customer requests and improve fulfillment capabilities. In just a few months, PBee has come to handle over 60% of customer queries over chat, resulting in faster fulfillment of requests from its users.

Since deploying the chatbot, the company has seen a five-fold increase in customers using their chat interface for auto insurance, and chat now contributes to 40% of the company's auto insurance sales.
“Dialogflow is by far the best platform for text-based conversational chatbots. With it, we derive all the benefits of machine learning without restrictions on the frontend. Through our chatbot, we are now closing over 13,000 sales totaling a premium of nearly $2 million (USD) every month and growing at a 30% month-over-month rate.”  
Ashish Gupta, CTO & CPO, Policybazaar.com

For more on the differences between the standard and the enterprise editions of Dialogflow, we recommend reading our documentation.

We look forward to seeing what you'll build during our public beta. To learn more about Dialogflow Enterprise Edition, visit our product page.

How Qubit and GCP helped Ubisoft create personalized customer experiences



Editor’s note: Today’s blog post comes from Alex Olivier, product manager at Qubit. He’ll be taking us through the solution Qubit provided for Ubisoft, one of the world’s largest gaming companies, to help them personalize customer experiences through data analysis.

Our platform helps brands across a range of sectors — from retail and gaming to travel and hospitality — deliver a personalized digital experience for users. To do so, we analyze thousands of data points throughout a customer’s journey, taking the processing burden away from our clients. This insight prompts our platform to make a decision — for example, including a customer in a VIP segment, or identifying a customer’s interest in a certain product — and adapts the visitor’s experience accordingly.

As one of the world's largest gaming companies, Ubisoft faced a problem that challenges many enterprises: a data store so big it was difficult and time-consuming to analyze. “Data took between fifteen and thirty minutes to process,” explained Maxime Bosvieux, EMEA Ecommerce Director at Ubisoft. “This doesn’t sound like much, but the modern customer darts from website to website, and if you’re unable to provide them with the experience they’re looking for, when they’re looking for it, they’ll choose the competitor who can.” That’s when they turned to Qubit and Google Cloud Platform.

A cloud native approach.


From early on, we made the decision to be an open ecosystem so as to provide our clients and partners with flexibility across technologies. When designing our system, we saw that the rise of cloud computing could transform not only how platform companies like ours process data, but also how they interface with customers. By providing Cloud-native APIs across the stack, our clients could seamlessly use open source tools and utilities with Qubit’s systems that run on GCP. Many of these tools interface with gsutil via the command-line, call BigQuery, or even upload to Cloud Storage buckets via CyberDuck.

We provision and provide our clients access to their own GCP project. The project contains all data processed and stored from their websites, apps and back-end data sources. Clients can then access both batch and streaming data, be it a user's predicted preferred category, a real-time calculation of lifetime value, or which customer segment the user belongs to. A client can access this data within seconds, regardless of their site’s traffic volume at that moment.


Bringing it all together for Ubisoft.


One of the first things Ubisoft realized is that they needed access to all of their data, regardless of the source. Qubit Live Tap gave Ubisoft access to the full take of their data via BigQuery (and through BI tools like Google Analytics and Looker). Our system manages all data processing and schema management, and reports out actionable next steps. This helps speed up the process of understanding the customer in order to provide better personalization. Using BigQuery’s scaling abilities, Live Tap generates machine learning and AI-driven insights for clients like Ubisoft. This same system also lets them access their data in other BI and analytics tools such as Google Data Studio.

We grant access to clients like Ubisoft through a series of views in their project that point back to their master data store. The BigQuery IAM model (permissions provisioning for shared datasets) allows views to be authorized across multiple projects, removing the need to do batch copies between instances, which might cause some data to become stale. As Qubit streams data into the master tables, the views have direct access to it: analysts who perform queries in their own BigQuery project get access to the latest, real-time data.
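
As a generic sketch of that pattern (with made-up project and dataset names, not Qubit's actual setup), a view in the client's project can be created and then authorized against the source dataset using the BigQuery Python client:

# Hypothetical sketch: share live data across projects with an authorized view.
# All project, dataset and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project='client-project')

# 1. Create a view in the client's own dataset that selects from the
#    master table in the data provider's project.
view = bigquery.Table(client.dataset('analytics').table('events_view'))
view.view_query = 'SELECT * FROM `master-project.master_dataset.events`'
view = client.create_table(view)

# 2. In the master dataset, authorize the view so queries against it can
#    read the underlying tables without copying any data.
source_client = bigquery.Client(project='master-project')
dataset = source_client.get_dataset(source_client.dataset('master_dataset'))
entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(None, 'view', view.reference.to_api_repr()))
dataset.access_entries = entries
source_client.update_dataset(dataset, ['access_entries'])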

Additionally, because the project provided is a complete GCP environment, clients like Ubisoft can also provision additional resources. We have clients who create their own Dataproc clusters, or import data provided by Qubit in BigQuery or via a PubSub topic to perform additional analysis and machine learning in a single environment. This process avoids the data wrangling problems commonly encountered in closed systems.

By combining Google Cloud Dataflow, Bigtable and BigQuery, we’re able to process vast amounts of data quickly and at petabyte scale. During a typical month, Qubit’s platform will provide personalized experiences for more than 100 million users, surface 28 billion individual visitor experiences from ML-derived conclusions on customer data and use AI to simulate more than 2.3 billion customer journeys.

All of this made a lot of sense to Ubisoft. “We’re a company famous for innovating quickly and pushing the limits of what can be done,” Maxime Bosvieux told us. “That requires stable and robust technology that leverages the latest in artificial intelligence to build our segmentation and personalization strategies.”

Helping more companies move to the cloud with effective and efficient migrations.


We’re thrilled that the infrastructure we built with GCP has helped clients like Ubisoft scale data processing far beyond previous capabilities. Our integration into the GCP ecosystem is making this scalability even more attractive to organizations switching to the cloud. While porting data to a new provider can be daunting, we’re helping our clients make a more manageable leap to GCP.

Monitor and manage your costs with Cloud Platform billing export to BigQuery



The flexibility and scalability of the cloud means that your usage can fluctuate dramatically from day to day with demand. And while you always pay only for what you use, customers often ask us to help them better understand their bill.

A prerequisite for understanding your bill is better access to detailed usage and billing data. So today, we are excited to announce the general availability of billing export to BigQuery, our data warehouse service, enabling a more granular and timely view into your GCP costs than ever before.

Billing export to BigQuery is a new and improved version of our existing billing export to CSV/JSON files, and as the name implies, it exports your cloud usage data directly into a BigQuery dataset. Once the data is there, you can write simple SQL queries in BigQuery, visualize your data in Data Studio, or programmatically export the data into other tools to analyze your spend.

New billing data is exported automatically into the dataset as it becomes available, usually multiple times per day. BigQuery billing export also contains a few new features to help you organize your data:
  • User labels to categorize and track costs 
  • Additional product metadata to organize by GCP services: 
    • Service description 
    • Service category 
    • SKU ID to uniquely identify each resource type 
  • Export time to help organize cost by invoice 

Getting started with billing export to BigQuery 


It’s easy to export billing data into BigQuery and start analyzing it. The first step is to enable the export, which begins to build your billing dataset, following these setup instructions. Note that you need Billing Admin permissions in GCP to enable export, so check that you have the appropriate permissions or work with your Billing Admin.

Once you have billing export set up, the data will automatically start being populated within a few hours. Your BigQuery dataset will continue to automatically update as new data is available.


NOTE: Your BigQuery dataset only reflects costs incurred from the date you set up billing export; we will not backfill billing data at this time. While our existing CSV and JSON export features continue to remain available in their current format, we strongly encourage you to enable billing export to BigQuery as early as possible to build out your billing dataset, and to take advantage of the more granular cost analysis it allows.

Querying the billing export data


Now that you've populated your dataset, you can start the fun part: data analysis. You can export the full dataset, complete with new elements such as user labels, or write queries against the data to answer specific questions. Here are a couple of simple examples of how you might use BigQuery queries on exported billing data.

Query every row without grouping


The most granular view of your billing costs comes from querying every row without grouping. This example assumes that all other fields (project, product, and so on) are the same across rows, so only labels and resource type vary.

SELECT
     resource_type,
     TO_JSON_STRING(labels) as labels,
     cost as cost
FROM `project.dataset.table`;

Group by label map as a JSON string 

This is a quick and easy way to break down cost by each label combination.

SELECT
     TO_JSON_STRING(labels) as labels,
     sum(cost) as cost
FROM `project.dataset.table`
GROUP BY labels;

You can see more query examples or write your own.
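
And if you prefer to run these from code rather than the BigQuery UI, a minimal sketch with the BigQuery Python client (the table name is a placeholder) looks like this:

# Minimal sketch: run the label-grouping query above from Python.
# `project.dataset.table` is a placeholder for your billing export table.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT TO_JSON_STRING(labels) AS labels, SUM(cost) AS cost
FROM `project.dataset.table`
GROUP BY labels
"""

for row in client.query(query).result():
    print(row.labels, row.cost)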

Visualize Spend Over Time with Data Studio


Many business intelligence tools natively integrate with BigQuery as the backend datastore. With Data Studio, you can easily visualize your BigQuery billing data, and with a few clicks set up a dashboard and get up-to-date billing reports throughout the day, using labels to slice and dice your GCP bill.


You can find detailed instructions about how to copy and set up a Data Studio template here: Visualize spend over time with Data Studio

Here at Google Cloud, we’re all about making your cloud costs as transparent and predictable as possible. To learn more about billing export to BigQuery, check out the documentation, and let us know how else we can help you understand your bill, by sending us feedback.

Intel Performance Libraries and Python Distribution enhance performance and scaling of Intel® Xeon® Scalable (‘Skylake’) processors on GCP



Google was pleased to be the first cloud vendor to offer the latest-generation Intel® Xeon® Scalable (‘Skylake’) processors in February 2017. With their higher core counts, improved on-chip interconnect with the new Intel® Mesh Architecture, enhanced memory subsystems and Intel® Advanced Vector Extensions-512 (AVX-512) functional units, these processors are a great fit for demanding HPC applications that need high floating-point operation rates (FLOPS) and the operand bandwidth to feed the processing pipelines.
New Intel® Mesh Architecture for Xeon Scalable Processors

Skylake raises the performance bar significantly, but a processor is only as powerful as the software that runs on it. So today we're announcing that the Intel Performance Libraries are now freely available for Google Cloud Platform (GCP) Compute Engine. These libraries, which include the Intel® Math Kernel Library, Intel® Data Analytics Acceleration Library, Intel® Integrated Performance Primitives, Intel® Threading Building Blocks, and Intel® MPI Library, integrate key communication and computation kernels that have been tuned and optimized for this latest Intel processor family, in terms of both sequential pipeline flow and parallel execution. These components are useful across all the Intel Xeon processor families in GCP, but they're of particular interest for applications that can use them to fully exploit the scale of 96-vCPU instances on Skylake-based servers.

Scaling out to Skylake can result in dramatic performance improvements. This parallel SGEMM matrix multiplication benchmark result, run by Intel engineers on GCP, shows the advantage obtained by going from a 64 vCPU GCP instance on an Intel® Xeon processor E5 (“Broadwell”) system to an instance with 96 vCPUs on Intel Xeon Scalable (“Skylake”) processors, using the Intel® MKL on GCP. Using half or fewer of the available vCPUs reduces hyper-thread sharing of AVX-512 functional units and leads to higher efficiency.
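
For a feel of what such a benchmark exercises, here's a minimal single-precision matrix multiplication timing in Python; with an MKL-backed numpy (as in the Intel Distribution for Python), the dot call dispatches to the tuned SGEMM kernels. The matrix size and the GFLOPS arithmetic are illustrative only:

# Illustrative SGEMM timing; matrix size is arbitrary.
import time
import numpy as np

n = 8192
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.time()
c = a.dot(b)  # single-precision GEMM; an MKL-backed numpy uses all vCPUs
elapsed = time.time() - start

flops = 2.0 * n ** 3  # multiply-add count for an n x n GEMM
print('SGEMM %dx%d: %.1f GFLOPS' % (n, n, flops / elapsed / 1e9))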

In addition to pre-compiled performance libraries, GCP users now have free access to the Intel® Distribution for Python, a distribution of both Python 2 and Python 3 that uses the Intel instruction features and pipelines for maximum effect.

The following chart shows example performance improvements delivered by the optimized scikit-learn K-means functions in the Intel® Distribution for Python over the stock open source Python distribution.
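
For reference, a scikit-learn K-means run like the minimal sketch below (synthetic data, arbitrary parameters) works unchanged on either the stock distribution or the Intel Distribution for Python; only the speed of the underlying implementation differs:

# Illustrative K-means timing; data and parameters are arbitrary.
import time
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(1000000, 10).astype(np.float32)

start = time.time()
KMeans(n_clusters=8, n_init=3, random_state=0).fit(X)
print('k-means fit took %.1f seconds' % (time.time() - start))
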
“We’re delighted that Google Cloud Platform users will experience the best of Intel® Xeon® Scalable processors using the Intel® Distribution for Python and the Intel performance libraries Intel® MKL, Intel® DAAL, Intel® TBB, Intel® IPP and Intel® MPI. These software tools are carefully tuned to deliver the workload-optimized performance benefits of the advanced processors that Google has deployed, including 96 vCPUs and workload-optimized vector capabilities provided by Intel® AVX-512.”  
Sanjiv Shah, VP and GM, Software Development tools for technical, enterprise, and cloud computing at Intel
For more information about Intel and GCP, or to access the installation instructions for the Intel Performance Library and Python packages, visit the Intel and Google Cloud Platform page.

With Multi-Region support in Cloud Spanner, have your cake and eat it too



Today, we’re thrilled to announce the general availability of Cloud Spanner Multi-Region configurations. With this release, we’ve extended Cloud Spanner’s transactions and synchronous replication across regions and continents. That means no matter where your users may be, apps backed by Cloud Spanner can read and write up-to-date (strongly consistent) data globally and do so with minimal latency for end users. In other words, your app now has an accurate, consistent view of the data it needs to support users whether they’re around the corner or around the globe. Additionally, when running a Multi-Region instance, your database is able to survive a regional failure.

This release also delivers an industry-leading 99.999% availability SLA with no planned downtime. That’s 10x less downtime (< 5min / year) than database services with four nines of availability.

Cloud Spanner is the first and only enterprise-grade, globally distributed and strongly consistent database service built specifically for the cloud that combines the benefits and familiarity of relational database semantics with non-relational scale and performance. It now supports a wider range of application workloads, from a single node in a single region to massive instances that span regions and continents. At any scale, Cloud Spanner behaves the same, delivering a single database experience.


Since we announced the general availability of Cloud Spanner in May, customers from startups to enterprises have rethought what a database can do, and have been migrating their mission-critical production workloads to it. For example, Mixpanel, a business analytics service, moved its sharded MySQL database to Cloud Spanner to handle user-ID lookups when processing events from its customers' end users' web browsers and mobile devices.

No more trade-offs


For years, developers and IT organizations were forced to make painful compromises between the horizontal scalability of non-relational databases and the transactions, structured schema and complex SQL queries offered by traditional relational databases. With the increase in volume, variety and velocity of data, companies had to layer additional technologies and scale-related workarounds to keep up. These compromises introduced immense complexity and only addressed the symptoms of the problem, not the actual problem.

This summer, we announced an alliance with marketing automation provider Marketo, Inc., which is migrating to GCP and Cloud Spanner. Companies around the world rely on Marketo to orchestrate, automate, and adapt their marketing campaigns via the Marketo Engagement Platform. To meet the demands of its customers today and tomorrow, Marketo needed to be able to process trillions of activities annually, creating an extreme-scale big data challenge. When it came time to scale its platform, Marketo did what many companies do: it migrated to a non-relational database stack. But if your data is inherently transactional, going to a system without transactions while keeping data ordered and readers consistent is very hard.

"It was essential for us to have order sequence in our app logic, and with Cloud Spanner, it’s built in. When we started looking at GCP, we quickly identified Cloud Spanner as the solution, as it provided relational semantics and incredible scalability within a managed service. We hadn’t found a Cloud Spanner-like product in other clouds. We ran a successful POC and plan to move several massive services to Cloud Spanner. We look forward to Multi-Region configurations, as they give us the ability to expand globally and reduce latencies for customers on the other side of the world" 
— Manoj Goyal, Marketo Chief Product Officer

Mission-critical high availability


For global businesses, reliability is expected, but maintaining that reliability while also rapidly scaling can be a challenge. Evernote, a cross-platform app for individuals and teams to create, assemble, nurture and share ideas in any form, migrated to GCP last year. In the coming months, it will mark the next phase of its move to the cloud by migrating to a single Cloud Spanner instance to manage more than 8 billion pieces of its customers’ notes, replacing over 750 MySQL instances in the process. Cloud Spanner Multi-Region support gives Evernote the confidence it needs to make this bold move.
"At our size, problems such as scalability and reliability don't have a simple answer, Cloud Spanner is a transformational technology choice for us. It will give us a regionally distributed database storage layer for our customers’ data that can scale as we continue to grow. Our whole technology team is excited to bring this into production in the coming months."
Ben McCormack, Evernote Vice President of Operations

Strong consistency with scalability and high performance


Cloud Spanner delivers scalability and global strong consistency so apps can rely on an accurate and ordered view of their data around the world with low latency. Redknee, for example, provides enterprise software to mobile operators to help them charge their subscribers for their data, voice and texts. Its customers' network traffic currently runs through traditional database systems that are expensive to operate and come with processing capacity limitations.
“We want to move from our current on-prem per-customer deployment model to the cloud to improve performance and reliability, which is extremely important to us and our customers. With Cloud Spanner, we can process ten times more transactions per second (using a current benchmark of 55k transactions per second), allowing us to better serve customers, with a dramatically reduced total cost of ownership." 
— Danielle Royston, CEO, Redknee

Revolutionize the database admin and management experience


Standing up a globally consistent, scalable relational database instance is usually prohibitively complex. With Cloud Spanner, you can create an instance in just a few clicks and then scale it simply using the Google Console or programmatically. This simplicity revolutionizes database administration, freeing up time for activities that drive the business forward, and enabling new and unique end-user experiences.
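
Programmatically, the same few steps look roughly like this sketch with the Cloud Spanner Python client; the instance configuration, node count, and schema are placeholder assumptions:

# Hypothetical sketch: create a Multi-Region instance, a database, and query it.
# Project, instance configuration, node count and schema are placeholders.
from google.cloud import spanner

client = spanner.Client()

instance = client.instance(
    'demo-instance',
    configuration_name='projects/my-project/instanceConfigs/nam3',
    node_count=3,
    display_name='Demo multi-region instance')
instance.create().result()  # wait for the instance to be ready

database = instance.database(
    'demo-db',
    ddl_statements=[
        'CREATE TABLE Greetings (Id INT64, Msg STRING(MAX)) PRIMARY KEY (Id)'])
database.create().result()

# Strongly consistent read, no matter which region serves it.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql('SELECT Id, Msg FROM Greetings'):
        print(row)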

A different way of thinking about databases


We believe Cloud Spanner is unique among databases and cloud database services, offering a global relational database, not just a feature to eventually copy or replicate data around the world. At Google, Spanner powers apps that process billions of transactions per day across many Google services. In fact, it has become the default database internally for apps of all sizes. We’re excited to see what your company can do with Cloud Spanner as your database foundation.

Want to learn more? Check out the many whitepapers discussing the technology behind Cloud Spanner. Then, when you’re ready to get started, follow our Quickstart guide to Cloud Spanner, or Kelsey Hightower’s post How to get started with Cloud Spanner in 5 minutes.

Introducing Certified Kubernetes (and Google Kubernetes Engine!)



When Google launched Kubernetes three years ago, we knew based on our 10 years of experience with Borg how useful it would be to developers. But even we couldn’t have predicted just how successful it would become. Kubernetes is one of the world’s highest velocity open source projects, supported by a diverse community of contributors. It was designed at its heart to run anywhere, and dozens of vendors have created their own Kubernetes offerings.

It's critical to Kubernetes users that their applications run reliably across different Kubernetes environments, and that they can access the new features in a timely manner. To ensure a consistent developer experience across different Kubernetes offerings, we’ve been working with the Cloud Native Computing Foundation (CNCF) and the Kubernetes community to create the Certified Kubernetes Conformance Program. The Certified Kubernetes program officially launched today, and our Kubernetes service is among the first to be certified.

Choosing a Certified Kubernetes platform like ours and those from our partners brings both benefits and peace of mind, especially for organizations with hybrid deployments. With the greater compatibility of Certified Kubernetes, you get:
  • Smooth migrations between on-premises and cloud environments, and a greater ability to split a single workload across multiple environments 
  • Consistent upgrades
  • Access to community software and support resources
The CNCF hosts a complete list of Certified Kubernetes platforms and distributions. If you use a Kubernetes offering that's not on the list, encourage them to become certified as soon as possible!

Putting the K in GKE


One of the benefits of participating in the Certified Kubernetes Conformance Program is being able to use the name “Kubernetes” in your product. With that, we’re taking this opportunity to rename Container Engine to Kubernetes Engine. From the beginning, Container Engine’s acronym has been GKE in a nod to Kubernetes. Now, as a Certified Kubernetes offering, we can officially put the K in GKE.

While the Kubernetes Engine name is new, everything else about the service is unchanged—it’s still the same great managed environment for deploying containerized applications that you trust to run your production environments. To learn more about Kubernetes Engine, visit the product page, or the documentation for a wealth of quickstarts, tutorials and how-tos. And as always, if you’re just getting started with containers and Google Cloud Platform, be sure to sign up for a free trial.

Announcing integration of Altair HPC applications with Google Cloud



Engineering today requires access to unprecedented computing resources to simulate, test and design the products that make modern life possible. Here at Google Cloud, one of our goals is to democratize and simplify access to advanced computing resources and promote the sciences and engineering.

With that, we’re excited to announce a new technology partnership between Google Cloud Platform (GCP), Intel and Altair, a leading software provider for engineering and science applications, including high performance computing (HPC) applications for computer-aided engineering, simulation, product design, Internet of Things and others.

Starting today, you can launch virtual HPC appliances running Altair and other HPC applications on GCP using Altair’s PBScloud.io. PBScloud provides a central command center, a simple user experience, easy deployment, real-time monitoring and resource management for HPC use cases. It also includes features for job submission, job monitoring and result visualization. PBScloud.io also works and orchestrates across multiple public clouds and traditional on-premises deployments.
Altair applications available on GCP via PBScloud.io
Before cloud computing, engineers and scientists were constrained by the limitations of on-premises computing resources and clusters. Long queue times, suboptimal hardware utilization and frustrated users were commonplace. With Google Cloud, you can test your ideas quickly, pay for exactly what you need and only while you need it. Now with Altair’s PBScloud.io, you also have easy, turn-key access to state-of-the-art science and engineering applications on GCP’s advanced, scalable hardware and infrastructure.

Compare, for example, the performance of Altair RADIOSS on Intel’s latest generation Xeon processor codenamed Skylake on Compute Engine vs. its performance on previous generation CPUs. Note that RADIOSS demonstrated product scalability by taking advantage of all 96 vCPUs on GCP.


We’re excited to bring this collaboration to you and even more excited to see what you'll build with Altair’s software on our platform. If you’re at the SC17 conference, be sure to drop by the Google Cloud, Altair and Intel booths for talks and demos, and to chat with us about HPC on Google Cloud.

Check out PBScloud.io and sign up for a GCP trial at no cost today.