Tag Archives: Announcements

Cloud Speech API improves longform audio recognition and adds 30 new language variants

Since the Google Cloud Speech API launched in 2016, businesses have used it to improve speech recognition for everything from voice-activated commands to call center routing to data analytics. Since then, we’ve heard consistently that users would like even more functionality and control. That’s why today we’re announcing Cloud Speech API features that expand support for long-form audio and further extend our language support to help even more customers inject AI into their businesses.

Here’s more on what the updated Cloud Speech API can do:

Word-level timestamps

Our most requested feature has been timestamp information for each word in the transcript. Word-level timestamps let users jump to the moment in the audio where the text was spoken, or display the relevant text while the audio plays. You can find more information on timestamps here.
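For illustration, here is a minimal sketch of pulling word-level timestamps out of a recognition response. The dict below mimics the general JSON shape of a response with word time offsets enabled; treat the exact field names as illustrative rather than authoritative, and see the API reference for the real schema.

```python
# Sketch: extracting (word, start, end) tuples from a Cloud Speech API-style
# response. The response dict is a hand-built stand-in, not real API output.
response = {
    "results": [{
        "alternatives": [{
            "transcript": "hello world",
            "words": [
                {"word": "hello", "startTime": "0.100s", "endTime": "0.500s"},
                {"word": "world", "startTime": "0.600s", "endTime": "1.100s"},
            ],
        }]
    }]
}

def word_timestamps(resp):
    """Flatten word-level timestamps (in seconds) from a response dict."""
    out = []
    for result in resp["results"]:
        # Use the top (most likely) alternative for each result.
        for info in result["alternatives"][0]["words"]:
            start = float(info["startTime"].rstrip("s"))
            end = float(info["endTime"].rstrip("s"))
            out.append((info["word"], start, end))
    return out

for word, start, end in word_timestamps(response):
    print(f"{start:5.2f}-{end:5.2f}  {word}")
```

With tuples like these, jumping the audio player to the moment a word was spoken is a simple seek to `start`.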

Happy Scribe uses Cloud Speech API to power its easy-to-use and affordable voice-to-text transcription service, helping professionals such as reporters and researchers transcribe interviews.
“Having the ability to map the audio to the text with timestamps significantly reduces the time spent proofreading transcripts.”  
 Happy Scribe Co-founder, André Bastie
VoxImplant enables companies to build voice and video applications, including IVR and speech analytics applications.
“Now with Google Cloud Speech API timestamps, we can accurately analyze phone call conversations between two individuals with real-time speech-to-text transcription, helping our customers drive business impact. The ability to easily find the place in a call when something was said using timestamps makes Cloud Speech API much more useful and will save our customers’ time.”  
 VoxImplant CEO, Alexey Aylarov

Support for files up to 3 hours long

To help our users with long-form audio needs, we’re increasing the maximum length of supported files from 80 minutes to 3 hours. Additionally, files longer than 3 hours can be supported on a case-by-case basis by applying for a quota extension through Cloud Support.

Expanded language coverage

Cloud Speech API already supports 89 language varieties. Today, coinciding with the broader announcement this morning, we’re adding 30 more, from Bengali to Latvian to Swahili, covering more than one billion additional speakers. This expanded language support helps Cloud Speech API customers reach users in more countries, for nearly global reach, and it enables users in those countries to use speech to access products and services that until now have never been available to them.

You can find our complete list of supported languages here.

We hope these updates will help our users do more with Cloud Speech API. To learn more, visit Cloud.google.com/speech/.

Now Optimize users can innovate in 37 new languages

It just got a whole lot easier to share Google Optimize with your teams around the world.

Optimize is now available in 37 new languages. Got a team in Thailand? No trouble. Cross-functional partner in Croatia? You're covered. You'll find the full list of supported languages here.

We're always glad to bring our products to more of the world. But in this case, we're extra excited about the way this will help teams collaborate and innovate not just across the office but across the globe.

In this data-rich world, everyone in your company needs to be part of building a culture of growth: a culture that embraces testing and analytics as the best way to learn what customers like most and to improve their experience day by day. Optimize opens the door for innovators at every level to explore how even tiny UI changes can improve results. 

Often those innovators take the form of a small "X-team" — maybe an analyst, a designer, and an engineer working together and totally focused on testing and optimization. With Optimize, a group like that can create changes in minutes instead of days, and they can more easily share that growth mindset and inspire others across their organization.

Now with 37 more languages in play, Optimize makes it possible for many more local teams to take on the role of optimizers, innovators, and culture-changers.

If you have team members who have selected one of the 37 new languages in their Google Account preferences, they'll see Optimize in that language whenever they next sign in. (If you’d like to select a language preference just for Optimize, you can do so in your Optimize user settings at any time.) And if you're happy with your current Optimize language, you're fine: No action is needed.

To learn more about your global language options, visit our help center. Happy optimizing!

Independent research firm names Google Cloud the Insight PaaS Leader

Forrester Research, a leading analyst firm, just named Google Cloud Platform (GCP) the leader in The Forrester Wave™: Insight Platforms-As-A-Service, Q3 2017, its analysis of cloud providers offering Platform as a Service. According to the report, an insight PaaS makes it easier to:

  • Manage and access large, complex data sets
  • Update and evolve applications that deliver insight at the moment of action
  • Update and upgrade technology
  • Integrate and coordinate team member activities

For this Wave, Forrester evaluated eight separate vendors. It looked at 36 evaluation criteria spanning three broad buckets: current offering, strategy and market presence.

Of the eight vendors, Google Cloud’s insight PaaS scored highest for both current offering and strategy.
“Google was the only vendor in our evaluation to offer insight execution features like full machine learning automation with hyperparameter tuning, container management and API management. Google will appeal to firms that want flexibility and extreme scalability for highly competent data scientists and cloud application development teams used to building solutions on PaaS.”  The Forrester Wave: Insight Platforms-As-A-Service, Q3 2017
Our presence in the Insight Platform as a Service market goes way back. We started with a vision for serverless computing back in 2008 with Google App Engine and added serverless data processing in 2010 with Google BigQuery. In 2016 we added machine learning (Cloud Machine Learning Engine) to GCP to help bring the power of TensorFlow (Google’s open source machine learning framework) to everyone. We continue to be amazed by what companies like Snap and The Telegraph are doing with these technologies and look forward to building on these insight services to help you build the amazing applications of tomorrow.

Sign up here to get a complimentary copy of the report.

Professors from Around the World Get Their Students into HFOSS

Over the last four years, instructors from around the world have gathered for the Professors’ Open Source Software Experience (POSSE) workshop to integrate open source concepts into their curricula. At each event, professors make more progress toward providing students with hands-on experience via contributions to humanitarian free and open source software (HFOSS).

This year Google was proud to not only host a workshop at our San Francisco office in April, but also to collaborate with the organizers to bring a POSSE workshop to Europe for the first time.
POSSE workshop leaders, from left to right: Clif Kussmaul (Muhlenberg College), Lori Postner (Nassau Community College), Stoney Jackson (Western New England University), Heidi Ellis (Western New England University), Greg Hislop (Drexel University), and Darci Burdge (Nassau Community College).
The workshop in Italy was led by Dr. Gregory Hislop from Drexel University, and Drs. Heidi Ellis and Stoney Jackson from Western New England University, and brought together 20 instructors from Germany, Hungary, India, Italy, Macedonia, Qatar, Spain, Swaziland, the United Kingdom, and the United States. This was the most geographically diverse workshop to date!
Group photos in San Francisco, USA on April 22, 2017 (left) and Bologna, Italy on July 1, 2017 (right).
What’s next for POSSE? University instructors from institutions in the US can apply now to participate in the next workshop, November 16-18 in Raleigh, NC, and join their peers in the community of instructors weaving HFOSS into their curricula.

By Helen Hu, Google Open Source

Ask a question, get an answer in Google Analytics

What if getting answers about your key business metrics was as easy as asking a question in plain English? What if you could simply say, "How many new users did we have from organic search on mobile last week?" ― and get an answer right away?

Today, Google Analytics is taking a step toward that future.  Know what data you need and want it quickly? Just ask Google Analytics and get your answer.
This feature, which uses the same natural language processing technology available across Google products like Android and Search, is rolling out now and will become available in English to all Google Analytics users over the next few weeks.
The ability to ask questions is part of Analytics Intelligence, a set of features in Google Analytics that use machine learning to help you better understand and act on your analytics data. Analytics Intelligence also includes existing machine learning capabilities like automated insights (now available on both web and the mobile app), smart lists, smart goals, and session quality.

How it Works
We've talked to web analysts who say they spend half their time answering basic analytics questions for other people in their organization. In fact, a recent report from Forrester found that 57% of marketers find it difficult to give their stakeholders in different functions access to their data and insights. Asking questions in Analytics Intelligence can help everyone get their answers directly in the product ― so team members get what they need faster, and analysts can spend their valuable time on deeper research and discovery.
Try it! This short video will give you a feel for how it works:
“Analytics Intelligence enables those users who aren’t too familiar with Google Analytics to access and make use of the data within their business’ account. Democratising data in this way can only be a good thing for everyone involved in Google Analytics!”
Joe Whitehead, Analytics Consultant, Merkle | Periscopix

Beyond answering your questions, Analytics Intelligence also surfaces new opportunities for you through automated insights, now available in the web interface as well as in the mobile app. These insights can show spikes or drops in metrics like revenue or session duration, tipping you off to issues that you may need to investigate further. Insights may also present opportunities to improve key metrics by following specific recommendations. For example, a chance to improve bounce rate by reducing a page's load time, or the potential to boost conversion rate by adding a new keyword to your AdWords campaign.

To ask questions and get automated insights from Analytics Intelligence in our web interface, click the Intelligence button to open a side panel. In the Google Analytics mobile app for Android and iOS, tap the Intelligence icon in the upper right-hand corner of most screens. Check out this article to learn more about the types of questions you can ask today.

Help us Learn
Our Intelligence system gets even smarter over time as it learns which questions and insights users are interested in. In that spirit, we need your help: After you ask questions or look at insights, please leave feedback at the bottom of the card.

Your answers will help us train Analytics Intelligence to be more useful.

Our goal is to help you get more insights to more people, faster. That way everyone can get to the good stuff: creating amazing experiences that make customers happier and help you grow your business.
Happy Analyzing!

Introducing Transfer Appliance: Sneakernet for the cloud era

Back in the eighties, when network constraints limited data transfers, people took to the streets and walked their floppy disks where they needed to go. And Sneakernet was born.

In the world of cloud and exponential data growth, the size of the disk and the speed of your sneakers may have changed, but the solution is the same: Sometimes the best way to move data is to ship it on physical media.

Today, we’re excited to introduce Transfer Appliance, to help you ingest large amounts of data to Google Cloud Platform (GCP).
Transfer Appliance offers up to 480TB in 4U or 100TB in 2U of raw data capacity in a single rackmount device
Transfer Appliance is a rackable, high-capacity storage server that you set up in your data center. Fill it up with data, ship it to us, and we upload your data to Google Cloud Storage. With a capacity of up to one petabyte compressed, Transfer Appliance helps you migrate your data orders of magnitude faster than over a typical network. The appliance encrypts your data at capture, and you decrypt it when it reaches its final cloud destination, helping it get to the cloud safely.

Like many organizations we talk to, you probably have large amounts of data that you want to use to train machine learning models. You have huge archives and backup libraries taking up expensive space in your data center. Or IoT devices flooding your storage arrays. There’s all this data waiting to get to the cloud, but it’s impeded by expensive, limited bandwidth. With Transfer Appliance, you can finally take advantage of all that GCP has to offer (machine learning, advanced analytics, content serving, archive and disaster recovery) without upgrading your network infrastructure or acquiring third-party data migration tools.

Working with customers, we’ve found that the typical enterprise has many petabytes of data, and available network bandwidth between 100 Mbps and 1 Gbps. Depending on the available bandwidth, transferring 10 PB of that data would take between three and 34 years: much too long.
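The idealized back-of-the-envelope math is easy to reproduce. The helper below is our own illustration; it assumes a fully saturated link, so it gives best-case figures, while real transfers add protocol overhead and contention and land in the longer end of the range above.

```python
# Rough, idealized transfer-time math for moving data over a network link.
# Assumes the link is 100% utilized; real-world transfers are slower.
def transfer_days(terabytes, megabits_per_second):
    """Best-case days to move `terabytes` (decimal TB) at the given rate."""
    bits = terabytes * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (megabits_per_second * 1e6)     # Mbps -> bits/sec
    return seconds / 86400

# 10 PB (10,000 TB) over the bandwidth range described in this post:
for mbps in (100, 1000):
    years = transfer_days(10_000, mbps) / 365
    print(f"{mbps:>5} Mbps: ~{years:.1f} years")
```

Even in the ideal case, a 100 Mbps link needs decades for 10 PB, which is why shipping physical media wins at this scale.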

Estimated transfer times for given capacity and bandwidth
That’s where Transfer Appliance comes in. In a matter of weeks, you can have a petabyte of your data accessible in Google Cloud Storage, without consuming a single bit of precious outbound network bandwidth. Simply put, Transfer Appliance is the fastest way to move large amounts of data into GCP.

Compare the transfer times for 1 petabyte of data.
Customers tell us that space inside the data center is at a premium, and what space there is comes in the form of server racks. In developing Transfer Appliance, we built a device designed for the data center that slides into a standard 19” rack. Transfer Appliance will only live in your data center for a few days, but we want it to be a good houseguest while it’s there.

Customers have been testing Transfer Appliance for several months, and love what they see:
"Google Transfer Appliance moves petabytes of environmental and geographic data for Makani so we can find out where the wind is the most windy." Ruth Marsh, Technical Program Manager at Makani

"Using a service like Google Transfer Appliance meant I could transfer hundreds of terabytes of data in days not weeks. Now we can leverage all that Google Cloud Platform has to offer as we bring narratives to life for our clients."  Tom Taylor, Head of Engineering at The Mill
Transfer Appliance joins the growing family of Google Cloud Data Transfer services. Initially available in the US, the service comes in two configurations: 100TB or 480TB of raw storage capacity, or up to 200TB or 1PB compressed. The 100TB model is priced at $300, plus shipping via FedEx (approximately $500); the 480TB model is priced at $1,800, plus shipping (approximately $900). To learn more, visit the documentation.

We think you’re going to love getting to cloud in a matter of weeks rather than years. Sign up to reserve a Transfer Appliance today. You can also sign up here for a GCP free trial.

New Singapore GCP region – open now

The Singapore region is now open as asia-southeast1. This is our first Google Cloud Platform (GCP) region in Southeast Asia (and our third region in Asia), and it promises to significantly improve latency for GCP customers and end users in the area.

Customers are loving GCP in Southeast Asia; the total number of paid GCP customers in Singapore has increased by 100% over the last 12 months.

And the experience for GCP customers in Southeast Asia is better than ever too; performance testing shows 51% to 98% reductions in round-trip time (RTT) latency when serving customers in Singapore, Jakarta, Kuala Lumpur and Bangkok compared to using other GCP regions in Taiwan or Tokyo.

Customers with a global footprint like BBM Messenger, Carousell and Go-Jek have been looking forward to the launch of the Singapore region.
"We are excited to be able to deploy into the GCP Singapore region, as it will allow us to offer our services closer to BBM Messenger key markets. Coupled with Google's global load balancers and extensive global network, we expect to be able to provide a low latency, high-speed experience for our users globally. During our POCs, we found that GCP outperformed most vendors on key metrics such as disk I/O and network performance on like-for-like benchmarks. With sustained usage discounts and continuous support from Google's PSO and account team, we are excited to make GCP the foundation for the next generation of BBM consumer services. Matthew Talbot, CEO of Creative Media Works, the company that runs BBM Messenger Consumer globally.
"As one of the largest and fastest growing mobile classifieds marketplaces in the world, Carousell needed a platform that was agile enough for a startup, but could scale quickly as we expand. We found all these qualities in the Google Cloud Platform (GCP), which gives us a level of control over our systems and environment that we didn't find elsewhere, along with access to cutting edge technologies. We're thrilled that GCP is launching in Singapore, and look forward to being inspired by the way Google does things at scale."  — Jordan Dea-Mattson, Vice President Engineering, Carousell

"We are extremely pleased with the performance of GCP, and we are excited about the opportunities opening in Indonesia and other markets, and making use of the Singapore Cloud Region. The outcomes we’ve achieved in scaling, stability and other areas have proven how fantastic it is to have Google and GCP among our key service partners." — Ajey Gore, CTO, Go-Jek
We’ve launched Singapore with two zones and an initial set of core GCP services.
In addition, you can combine any of the services you deploy in Singapore with other GCP services around the world, such as DLP, Spanner and BigQuery.

Singapore Multi-Tier Cloud Security certification

Google Cloud is pleased to announce that, having completed the required assessment, it has been recommended by an approved certification body for Level 3 certification under Singapore's Multi-Tier Cloud Security (MTCS) standard (SS 584:2015+C1:2016). Customers can expect formal approval of Google Cloud's certification in the coming months. With this certification, organizations that require compliance with the strictest levels of the MTCS standard can confidently adopt Google Cloud services and host their data on Google Cloud's infrastructure.

Next steps

If you’re looking for help to understand how to deploy GCP, please contact local partners Sakura Sky, CloudCover, Cloud Comrade and Powerupcloud.

For more details on the Singapore region, please visit our Singapore region portal, where you’ll get access to free resources, whitepapers, an on-demand video series called "Cloud On-Air" and more to help you get started on GCP. Our locations page provides updates on other regions coming online soon. Give us a shout to request early access to new regions and help us prioritize what we build next.

Getting started with Shared VPC

Large organizations with multiple cloud projects value the ability to share physical resources, while maintaining logical separation between groups or departments. At Google Cloud Next '17, we announced Shared VPC, which allows you to configure and centrally manage one or more virtual networks across multiple projects in your Organization, the top level Cloud Identity Access Management (Cloud IAM) resource in the Google Cloud Platform (GCP) cloud resource hierarchy.

With Shared VPC, you can centrally manage the creation of routes, firewalls, subnet IP ranges, VPN connections, etc. for the entire organization, and at the same time allow developers to own billing, quotas, IAM permissions and autonomously operate their development projects. Shared VPC is now generally available, so let’s look at how it works and how best to configure it.

How does Shared VPC work?

We implemented Shared VPC entirely in the management control plane, transparent to the data plane of the virtual network. In the control plane, the centrally managed project is enabled as a host project, allowing it to contain one or more shared virtual networks. After configuring the necessary Cloud IAM permissions, you can then create virtual machines in shared virtual networks, by linking one or more service projects to the host project. The advantage of sharing virtual networks in this way is being able to control access to critical network resources such as firewalls and centrally manage them with less overhead.

Further, with shared virtual networks, virtual machines benefit from the same network throughput caps and VM-to-VM latency as when they're not on shared networks. This is also the case for VM-to-VPN and load balancer-to-VM communication.

To illustrate, consider a single externally facing web application server that uses services such as personalization, recommendation and analytics, all internally available, but built by different development teams.

Example topology of a Shared VPC setup.

Let’s look at the recommended patterns when designing such a virtual network in your organization.

Shared VPC administrator role

The network administrator of the shared host project should also have the XPN administrator role in the organization. This allows a single central group to configure new service projects that attach to the shared VPC host project, while also allowing them to set up individual subnetworks in the shared network and configure IP ranges, for use by administrators of specific service projects. Typically, these administrators would have the InstanceAdmin role on the service project.

Subnetworks USE permission

When connecting a service project to the shared network, we recommend you grant the service project administrators compute.subnetworks.use permission (through the NetworkUser role) on one (or more) subnetwork(s) per region, such that the subnetwork(s) are used by a single service project.

This will help ensure cleaner separation of usage of subnetworks by different teams in your organization. In the future, you may choose to associate specific network policies for each subnetwork based on which service project is using it.

Subnetwork IP ranges

When configuring subnetwork IP ranges in the same or different regions, allow sufficient IP space between subnetworks for future growth. GCP allows you to expand an existing subnetwork without affecting IP addresses owned by existing VMs in the virtual network and with zero downtime.
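A planned expansion is safe for existing VMs only if the new, larger range fully contains the current one. Python's standard `ipaddress` module can sanity-check that before you touch the network; the CIDR ranges below are hypothetical examples, not GCP defaults.

```python
import ipaddress

# Hypothetical example: widen a /24 subnetwork to a /20 with the same base
# address, so every address already assigned to a VM remains valid.
current = ipaddress.ip_network("10.128.0.0/24")
expanded = ipaddress.ip_network("10.128.0.0/20")

# The expansion preserves existing VM IPs only if the old range is a
# subnet of the new one.
assert current.subnet_of(expanded)

print(f"{current} ({current.num_addresses} addresses) -> "
      f"{expanded} ({expanded.num_addresses} addresses)")
```

Leaving address space between adjacent subnetworks, as recommended above, is what makes this kind of in-place widening possible later.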

Shared VPC and folders

When using folders to manage projects in your organization, place all host and service projects for a given Shared VPC setup within the same folder: the parent folder of the host project should sit in the parent hierarchy of every service project, so that it contains all the projects in the setup. When associating service projects with a host project, ensure that these projects will not later move to other folders while still linked to the host project.

Control external access

In order to control and restrict which VMs can have public IPs and thus access to the internet, you can now set up an organization policy that disables external IP access for VMs. Do this only for projects that should have only internal access, e.g. the personalization, recommendation and analytics services in the example above.

As you can see, Shared VPC is a powerful tool that can make GCP more flexible and manageable for your organization. To learn more about Shared VPC, check out the documentation.

Spinnaker 1.0: a continuous delivery platform for cloud

At Google we deploy a lot of code: tens of thousands of deployments a day, to thousands of services, seven of which have more than a billion users each around the globe. Along the way we’ve learned some best practices for deploying software at velocity -- things like automated releases, immutable infrastructure, gradual rollouts and fast rollbacks.

Back in 2014, we started working with the Netflix team that created Spinnaker, and saw in it a release management platform that embodied many of our first principles for safe, frequent and reliable releases. Excited by its potential, we partnered with Netflix to bring Spinnaker to the public, and they open-sourced it in November 2015. Since then, the Spinnaker community has grown to include dozens of organizations including Microsoft, Oracle, Target, Veritas, Schibsted, Armory and Kenzan, to name a few.

Today we’re happy to announce the release of Spinnaker 1.0, an open-source multi-cloud continuous delivery platform used in production at companies like Netflix, Waze, Target, and Cloudera, plus a new open-source command line interface (CLI) tool called halyard that makes it easy to deploy Spinnaker itself. Read on to learn what Spinnaker can do for your own software development processes.

Why Spinnaker?

Let’s look at a few of the features and new updates that make Spinnaker a great release management solution for enterprises:

Open-source, multi-cloud deployments
Here at Google Cloud Platform (GCP), we believe in an open cloud. Spinnaker, including its rich UI dashboard, is 100% open-source. You can install it locally, on-prem, or to any cloud platform, running either on a virtual machine (VM) or Kubernetes.

Spinnaker streamlines the deployment process by decoupling your release pipeline from your target cloud provider, reducing the complexity of moving from one platform to another or deploying the same application to multiple clouds.

It has built-in support for Google Compute Engine, Google Container Engine, Google App Engine, AWS EC2, Microsoft Azure, Kubernetes and OpenStack, with more platforms added by the community every year; support for Oracle Bare Metal and DC/OS is coming soon.

Whether you’re releasing to multiple clouds or preventing vendor lock-in, Spinnaker helps you deploy your application based on what’s best for your business.

Automated releases
In Spinnaker, deployments are orchestrated using custom release pipelines, the stages of which can consist of almost anything you want -- integration or system tests, spinning a server group up or down, manual approvals, waiting a period of time, or running a custom script or Jenkins job.

Spinnaker integrates seamlessly with your existing continuous integration (CI) workflows. You can trigger pipelines from git, Jenkins, Travis CI, Docker registries, on a cron-like schedule, or even other pipelines.

Best-practice deployment strategies
Out-of-the-box, Spinnaker supports sophisticated deployment strategies like release canaries, multiple staging environments, red/black (a.k.a. blue/green) deployments, traffic splitting and easy rollbacks.

This is enabled in part by Spinnaker’s use of immutable infrastructure in the cloud, where changes to your application trigger a redeployment of your entire server fleet. Compare this to the traditional approach of configuring updates to running machines, which results in slower, riskier rollouts and hard-to-debug configuration-drift issues.

With Spinnaker, you simply choose the deployment strategy you want to use for each environment, e.g. red/black for staging, rolling red/black for production, and it orchestrates the dozens of steps necessary under-the-hood. You don’t have to write your own deployment tool or maintain a complex web of Jenkins scripts to have enterprise-grade rollouts.

Role-based authorizations and permissions
Large companies often adopt Spinnaker across multiple product areas managed by a central DevOps team. For admins that need role-based access control for a project or account, Spinnaker supports multiple authentication and authorization options, including OAuth, SAML, LDAP, X.509 certs, GitHub teams, Azure groups or Google Groups.

You can also apply permissions to manual judgments, a Spinnaker stage that requires a person’s approval before the pipeline proceeds, ensuring that a release can’t happen without the right people signing off.

Simplified installation and management with halyard
With the release of Spinnaker 1.0, we’re also announcing the launch of a new CLI tool, halyard, that helps admins more easily install, configure and upgrade a production-ready instance of Spinnaker.

Prior to halyard and Spinnaker 1.0, admins had to manage each of the microservices that make up Spinnaker individually. Starting with 1.0, all new Spinnaker releases are individually versioned and follow semantic versioning. With halyard, upgrading to the latest Spinnaker release is as simple as running a CLI command.

Getting started

Try out Spinnaker and make your deployments fast, safe, and, dare we say, boring.

For more info on Spinnaker, visit the new spinnaker.io website and learn how to get started.

Or if you’re ready to try Spinnaker right now, click here to install and run Spinnaker with Google’s click-to-deploy option in the Cloud Launcher Marketplace.

For questions, feedback, or to engage more with the Spinnaker community, you can find us on the Spinnaker Slack channel, submit issues to the Spinnaker GitHub repository, or ask questions on Stack Overflow using the “spinnaker” tag.

More on Spinnaker

Compute Engine updates bring Skylake GA, extended memory and more VM flexibility

We’re pleased to announce several updates to Google Compute Engine that give you more powerful and flexible instances. Google Cloud is the first and only public cloud to deliver Intel’s next-generation Xeon server processor (codenamed Skylake), and starting today, it’s generally available (GA). In addition, we’ve made several other enhancements to Compute Engine:
  • Increased total amount of memory per instance by removing memory caps
  • Increased variety of machine shapes
  • Simple process to select a baseline processor type
  • Availability of 64-core processors in all regions
  • Broadwell CPUs available in all regions
These improvements help you get the performance from Compute Engine that you need, in the configuration you want.

Skylake is generally available

With up to 64 vCPUs and 455GB of RAM, Skylake-based instances support a wide range of compute-intensive workloads, including scientific modeling, genomic research, 3D rendering, data analytics and engineering simulations. Since we first launched Skylake for Compute Engine in February, GCP customers have run millions of hours of compute on Skylake VMs, seeing increased performance for a variety of applications.

With this GA release, you can create new VMs with Skylake across Compute Engine’s complete family of VM instance types: standard, highmem, highcpu and Custom Machine Types, as well as Preemptible VMs. You can provision Skylake VMs using Cloud Console, the gcloud command line tool, or our APIs. Skylake is available in three GCP regions: Western US, Western Europe and Eastern Asia Pacific. Customer demand for Skylake has been very strong; we have more capacity arriving every day, and support for additional regions and zones coming in the near future.

To help you experience Skylake, we're offering Skylake VMs at no additional cost for a limited time. After a 60-day promotional period, Skylake VMs will be priced at a 6-10% premium depending on the specific machine configuration. Given the significant performance increase over previous generations of Intel processors, this continues our record of providing a leading price-performance cloud computing platform.

CPU platform selector

Google Cloud Platform (GCP) regions and zones are equipped with a diverse set of Intel Xeon-based host machines, with CPUs including Sandy Bridge, Ivy Bridge, Haswell, Broadwell and now Skylake microarchitectures. In addition to fundamental systems features like clock speed and memory access time, these CPU platforms also support unique features like AVX2 and AVX-512.

Now, with our Minimum CPU Platform feature, you can select a specific CPU platform for VMs in that zone, and Compute Engine will always schedule your VM to that CPU family or above. You can assign a minimum CPU platform to a VM from the Cloud Console, Google Cloud SDK, or API, with full flexibility to choose the CPU features that work best for your applications.

Enabling this enhanced flexibility also allows us to now offer Broadwell CPU support in every region, as well as the ability to create VMs up to 64 vCPUs in size.
In the gcloud command line tool, use the instances create subcommand, followed by the --min-cpu-platform flag to specify a minimum CPU platform.

For example, the following command creates an n1-standard-1 instance with the Intel Broadwell (or later) CPU platform.

gcloud beta compute instances create example-instance \
    --machine-type n1-standard-1 --min-cpu-platform "Intel Broadwell"

To see which CPUs are available in different GCP zones, check our Available Regions and Zones page. For complete instructions for using --min-cpu-platform, please refer to our documentation.

Extended memory, where you want it

Compute Engine Custom Machine Types allow you to create virtual machines with the vCPU-to-memory ratios that fit your application needs. Now, with extended memory, we’ve removed the per-vCPU memory cap (previously 6.5GB), allowing up to 455GB of memory per VM instance. This is great news for applications like in-memory databases (e.g. Memcached and Redis), high-performance relational databases (e.g. Microsoft SQL Server) and NoSQL databases (e.g. MongoDB) that benefit from flexible memory configurations to achieve optimum price-performance. To learn more about pricing for extended memory, please take a look at our pricing page.
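The arithmetic behind "extended" memory is simple: anything above the default per-vCPU ceiling counts as extended. A minimal sketch using the 6.5GB-per-vCPU figure cited above; the helper function is our own illustration, not part of any Google API.

```python
DEFAULT_GB_PER_VCPU = 6.5  # default per-vCPU memory ceiling cited in this post

def extended_memory_gb(vcpus, total_memory_gb):
    """GB of memory counted as 'extended', i.e. above the default ratio cap."""
    cap = vcpus * DEFAULT_GB_PER_VCPU
    return max(0.0, total_memory_gb - cap)

# A 2 vCPU / 15GB instance: the default cap is 13GB, so 2GB is extended.
print(extended_memory_gb(2, 15))
```

This matches the 2 vCPU, 15GB example later in this section, which includes 2GB of extended memory.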

You can create a VM with extended memory using the Cloud Console, Cloud SDK or APIs.

For example, this command creates a 2 vCPU, 15GB memory instance (including an extended memory of 2GB):

gcloud beta compute instances create example-instance \
    --custom-cpu 2 --custom-memory 15 --custom-extensions

Complete instructions for using extended memory are available in our documentation.

Get started today

The minimum CPU platform selector, extended memory up to 455GB, availability of 64-core machines, Broadwell processors in all regions and the GA of Skylake processors are now all available for you and your applications. If you’re new to GCP, you can try all of this out when you sign up for a $300 free trial. We’d love to hear about the amazing things you do with these Compute Engine enhancements in the comments below.