Author Archives: GCP Team

Committed use discounts for Google Compute Engine now generally available

The cloud’s original promise was higher agility, lower risk and simpler pricing. Over the last four years, we've remained focused on delivering on that promise. We introduced usage-based billing, which allows you to pay for exactly what you use. Sustained use discounts automatically lower the price of your instances when you use them for a significant portion of the month. And most recently, we introduced committed use discounts, which reward your steady-state, predictable usage in a way that’s easy to use and can accommodate a variety of applications.

Today, committed use discounts are generally available. Committed use discounts are ideal for predictable, steady-state use of Google Compute Engine instances. They require no upfront payments and allow you to purchase a specific number of vCPUs and a total amount of memory for up to 57% off normal prices. At the same time, you have total control over the instance types, families and zones to which you apply your committed use discounts.

Simple and flexible 

We built committed use discounts so you actually attain the savings you expect, regardless of how you configure your resources or where you run them within a region. For example, say you run several instances for one month with aggregate vCPU and memory consumption of 10 vCPUs and 48.5 GB of RAM. Then, the next month your compute needs evolve and you change the shapes and locations of your instances (e.g., zones, machine types, operating systems), but your aggregate resource consumption stays the same. With committed use discounts, you receive the same discount both months even though your entire footprint is different!
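To make the aggregation concrete, here's a minimal sketch of how a commitment of 10 vCPUs and 48.5 GB of RAM could be applied to aggregate usage regardless of instance shape. The per-hour prices and the flat 57% discount rate below are illustrative assumptions, not official GCP rates:

```javascript
// Hypothetical sketch: applying a commitment to aggregate monthly usage.
// Prices and the discount rate are illustrative, not official GCP rates.
const VCPU_HOUR = 0.033;      // assumed on-demand price per vCPU-hour
const GB_HOUR = 0.0045;       // assumed on-demand price per GB-hour
const COMMIT_DISCOUNT = 0.57; // headline committed use discount

function monthlyCost(instances, commit, hours = 730) {
  // Aggregate usage across all instances, whatever their shapes or zones.
  const totalVcpus = instances.reduce((s, i) => s + i.vcpus, 0);
  const totalMem = instances.reduce((s, i) => s + i.memGb, 0);

  // The committed portion is billed at the discounted rate; any usage
  // above the commitment is billed at the normal on-demand rate.
  const cost = (total, committed, rate) =>
    (Math.min(total, committed) * (1 - COMMIT_DISCOUNT) +
     Math.max(total - committed, 0)) * rate * hours;

  return cost(totalVcpus, commit.vcpus, VCPU_HOUR) +
         cost(totalMem, commit.memGb, GB_HOUR);
}

const commit = { vcpus: 10, memGb: 48.5 };
// Month 1 and month 2 use very different instance shapes, but the same
// aggregate footprint -- so the discounted cost comes out identical.
const month1 = [{ vcpus: 4, memGb: 15 }, { vcpus: 6, memGb: 33.5 }];
const month2 = [{ vcpus: 2, memGb: 8.5 }, { vcpus: 8, memGb: 40 }];
console.log(monthlyCost(month1, commit) === monthlyCost(month2, commit)); // true
```

Because the discount applies to the vCPU and memory totals rather than to specific instances, reshaping the fleet leaves the bill unchanged.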

Committed use discounts automatically apply to aggregate compute usage with no manual intervention, giving you low, predictable costs. During the beta, customers achieved over 98% utilization rates of their commitments, with little or no effort on their part.

Quizlet is one of the largest online learning communities with over 20 million monthly learners.
 "Our fleet is constantly changing with the evolving needs of our students and teachers. Even as we rapidly change instance types and Compute Engine zones, committed use discounts automatically apply to our aggregate usage, making it simple and straightforward to optimize our costs. The results speak for themselves: 60% of our total usage is now covered by committed use discounts, saving us thousands of dollars every month. Google really got the model right." 
 Peter Bakkum, Platform Lead, Quizlet 

No hidden costs 

With committed use discounts, you don’t need to make upfront payments to see deep price cuts. Prepaying is a major source of hidden costs, as it is effectively an interest-free loan to the company you're prepaying. Imagine you get a 60% discount on $300,000 of compute usage. At a reasonable 7% per year cost of capital, an all-upfront prepay reduces your realized savings from 60% to 56%.
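The arithmetic behind that claim can be sketched as follows, assuming (for illustration) a three-year all-upfront commitment and comparing the prepayment against the present value of spreading the same total payment monthly at a 7% annual cost of capital:

```javascript
// Illustrative sketch of how prepaying erodes a discount.
// Assumes a 3-year all-upfront commitment; all figures are examples.
const listPrice = 300000;  // on-demand price of the usage
const discount = 0.60;     // headline discount
const annualRate = 0.07;   // cost of capital
const months = 36;

const prepay = listPrice * (1 - discount); // $120,000 paid up front
const monthly = prepay / months;           // same total, paid as you go
const r = annualRate / 12;

// Present value of the pay-as-you-go stream (ordinary annuity formula).
const pvMonthly = monthly * (1 - Math.pow(1 + r, -months)) / r;

// Prepaying "costs" the gap between $120,000 today and the cheaper
// present value of paying the same amount monthly.
const financingCost = prepay - pvMonthly;
const realizedSavings = (listPrice * discount - financingCost) / listPrice;

console.log(realizedSavings.toFixed(3)); // ~0.560: a 60% discount nets ~56%
```

Under these assumptions the financing cost eats roughly four points of the headline discount, which is why a no-upfront model preserves the full savings.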
"We see great financial benefits by using committed use discounts for predictable workloads. With committed use discounts, there are no upfront costs, unlike other platforms we have used. It’s also possible to change machine types as committed use discounts work on both vCPU and memory. We have been very happy with committed use discounts."  
 Gizem Terzi Türkoğlu, Project Coordinator, MetGlobal 

Getting the best price and performance in the cloud shouldn’t require a PhD in Finance. We remain committed to that principle and continue to innovate to keep pricing simple for all your use cases. In coming months, we'll increase the flexibility of committed use discounts by allowing them to apply across multiple projects. And rest assured, we'll do so in a way that’s easy to use.

For more details on committed use discounts, check out our documentation. For pricing information, take a look at our pricing page or try out our pricing calculator. To get started and try Google Cloud Platform for free, click here.

Announcing Stackdriver Debugger for Node.js

We’ve all been there. The code looked fine on your machine, but now you’re in production and it’s suddenly not working.

Tools like Stackdriver Error Reporting can make it easier to know when something goes wrong — but how do you diagnose the root cause of the issue? That’s where Stackdriver Debugger comes in.
Stackdriver Debugger lets you inspect the state of an application at any code location without using logging statements and without stopping or slowing down your applications. This means users are not impacted during debugging. Using the production debugger, you can capture the local variables and call stack and link it back to a specific line location in your source code. You can use this to analyze your applications’ production state and understand your code’s behavior in production.

What’s more, we’re excited to announce that Stackdriver Debugger for Node.js is now officially in beta. The agent is open source, and available on npm.

Setting up Stackdriver Debugger for Node.js

To get started, first install the @google-cloud/debug-agent npm module in your application:

$ npm install --save @google-cloud/debug-agent

Then, require the debug agent at the top of the entry point of your application:

require('@google-cloud/debug-agent').start({ allowExpressions: true });

Now deploy your application! You’ll need to associate your sources with the application running in production, and you can do this via Cloud Source Repositories, GitHub or by copying sources directly from your desktop.

Using Logpoints 

The passive debugger is just one of the ways you can diagnose issues with your app. You can also add log statements in real time — without needing to re-deploy your application. These are called Stackdriver Debugger Logpoints.

These are just a few of the ways you can use Stackdriver Debugger for Node.js in your application. To get started, check out the full setup guide.

We can’t wait to hear what you think. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #nodejs channel.

Introducing faster GPUs for Google Compute Engine

Today, we're happy to make some massively parallel announcements for Cloud GPUs. First, Google Cloud Platform (GCP) gets another performance boost with the public launch of NVIDIA P100 GPUs in beta. Second, NVIDIA K80 GPUs are now generally available on Google Compute Engine. Third, we're happy to announce the introduction of sustained use discounts on both the K80 and P100 GPUs.

Cloud GPUs can accelerate your workloads including machine learning training and inference, geophysical data processing, simulation, seismic analysis, molecular modeling, genomics and many more high-performance computing use cases.

The NVIDIA Tesla P100 is the state of the art in GPU technology. Based on the Pascal GPU architecture, the P100 lets you increase throughput with fewer instances while saving money. P100 GPUs can accelerate your workloads by up to 10x compared to K80 GPUs.¹

Compared to traditional solutions, Cloud GPUs provide an unparalleled combination of flexibility, performance and cost-savings:
  • Flexibility: Google’s custom VM shapes and incremental Cloud GPUs provide the ultimate amount of flexibility. Customize the CPU, memory, disk and GPU configuration to best match your needs.  
  • Fast performance: Cloud GPUs are offered in passthrough mode to provide bare-metal performance. Attach up to four P100 or eight K80 GPUs per VM (we offer up to four K80 boards, each with two GPUs). For those looking for higher disk performance, optionally attach up to 3TB of Local SSD to any GPU VM. 
  • Low cost: With Cloud GPUs you get the same per-minute billing and Sustained Use Discounts that you do for the rest of GCP's resources. Pay only for what you need! 
  • Cloud integration: Cloud GPUs are available at all levels of the stack. For infrastructure, Compute Engine and Google Container Engine allow you to run your GPU workloads with VMs or containers. For machine learning, Cloud Machine Learning can be optionally configured to utilize GPUs in order to reduce the time it takes to train your models at scale with TensorFlow. 

With today’s announcement, you can now deploy both the NVIDIA Tesla P100 and K80 GPUs in four regions worldwide. All of our GPUs can now take advantage of sustained use discounts, which automatically lower the price of your virtual machines (by up to 30%) when you use them to run sustained workloads. No lock-in or upfront minimum fee commitments are needed to take advantage of these discounts.
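Sustained use discounts work by billing successive quarters of the month at decreasing rates. A rough sketch of the published tiering (100%, 80%, 60% and 40% of the base rate for each successive 25% of the month) shows how full-month usage nets out to 30% off:

```javascript
// Sketch of sustained use discount tiering. The tier multipliers follow
// the published Compute Engine schedule; base prices are not needed here
// since we only compute the effective multiplier on the base rate.
const TIERS = [1.0, 0.8, 0.6, 0.4]; // rate multiplier per 25% slice of the month

function effectiveMultiplier(fractionOfMonthUsed) {
  let billed = 0;
  let remaining = fractionOfMonthUsed;
  for (const rate of TIERS) {
    const slice = Math.min(remaining, 0.25); // usage that falls in this tier
    billed += slice * rate;
    remaining -= slice;
  }
  return billed / fractionOfMonthUsed; // average rate actually paid
}

console.log(effectiveMultiplier(1.0).toFixed(2)); // "0.70": 30% off for a full month
console.log(effectiveMultiplier(0.5).toFixed(2)); // "0.90": 10% off at half a month
```

The discount accrues automatically as the instance keeps running; there is nothing to purchase or configure.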
Cloud GPUs Regions Availability - Number of Zones

Speed up machine learning workloads 

Since launching GPUs, we’ve seen customers benefit from the extra computation they provide to accelerate workloads ranging from genomics and computational finance to training and inference on machine learning models. One of our customers, Shazam, was an early adopter of GPUs on GCP to power their music recognition service.
“For certain tasks, [NVIDIA] GPUs are a cost-effective and high-performance alternative to traditional CPUs. They work great with Shazam’s core music recognition workload, in which we match snippets of user-recorded audio fingerprints against our catalog of over 40 million songs. We do that by taking the audio signatures of each and every song, compiling them into a custom database format and loading them into GPU memory. Whenever a user Shazams a song, our algorithm uses GPUs to search that database until it finds a match. This happens successfully over 20 million times per day.”   
 Ben Belchak, Head of Site Reliability Engineering, Shazam

With today’s Cloud GPU announcements, GCP takes another step toward being the optimal place for any hardware-accelerated workload. With the addition of NVIDIA P100 GPUs, our primary focus is to help you bring new use cases to life. To learn more about how your organization can benefit from Cloud GPUs and Compute Engine, visit the GPU site and get started today!

¹The 10x performance boost compares 1 P100 GPU versus 1 K80 GPU (½ of a K80 board) for machine learning inference workloads that benefit from the P100's FP16 precision. Performance will vary by workload. Download this datasheet for more information.

Announcing IPv6 global load balancing GA

Google Cloud users deploy Cloud Load Balancing to instantiate applications across the globe, architect for the highest levels of availability, and deliver applications with low latency. Today, we’re excited to announce that IPv6 global load balancing is now generally available (GA).

Until today, global load balancing was available only for IPv4 clients. With this launch, your IPv6 clients can connect to an IPv6 load balancing VIP (Virtual IP) and get load balanced to IPv4 application instances using HTTP(S) Load Balancing, SSL proxy, and TCP proxy. You now get the same management simplicity of using a single anycast IPv6 VIP for application instances in multiple regions.

Home Depot serves 75% of its traffic out of Google Cloud Platform (GCP) and uses global load balancing to achieve a global footprint and resiliency for its service with low management overhead.
"On the front-end, we use the Layer 7 load balancer with a single global IP that intelligently routes customer requests to the closest location. Global load balancing will allow us to easily add another region in the future without any DNS record changes, or for that matter, doing anything besides adding VMs in the right location."  
Ravi Yeddula, Senior Director Platform Architecture and Application Development, The Home Depot

IPv6 support unlocks new capabilities 

With IPv6 global load balancing, you can build more scalable and resilient applications on GCP, with the following benefits:
  • Single Anycast IPv6 VIP for multi-region deployment: Now, you only need one Load Balancer IPv6 VIP for application instances running across multiple regions. This means that your DNS server has a single AAAA record and that you don’t need to load-balance among multiple IPv6 VIPs. Caching of AAAA records by clients is not an issue since there's only one IPv6 VIP to cache. User requests to IPv6 VIP are automatically load balanced to the closest healthy instance with available capacity.
  • Support for a variety of traffic types: You can load balance HTTP, HTTPS, HTTP/2, TCP and TLS (non-HTTP) IPv6 client traffic. 
  • Cross-region overflow with a single IPv6 Load Balancer VIP: If instances in one region are out of resources, the IPv6 global load balancer automatically directs requests from users closest to this region to another region with available resources. Once the closest region has available resources, global load balancing reverts back to serving user requests via instances in this region. 
  • Cross-region failover with single IPv6 Load Balancer VIP: If the region with instances closest to the user experiences a failure, IPv6 global load balancing automatically directs traffic to another region with healthy instances. 
  • Dual-stack applications: To serve both IPv6 and IPv4 clients, create two load balancer IPs: one with an IPv6 VIP and the other with an IPv4 VIP, and associate both VIPs with the same IPv4 application instances. IPv4 clients connect to the IPv4 Load Balancer VIP while IPv6 clients connect to the IPv6 Load Balancer VIP. These clients are then automatically load balanced to the closest healthy instance with available capacity. We provide IPv6 VIPs (forwarding rules) without charge, so you pay for only the IPv4 ones.

A global, scalable, resilient foundation 

Global load balancing for both IPv6 and IPv4 clients is built on a scalable, software-defined architecture that reduces latency for end users and ensures a great user experience.
  • Software-defined, globally distributed load balancing: Global load balancing is delivered via software-defined, globally distributed systems. This means that you won’t hit performance bottlenecks with the load balancer and it can handle 1,000,000+ queries per second seamlessly. 
  • Reduced latency through edge-based architecture: Global load balancing is delivered at the edge of Google's global network from 80+ points of presence (POPs) across the globe. User connections terminate at the POP closest to them and travel over Google's global network to the load-balanced instance in Google Cloud. 
  • Seamless autoscaling: Global load balancing scales application instances up or down automatically based on traffic, with no pre-warming of instances required. 

Take IPv6 global load balancing for a spin 

Earlier this year, we gave a sneak preview of IPv6 global load balancing at Google Cloud Next ‘17. You can test drive this feature using the same setup.

In this setup:
  • The demo site is served by IPv4 application instances in multiple Google Cloud regions across the globe. 
  • A single anycast IPv6 Load Balancer IP, 2600:1901:0:ab8::, fronts the IPv4 application instances across regions. 
  • When you connect using an IPv6 address to this website, IPv6 global load balancing directs you to a healthy Google Cloud instance that's closest to you and has available capacity. 
  • The website is programmed to display your IPv6 address, the Load Balancer IPv6 VIP and information about the instance serving your request. 
  • The site will only work with IPv6 clients; a dual-stack version is also available if you want to test with both IPv4 and IPv6 clients.
For example, when I connect to the demo site from California, my request connects to an IPv6 global load balancer with IP address 2600:1901:0:ab8:: and is served out of an instance in us-west1-c, the closest region to California in the set-up.

Give it a try, and you'll observe that while your request connects to the same IPv6 VIP address 2600:1901:0:ab8::, it's served by an instance closest to you that has available capacity.

You can learn more by reading about IPv6 global load balancing, and taking it for a spin. We look forward to your feedback!

HashiCorp and Google expand collaboration, easing secret and infrastructure management

Open source technology encourages collaboration and innovation to address real world problems, including projects supported by Google Cloud. As part of our broad engagement with the open source community, we’ve been working with HashiCorp since 2013 to enable customers who use HashiCorp tools to make optimal use of Google Cloud Platform (GCP) services and features.

A longstanding, productive collaboration 

Google and HashiCorp have dedicated engineering teams focused on enhancing and expanding GCP support in HashiCorp products. We're focused on technical and shared go-to-market efforts around HashiCorp products in several critical areas of infrastructure.

  • Cloud provisioning: The Google Cloud provider for HashiCorp Terraform allows management of a broad array of GCP resource types, with Bigtable and BigQuery being the most recent additions. Today, HashiCorp also announced support for GCP in the Terraform Module Registry to give users easy access to templates for setting up and running their GCP-based infrastructure. We plan to continue to broaden the number of GCP services that can be provisioned with Terraform, allowing Terraform users to adopt a familiar workflow across multiple cloud and on-premises environments. Using Terraform to move workloads to GCP simplifies the cloud adoption process for Google customers that use Terraform today in cross-cloud environments. 
  • Cloud security and secret management: We're working to enhance the integration between HashiCorp Vault and GCP, including Vault authentication backends for IAM and signed VM metadata. This is in addition to work being done by HashiCorp for Kubernetes authentication. 

Using HashiCorp Vault with Google Cloud and Kubernetes 

Applications often require access to small pieces of sensitive data at build or run time, referred to as secrets. HashiCorp Vault is a popular open source tool for secret management, which allows a developer to store, manage and control access to tokens, passwords, certificates, API keys and other secrets. Vault has many options for authentication, known as authentication backends. These allow developers to use many kinds of credentials to access Vault, including tokens, or usernames and passwords.

As of today, developers on Google Cloud have two authentication backends they can use to validate a service’s identity to their instance of Vault: one based on Cloud IAM service accounts, and one based on signed Compute Engine VM metadata. 

With these authentication backends, it’s easier for a particular service running on Google Cloud to get access to a secret it needs at build or run time stored in Vault.
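Both backends follow Vault's standard login flow: the service presents a signed JWT to the backend's login endpoint and receives a client token in return. A minimal sketch of that request (the Vault address, role name and JWT below are placeholders; the path follows Vault's API convention for a GCP backend mounted at `auth/gcp`) might look like:

```javascript
// Sketch of the login request a service would send to Vault's GCP auth
// backend. The address, role and JWT are placeholders; the mount path
// assumes the backend is enabled at the default "auth/gcp" location.
function buildGcpLoginRequest(vaultAddr, role, signedJwt) {
  return {
    method: 'POST',
    url: `${vaultAddr}/v1/auth/gcp/login`,
    body: JSON.stringify({ role: role, jwt: signedJwt }),
  };
}

const req = buildGcpLoginRequest('https://vault.example.com:8200',
                                 'my-role', '<signed-jwt-or-metadata-token>');
console.log(req.url); // https://vault.example.com:8200/v1/auth/gcp/login
```

On a successful login, Vault's response carries a client token that the service then uses to read the secrets its role permits.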

Fleetsmith is a secure cloud-based solution for managing a company’s Mac computers that fully integrates with G Suite. They’ve been testing out the new Compute Engine metadata backend, and are currently using Vault on GCP for PKI and secret management. Learn more about how Fleetsmith did this in their blog post.

“Fleetsmith and Google have shared values when it comes to security, and we built our product on Google Cloud Platform in part due to Google's high bar for security. We're excited about this new integration because it strengthens the security model for us as Google Cloud customers using Vault.” 
 Jesse Endahl, CPO and CSO, Fleetsmith 

If you’re using Vault for managing secrets in Kubernetes specifically, today HashiCorp announced a new Kubernetes authentication backend. This uses Kubernetes pod service accounts to authenticate to Vault, providing an alternative to storing secrets directly in `etcd`.

Running HashiCorp Vault on Google Cloud 

You may already be running your own instance of HashiCorp Vault. Users can run Vault in either Compute Engine or Google Container Engine, and then use one of our new authentication backends to authenticate to Vault.

WePay, an online payment service provider, uses HashiCorp Vault on GCP:
 "Managing usernames, passwords and certificates is a challenge in a microservice world, where we have to securely manage many secrets for hundreds of microservices. WePay chose to use HashiCorp Vault to store secrets because it provides us with rotation, tight control and out-of-the-box audit logging for our secrets and other sensitive data. WePay runs Vault server infrastructure on Google Compute Engine for secret storage, key management and service to service authentication, for use by our microservice architecture based on Google Container Engine."  
 Akshath Kumar, Site Reliability Engineer, WePay 
eBay also uses HashiCorp Vault on GCP:
“As a strong contributor and supporter of free open source software with vital projects such as regressr and datameta, eBay is a user of Hashicorp’s software products, including on the Google Cloud Platform.”  
 Mitch Wyle, Director of Applied Science and Engineering, eBay 

Today, we’re publishing a solution on how to best set up and run HashiCorp Vault on Compute Engine. For best practices for running Vault on Compute Engine, read the solution brief “Using Vault on Compute Engine for Secret Management”.

Using HashiCorp Terraform to manage your resources on Google Cloud 

When you’re testing new code or software, you might want to spin up a test environment to simulate your application. HashiCorp Terraform is an infrastructure management and deployment tool that allows you to programmatically configure infrastructure across a variety of providers, including cloud providers like Google Cloud.

Using Terraform on Google Cloud, you can programmatically manage projects, IAM policies, Compute Engine resources, BigQuery datasets and more. To get started with Terraform for Google Cloud, check out the Terraform Google Cloud provider documentation, take a look at our tutorial for managing GCP projects with Terraform, which you can follow on our community page, or watch our Terraform for Google Cloud demo.

Google has released a number of Terraform modules that make working with Google Cloud even easier. These modules let you quickly compose your architectures as code and reuse architectural patterns for resources like load balancing, managed instance groups, NAT gateways and SQL databases. The modules can be found on the Terraform Module Registry.

Get involved 

We’re always excited about new contributors to open source projects we support. If you’d like to contribute, please get involved in projects like Kubernetes and Istio, as well as Vault and Terraform. The community is what makes these projects successful. To learn more about open source projects we support, see Open Source at Google.

GCP arrives in South America with launch of São Paulo region!

Read this post in Portuguese. A Nova Região GCP de São Paulo está aberta
Read this post in Spanish. La Nueva Región GCP de San Pablo está abierta

We’re pleased to announce that the São Paulo region is now open to the public as southamerica-east1. This is our first Google Cloud Platform (GCP) region in South America, and it promises to significantly improve latency for GCP customers and end users in the area. Performance testing shows 80% to 95% reductions in round-trip time (RTT) latency are possible when you serve customers in Chile, Argentina and Brazil, compared to using other GCP regions in the U.S. GCP customers are able to build applications and store data* in Brazil as well as make payments in Brazilian Reais.


We’ve launched São Paulo with three zones and the following services. You can combine any of the services you deploy in São Paulo with other GCP services around the world such as Data Loss Prevention, Cloud Spanner and BigQuery.

What customers are saying

“With the arrival of the Google Cloud Platform region in Brazil, Dotz sees the potential of boosting its business entirely. We are excited with the new opportunities we can take advantage of with the opening of the new region, leveraging the current use of the tools we are working on in GCP.”  
Cristiano Hyppolito, CTO of Dotz
 "We’re excited that GCP will be offering soon a region in São Paulo. Contabilizei has been a GCP client since 2013, and our fast growth was possible thanks to GCP tools. We believe that this launch will improve our service performance, and will contribute to the growth of the Latin American community of Google Cloud Platform users. The launch of the São Paulo GCP region will continue to support us in order for Contabilizei to continue delivering services 90% cheaper than traditional accountants."  
Fabio Bacarin, CTO Contabilizei
“The majority of our clients and partners are in Brazil. The launch of the Google Cloud Platform region in São Paulo will reduce the latency of its products, and with this take down the last barrier so we can massively use the Google Cloud for services that interface with our clients.”  
Flavio Tooru, Movile

Getting started 

For help migrating to GCP, please contact any of the following local partners: Alest, iPnet, SantoDigital, Safetec, UOL, QiNetwork. For additional details on the São Paulo region, please visit our São Paulo region page where you’ll get access to free resources, whitepapers, an on-demand video series called "Cloud On-Air" and more. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize what we build next.

*Please visit our Service Specific Terms to get detailed information on our data storage capabilities.




Read between the lines with Cloud Natural Language’s new recognition features

From documents to blog posts, emails to social media updates, there have never been more ways to connect via the written word. For businesses, this can present both a challenge and an opportunity. With such a proliferation of communication channels, how do businesses stay responsive? More importantly, how can they derive useful insights from all of their content?

That’s where Google Cloud Natural Language API comes in. Cloud Natural Language enables businesses to extract critical information from their written data. And today we’re launching two new features that can help businesses further organize their content and better understand how their users feel.

Here’s a little more on what these new features can do.

Automatically classify content 

Through predefined content classification, Cloud Natural Language can now automatically sort documents and content into more than 700 different categories, including Arts & Entertainment, Hobbies & Leisure, Law & Government, News, Health, and more. This makes it ideal for industries like media and publishing, which have traditionally had to sort, label and categorize content manually. With machine learning in Cloud Natural Language, these companies can now automatically parse the meaning of their articles and content to organize them more efficiently.
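
For illustration, here's a minimal sketch of the JSON body the `documents:classifyText` REST method expects. The document shape follows the public API reference; the sample text is invented, and authentication (API key or OAuth) is omitted:

```python
import json

def classify_request_body(text):
    # Body for the Cloud Natural Language documents:classifyText
    # REST method: a single plain-text document to categorize.
    return {
        "document": {
            "type": "PLAIN_TEXT",
            "content": text,
        }
    }

body = classify_request_body("A recipe for lobster salad with fresh herbs.")
print(json.dumps(body))
```

POSTing this body to the `classifyText` endpoint returns a list of category names with confidence scores.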

To showcase the granularity of content classification, we analyzed top stories from The New York Times API with Cloud Natural Language. This lobster salad recipe was categorized not only as “Cooking & Recipes” but also as “Meat & Seafood.” You can read more examples on our machine learning blog.

Hearst, one of the largest mass media publishers in the world, uses Cloud Natural Language in their content management system to automatically tag entities in articles, and will be using categories such as sports, entertainment, technology and more. Natural language processing adds an intelligence layer to their newsrooms, allowing editors to understand what their audience is reading and how their content is being used. For example, Hearst now has granular visibility into how specific entities (people, places and things) trend across all their properties, including daily newspapers such as the San Francisco Chronicle. This insight helps editors keep a finger on the pulse of their readers and better informs their decisions about what, or whom, to cover in the news.
"In the newsroom, precision and speed are critical to engaging our readers. Google Cloud Natural Language is unmatched in its accuracy for content classification. At Hearst, we publish several thousand articles a day across 30+ properties and, with natural language processing, we're able to quickly gain insight into what content is being published and how it resonates with our audiences." 
Naveed Ahmad, Senior Director of Data, Hearst
Content classification is available in beta for all Cloud Natural Language users.

Analyze sentiment of entities

Sentiment analysis is one of Cloud Natural Language’s most popular features. Now, it offers more granularity with entity sentiment analysis. Rather than analyzing the sentiment of an entire sentence or block of text, users can now parse the sentiment attached to specific entities within it, such as people, places and things.
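
As a sketch of what comes back, the helper below pulls per-entity scores out of an `analyzeEntitySentiment`-style response. The sample response is hand-written to mirror the documented shape, not captured from a live call:

```python
def entity_sentiments(response):
    # Map each entity name to its sentiment score (-1.0 to 1.0).
    return {
        e["name"]: e["sentiment"]["score"]
        for e in response.get("entities", [])
    }

# Hand-written sample mirroring the API's response shape.
sample = {
    "entities": [
        {"name": "battery", "sentiment": {"score": -0.6, "magnitude": 0.6}},
        {"name": "camera", "sentiment": {"score": 0.8, "magnitude": 0.8}},
    ]
}
print(entity_sentiments(sample))  # {'battery': -0.6, 'camera': 0.8}
```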

Leveraging Entity Sentiment Analysis, Motorola analyzes customer sentiment about its products across multiple sources such as Twitter, online community forums, and customer service emails. The insight helps Motorola quickly turn feedback into actionable results and increase customer satisfaction. Motorola uses Cloud Natural Language alongside its in-house natural language algorithms to get richer, more granular understanding of its customers to better serve them. Cloud Natural Language also offered a short learning curve and was easily integrated within its existing framework, without any downtime.

Entity sentiment analysis is now generally available for all Cloud Natural Language users.

These new features will help even more businesses use machine learning to get the most from their data. For more information, visit our website or sign up for a trial at no charge.

More secure hybrid cloud deployments with Google Cloud Endpoints

The shift from on-premises to cloud computing is rarely sudden and rarely complete. Workloads move over time; in some cases new workloads get built in the cloud and old workloads stay on-premises. In other cases, organizations lift and shift some services and continue new development on their own infrastructure. And, of course, many companies have deployments in multiple clouds.

When you run services across a wide array of resources and locations, you need to secure communications between them. Networking may be able to solve some issues, but it can be difficult in many cases: if you're running containerized workloads on hardware that belongs to three different vendors, good luck setting up a VPN to protect that traffic.

Increasingly, our customers use Google Cloud Endpoints to authenticate and authorize calls to APIs rather than (or even in addition to) trying to secure them through networking. In fact, providing more security for calls across a hybrid environment was one of the original use cases for Cloud Endpoints adopters.
"When migrating our workloads to Google Cloud Platform, we needed to more securely communicate between multiple data centers. Traditional methods like firewalls and ad hoc authentication were unsustainable, quickly leading to a jumbled mess of ACLs. Cloud Endpoints, on the other hand, gives us a standardized authentication system." 
 Laurie Clark-Michalek, Infrastructure Engineer, Qubit 
Cloud Endpoints uses the Extensible Service Proxy, based on NGINX, which can validate a variety of authentication schemes, from JWT tokens to API keys. We deploy that open source proxy automatically if you use Cloud Endpoints on the App Engine flexible environment, but it's also available via the Google Container Registry for deployment anywhere: on Google Container Engine, on-premises, or even in another cloud.

Protecting APIs with JSON Web Tokens 

One of the most common and more secure ways to protect your APIs is to require a JSON Web Token (JWT). Typically, you use a service account to represent each of your services, and each service account has a private key that can be used to sign a JSON Web Token.

If your (calling) service runs on GCP, we manage the key for you automatically; simply invoke the IAM.signJwt method on your JSON web token and put the resulting signed JWT in the OAuth Authorization: Bearer header on your call.

If your service runs on-premises, install ESP as a sidecar that proxies all traffic to your service. Your API configuration tells ESP which service account will be placing the calls. ESP uses the public key for your service account to validate that the token was signed properly, and validates several fields in the JWT as well.

If the service is on-premises and calling to the cloud, you still need to sign your JWT, but it’s your responsibility to manage the private key. In that case, download the private key from Cloud Console (following best practices to help securely store it) and sign your JWTs.
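
To make the token format concrete, here is a stdlib-only sketch of how a JWT is assembled. It signs with HS256 purely for illustration; the real service-to-service flow uses RS256 with the service account's private key, and the issuer/audience values below are placeholders:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(payload: dict, secret: bytes) -> str:
    # Illustration only: a real Endpoints call signs with RS256
    # using the service account's private key, not an HMAC secret.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

claims = {
    "iss": "my-service@my-project.iam.gserviceaccount.com",  # placeholder
    "aud": "https://my-api.example.com",                      # placeholder
    "iat": 1500000000,
    "exp": 1500003600,
}
token = make_jwt(claims, b"demo-secret")
headers = {"Authorization": "Bearer " + token}  # attach to the API call
```

ESP then checks the signature against the service account's public key and validates claims such as `iss`, `aud` and `exp`.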

For more details, check out the sample code and documentation on service-to-service authentication (or this, if you're using gRPC).

Securing APIs with API keys 

Strictly speaking, API keys are not authentication tokens. They're longer-lived and more dangerous if stolen. However, they do provide a quick and easy way to protect an API by easily adding them to a call either in a header or as a query parameter.

API keys also allow an API’s consumers to generate their own credentials. If you’ve ever called a Google API that doesn’t involve personal data, for example the Google Maps Javascript API, you’ve used an API key.

To restrict access to an API with an API key, follow these directions. After that, you’ll need to generate a key: you can generate it in that same project (following these directions), or you can share your project with another developer, who can then create an API key and enable the API in the project that will call yours. Add the key to API calls as a query parameter (just append ?key=${ENDPOINTS_KEY} to your request) or in the x-api-key header (see the documentation for details).
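
As a quick sketch, the helper below appends the key as a query parameter, or places it in the x-api-key header; the URL and key are placeholders:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

ENDPOINTS_KEY = "AIza-example-key"  # placeholder, not a real key

def with_api_key(url: str, key: str) -> str:
    # Append ?key=... to a request URL, preserving any existing params.
    parts = urlsplit(url)
    query = parse_qsl(parts.query) + [("key", key)]
    return urlunsplit(parts._replace(query=urlencode(query)))

url = with_api_key(
    "https://my-api.endpoints.my-project.cloud.goog/v1/items",  # placeholder
    ENDPOINTS_KEY,
)
# Alternatively, send the key in a header instead of the query string:
headers = {"x-api-key": ENDPOINTS_KEY}
```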

Wrapping up 

Securing APIs is good practice no matter where they run. At Google, we use authentication for inter-service communication, even if both run entirely on our production network. But if you live in a hybrid cloud world, authenticating each and every call is even more important.

To get started with Cloud Endpoints, take a look at our tutorials. It’s a great way to build scalable and more secure applications that can span a variety of cloud and on-premises environments.

With Forseti, Spotify and Google release GCP security tools to open source community

Being able to secure your cloud resources at scale is important for all Google Cloud Platform users. To help ensure the security of GCP resources, you need to have the right tools and processes in place. Spotify and Google Cloud worked together to develop innovative security tools that help organizations protect GCP projects, and have made them available in an open source community called Forseti Security. Forseti is now open to all GCP users!

For this blog post, we talked with Spotify about their experience working with the Google team to develop tools for the GCP security community. The Spotify team will also be presenting about their experience with Forseti today at the SEC-T information security conference in Stockholm.

Q: How did Forseti get started? 

When we moved our back-end data infrastructure from in-house data centers to the cloud, we began by evaluating the tools that GCP offers to help us develop securely in the cloud. Once we had a handle on that, we wanted to build some specific tools that would help us automate security processes so that our engineering team could develop freely, but securely.

In parallel to our efforts, Google had developed their own GCP security tools and was interested in bringing them to the open source community. Both of our security teams wanted to contribute our ideas to the bigger picture, and it made sense to collaborate rather than each company writing their own tools. This is how the Forseti open source idea was born.

Q. What is Forseti? 

Forseti is an open source toolkit designed to help give security teams the confidence and peace of mind that they have the appropriate security controls in place across GCP. Today, Forseti features a number of useful security tools:

  • Inventory: provides visibility into existing GCP resources 
  • Scanner: validates access control policies across GCP resources 
  • Enforcer: removes unwanted access to GCP resources 
  • Explain: analyzes who has what access to GCP resources 

Q: How does Forseti help keep your GCP environment more secure? 

Forseti gives us visibility into the GCP infrastructure that we didn’t have before, and we use it to help make sure we have the right controls in place and stay ahead of the game. It keeps us informed about what’s going on in our environment so that we can quickly find risky misconfigurations and fix them right away. These tools allow us to create a workflow that puts the security team in a proactive stance rather than a reactive one. We can inform everyone involved in time, rather than waiting for an incident to happen.

With the Inventory tool, we get ongoing snapshots of our GCP resources, which provide an audit trail of any changes. This visibility allows us to give our developers a lot of freedom, and enables us to investigate any potential incidents.

Scanner helps us detect misconfigurations and security issues. It greatly reduces risk and saves us a ton of time. As soon as we see a violation from Scanner, we ping the team in charge of the affected resource so they can make the necessary fix. This way, security only needs to get involved if the dev team needs help.

Q: How have you put Forseti into practice so far at Spotify? 

We want our security culture to promote operational ownership by the dev team. Our team strives to be a business enabler, rather than a blocker to getting things done. This approach has allowed us to educate engineering and raise their security awareness. We believe it’s been influential in helping the dev teams become more security-conscious.

Using Forseti, we’ve been able to create a notification pipeline that proactively informs us about risky misconfigurations in GCP. This process is a major time saver for us.

Here’s how it works:
  • We run scans on our resources, and if a violation is found, it triggers our notification pipeline. 
  • Once the violation is parsed, we retrieve ownership information about the affected resource. This is like a phonebook that tells us which team is responsible, and then pings them automatically.
  • Engineering acknowledges the notification and then books a fix. 
  • We run inventory the next day to make sure the fix was completed. The security team gets involved only if the dev team is unable to resolve the issue on their own. 
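
The flow above can be sketched in a few lines; the ownership "phonebook" and the notification step below are simplified stand-ins for Spotify's internal systems:

```python
# Resource -> owning team ("phonebook"); contents are invented.
OWNERS = {"projects/demo/buckets/logs": "team-data"}

def scan(resources):
    # Pretend scanner: flag any bucket that is publicly readable.
    return [r for r, meta in resources.items() if meta.get("public")]

def notify(violations, outbox):
    # Look up the owning team for each violation and queue a ping;
    # fall back to the security team if no owner is recorded.
    for resource in violations:
        team = OWNERS.get(resource, "security-team")
        outbox.append((team, resource))

inventory = {"projects/demo/buckets/logs": {"public": True}}
outbox = []
notify(scan(inventory), outbox)
print(outbox)  # [('team-data', 'projects/demo/buckets/logs')]
```

A follow-up inventory run the next day would then confirm the flagged resource is no longer in violation.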

Q: Why take an open source approach? 

The Forseti community is all about teamwork. It allows us to work with companies big and small that, at the end of the day, need to accomplish the same things. With this combined community expertise, we’ve identified the areas where companies make the riskiest mistakes in configuring GCP, and executed on those areas first. We determined what should be in Forseti as a team, rather than as individual companies.

Different organizations often share the same risks, but have unique perspectives. When we collaborate with other organizations, the possibilities multiply and everyone operates more securely. It also allows us to put security processes in place faster than we could individually. Forseti is all about sharing ideas and collaborating, which are the ideals of open source. The benefit is not reinventing the wheel; with Forseti we can divide and conquer: the more we are, the more we can do.

Interested in joining the Forseti security community? Get started here.

Read more about Forseti on the Spotify Labs blog.

Introducing managed SSL for Google App Engine

We’re excited to announce the beta release of managed SSL certificates, at no charge, for applications built on Google App Engine. This service automatically encrypts server-to-client communication, an essential part of safeguarding sensitive information over the web. Manually managing SSL certificates to ensure a secure connection is a time-consuming process, and GCP makes it easy by provisioning SSL automatically at no additional charge. Managed SSL certificates are offered in addition to the HTTPS connections provided on appspot.com domains.

Here at Google, we believe encrypted communications should be used everywhere. For example, in 2014, the Search team announced that the use of HTTPS would positively impact page rankings. Fast forward to 2017 and Google is a Certificate Authority, establishing HTTPS as the default behavior for App Engine, even across custom domains.

Now, when you build apps on App Engine, SSL is on by default, so you no longer need to worry about it or spend time managing it. We’ve made using HTTPS simple: map a domain to your app, prove ownership, and App Engine automatically provisions an SSL certificate and renews it whenever necessary, at no additional cost. Purchasing and generating certificates, dealing with and securing keys, managing your SSL cipher suites and worrying about renewal dates: those are all a thing of the past.
 "Anyone who has ever had to replace an expiring SSL certificate for a production resource knows how stressful and error-prone it can be. That's why we're so excited about managed SSL certificates in App Engine. Not only is it simple to add encryption to our custom domains programmatically, the renewal process is fully automated as well. For our engineers that means less operational risk." 
 James Baldassari, Engineer, mabl

Get started with managed SSL/TLS certificates 

To get started with App Engine managed SSL certificates, simply head to the Cloud Console and add a new domain. Once the domain is mapped and your DNS records are up to date, you’ll see the SSL certificate appear in the domains list. And that’s it: managed certificates are now the default behavior, and no further steps are required!
To switch from using your own SSL certificate on an existing domain, select the desired domain, then click on the "Enable managed security" button. In just minutes, a certificate will be in place and serving client requests.

You can also use the gcloud CLI to make this change:

$ gcloud beta app domain-mappings update DOMAIN --certificate-management 'AUTOMATIC'

Rest assured that your existing certificate will remain in place and communication will continue as securely as before until the new certificate is ready and swapped in.

For more details on the full set of commands, head to the full documentation here.

Domains and SSL Certificates Admin API GA 

We’re also excited to announce the general availability of the App Engine Admin API to manage your custom domains and SSL certificates. The addition of this API enables more automation so that you can easily scale and configure your app according to the needs of your business. Check out the full documentation and API definition.

If you have any questions or concerns, or if something is not working as you’d expect, you can post in the Google App Engine forum, log a public issue, or get in touch on the App Engine slack channel (#app-engine).