
Improving our account management policies to better support customers



Recently, a Google Cloud Platform (GCP) customer blogged about an incident in June in which a project they were running on GCP was suspended. We really appreciated the candid feedback, in which our customer noted several aspects of our account management process that needed to be improved. We also appreciated our customer’s follow-up and recognition of the Google Cloud support team, “who have reached out and assured us these incidents will not repeat.”

Here’s what we are doing to be as good as our word, and provide a more careful, accurate, thoughtful and empathetic account management experience for our GCP customers. These changes are intended to provide peace of mind and a predictable, positive experience for our GCP customers, while continuing to permit appropriate suspension and removal actions for the inevitable bad actors and fraud which are a part of operating a public cloud service.

No Automatic Fraud-Based Account Suspensions for Established Customers with Offline Payment. Established GCP customers who comply with our Acceptable Use Policy (AUP), Terms of Service (TOS), and local laws, and who have an offline billing contract, invoice billing, or an active relationship with our sales team, are not subject to fraud-based automatic account suspension.



Delayed Suspension for Established Online Customers. Online customers with an established payment history, operating in compliance with our TOS, AUP, and local laws, will receive advance notification and a 5-day cure period in the event that fraud or account-compromise activity is detected in their projects.

Other Customers. For all other customers, we will institute a second human review for flagged fraud accounts prior to suspending an account. We’re also modifying who has authority to suspend an account, as well as refreshing our training for the teams that review flagged accounts and determine response actions; re-evaluating the signals, sources, and the tools we use to assess potential fraudulent activity; and increasing the number of options we can use to help new customers quickly and safely grow their usage while building an account history with Google.

In addition to the above, for all customers we are making the following improvements:

24x7 Chat Support. We are rolling out 24x7 chat support for customers who receive account notices, so that customers can always reach us easily. We expect this to be fully rolled out for all customers by September.

Correcting Notices About Our 30-Day Policy. Our customer noted, with appropriate concern, that their suspension email stated “we will delete your account in 3 days.” This language was simply incorrect: our fraud suspension policy provides 30 days before removal. We have corrected the communication language, and we are conducting a full review of our communications and systems to ensure that our messages are accurate and clear.

Updating Our Project Suspension Guidelines. We will review and update our project suspension guidelines to clarify our practices and describe what you should expect from Google.

Improving Customer Contact Points. We will encourage customers to provide us with a verifiable phone number, email, and other contact channels, both at sign-up and at later points in time, so that we can quickly contact you if we detect suspicious activity on your account.

Creating Customer Pre-Verification. We will provide ways for customers to pre-verify their accounts with us if they desire, either at sign-up or at a later point in time.

These suspensions are our responsibility. There are also steps that customers can take to help us protect their accounts, including:
  1. Make sure to monitor emails sent to your payments and billing contacts so you don’t miss important alerts.
  2. Provide a valid phone number where we can reach you in the event of suspicious activity on your account.
  3. Add one or more billing admins to your account.
  4. Provide a secondary payment method in case there are problems charging your primary method.
  5. Contact our sales team to see if you qualify for invoice billing instead of relying on credit cards.
We’re making immediate changes to ensure our policies improve our customers’ experience. Our work here is never done, and we will continue to update and optimize based on your feedback.

We sincerely apologize to all our customers who’ve been concerned or had to go through a service reinstatement. Please keep the feedback coming; we’ll keep working to earn your trust every day.

Introducing commercial Kubernetes applications in GCP Marketplace



Building, deploying and managing applications with Kubernetes comes with its own set of unique challenges. Today, we are excited to be the first major cloud provider to offer production-ready commercial Kubernetes apps right from our marketplace, bringing you simplified deployment, billing, and third-party licensing.

Now you can find the solution you need in Google Cloud Platform Marketplace (formerly Cloud Launcher) and deploy it quickly on Kubernetes clusters running on Google Cloud Platform (GCP), Kubernetes Engine, on-premises, or even other public clouds.

Enterprise-ready containerized applications - We are on a mission to make containers accessible to everyone, especially the enterprise. When we released Kubernetes as open source, one of the first challenges that the industry tackled was management. Our hosted Kubernetes Engine takes care of cluster orchestration and management, but getting apps running on a Kubernetes cluster can still be a manual, time-consuming process. With GCP Marketplace, you can now easily find prepackaged apps and deploy them onto the cluster of your choice.

Simplified deployments - Kubernetes apps are configured to get up and running fast. Enjoy click-to-deploy to Kubernetes Engine, or deploy them to other Kubernetes clusters off-GCP. Now, deploying from Kubernetes Engine is even easier, with a Marketplace window directly in the Kubernetes Engine console.

Production-ready security and reliability - All Kubernetes apps listed on GCP Marketplace are tested and vetted by Google, including vulnerability scanning and partner agreements for maintenance and support. Additionally, we work with open-source Special Interest Groups (SIGs) to create standards for Kubernetes apps, bringing the knowledge of the open-source community to your enterprise.

Supporting hybrid environments - One of the great things about containers is their portability across environments. While Kubernetes Engine makes it easy to click-to-deploy these apps, you can also deploy them in your other Kubernetes clusters—even if they’re on-premises. This lets you use the cloud for development and then move your workloads to your production environment, wherever it may be.

Commercial Kubernetes applications available now

Our commercial Kubernetes apps, developed by third-party partners, support usage-based billing on many parameters (API calls, number of hosts, storage per month), simplifying license usage and giving you more consumption options. Further, the usage charges for your apps are consolidated and billed through GCP, no matter where they are deployed (not including any non-GCP resources they need to run on).


“Cloud deployment and manageability are core to Aerospike's strategy. GCP Marketplace makes it simpler for our customers to buy, deploy and manage Aerospike through Kubernetes Engine with one-click deployment. This provides a seamless experience for customers by allowing them to procure both Aerospike solutions and Kubernetes Engine on a single, unified Google bill and providing them with the flexibility to pay as they go.”
- Bharath Yadla, VP-Product Strategy, EcoSystems, Aerospike

"As an organization focused on supporting enterprises with security for their container-based applications, we are delighted that we can now offer our solutions as commercial Kubernetes application more simply to customers through the GCP Marketplace commercial Kubernetes application option. GCP Marketplace helps us reach GCP customers, and the one-click deployment of our applications to Google Kubernetes Engine makes it easier for enterprises to use our solution. We are also excited about GCP’s commitment to enterprise agility by allowing our solution to be deployed on-premises, letting us reach enterprises where they are today."
- Upesh Patel, VP Business Development, Aqua Security

“Couchbase is excited to see GCP Marketplace continue the legacy of GCP by bringing new technologies to market. We've seen GCP Marketplace as a key part of our strategy in reaching customers, and the new commercial Kubernetes application option differentiates us as innovators for both prospects and customers."
-Matt McDonough, VP of Business Development, Couchbase

"With the support for commercial Kubernetes applications, GCP Marketplace allows us to reach a wider range of customers looking to deploy our graph database both to Google Kubernetes Engine and hybrid environments. We're excited to announce our new offering on GCP Marketplace as a testament to both Neo4j and Google's innovation in integrations to Kubernetes."
- David Allen, Partner Solution Architect, Neo4j

Popular open-source Kubernetes apps available now

In addition to our new commercial offerings, GCP Marketplace already features popular open-source projects that are ready to deploy into Kubernetes. These apps are packaged and maintained by Google Cloud and implement best practices for running on Kubernetes Engine and GCP. Each app includes clustered images and documented upgrade steps, so it’s ready to run in production.

One-stop shopping on GCP Marketplace

As you may have noticed, Google Cloud Launcher has been renamed GCP Marketplace, a more intuitive name for the place to discover the latest partner and open-source solutions. As with Kubernetes apps, we test and vet all solutions available through GCP Marketplace, which include virtual machines, managed services, data sets, APIs, SaaS, and more. In most instances, we also recommend Marketplace solutions for your projects.
With GCP Marketplace, you can verify that a solution will work for your environment with free trials from select partners. You can also combine those free trials with our $300 sign-up credit. Once you’re up and running, GCP Marketplace supports existing relationships between you and your partners with private pricing. Private pricing is currently available for managed services, and support for more solution types will be rolling out in the coming months.

Get started today

We’re excited to bring support for Kubernetes apps to you and our partners, featuring the extensibility of Kubernetes, commercial solutions, usage-based pricing, and discoverability on the newly revamped GCP Marketplace.
If you are a partner and want to learn more about selling your solution on GCP Marketplace, please visit our sign-up page.

Top storage and database sessions to check out at Next 2018

Whatever your particular area of cloud interest, there will be a lot to learn at Google Cloud Next ‘18 (July 24-26 in San Francisco). When it comes to cloud storage and databases, you’ll find useful sessions that can help you better understand your options as you’re building the cloud infrastructure that will work best for your organization.

Here, we’ve chosen five not-to-miss sessions, where you’ll learn tips on migrating data to the cloud, understand types of cloud storage workloads and get a closer look at which database is best for storing and analyzing your company’s data. Wherever you are in your cloud journey, there’s likely a session you can use.

Top cloud storage sessions


First up, our top picks for those of you delving into cloud storage.

From Blobs to Tables, Where to Store Your Data
Speakers: Dave Nettleton, Robert Saxby

What’s the best way to store all the data you’re creating and moving to the cloud? The answer depends on the industry, apps and users you’re supporting. Google Cloud Platform (GCP) offers many options for storing your data. The choices range from Cloud Storage (multi-regional, regional, nearline, coldline) through Persistent Disk to various database services (Cloud Datastore, Cloud SQL, Cloud Bigtable, Cloud Spanner) and data warehousing (BigQuery). In this session, you’ll learn about the products along with common application patterns that use data storage.

Why attend: With much to consider and many options available, this session is a great opportunity to examine which storage option fits your workloads.

Caching Made Easy, with Cloud Memorystore and Redis
Speaker: Gopal Ashok

In-memory database Redis has plenty of developer fans: It’s high-performance and highly available, making it an excellent choice for caching operations. Cloud Memorystore now includes a managed Redis service. In this session, you’ll hear about its new features. You’ll also learn how you can easily migrate applications using Redis to Cloud Memorystore with minimal changes.
Why attend: Are you building an application that needs sub-millisecond response times? GCP now provides a fully managed service for the popular Redis in-memory datastore.
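To give a feel for why such migrations tend to be small, here is a minimal, illustrative sketch (not taken from the session) of application-side caching with the redis-py client; the host IP is a placeholder for your Memorystore instance, and in many cases that connection address is the only thing that changes when moving off self-managed Redis:

import redis  # the standard redis-py client

# Assumption: a Cloud Memorystore for Redis instance reachable from your VPC
# at 10.0.0.3 (placeholder). Application code is otherwise unchanged.
cache = redis.StrictRedis(host="10.0.0.3", port=6379)

cache.set("greeting", "hello from Memorystore", ex=60)  # cache with a 60-second TTL
print(cache.get("greeting"))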

Google Cloud Storage - Best Practices for Storage Classes, Reliability, Performance and Scalability
Speakers: Geoff Noer, Michael Yu

Learn about common Google Cloud Storage workloads, such as content storage and serving, analytics/ML and data protection. Understand how to choose the best storage class, depending on what kind of data you have and what kind of workload you're supporting. You’ll also learn more about Multi-Regional, Regional, Nearline and Coldline storage.
Why attend: You’ll learn about ways to optimize Cloud Storage to the unique requirements of different storage use cases.

Top database sessions


Here are our top picks for database sessions to explore at Next ‘18.

Optimizing Applications, Schemas, and Query Design on Cloud Spanner
Speaker: Robert Kubis

Cloud Spanner was designed specifically for cloud infrastructure and scales easily to allow for efficient cloud growth. In this session, you’ll learn Cloud Spanner best practices, strategies for optimizing applications and workloads, and ways to improve performance and scalability. Through live demos, you’ll see real-time speed-ups of transactions, queries and overall performance. Additionally, this talk explores techniques for monitoring Cloud Spanner to identify performance bottlenecks. Come learn how to cut costs and maximize performance with Cloud Spanner.
Why attend: Cloud Spanner is a powerful product, but many users do not maximize its benefits. In this session, you’ll get an inside look at getting the best performance and efficiency out of this type of cloud database.

Optimizing performance on Cloud SQL for MySQL
Speakers: Stanley Feng, Theodore Tso, Brett Hesterberg

Database performance tuning can be challenging and time-consuming. In this session, you’ll get a look at the performance tuning our team has conducted in the last year to considerably improve Cloud SQL for MySQL. We’ll also highlight useful changes to the Linux kernel, EXT4 filesystem and Google's Persistent Disk storage layer to improve write performance. You'll come away knowing more about MySQL performance tuning, an underused EXT4 feature called “bigalloc” and how to let Cloud SQL handle mundane, yet necessary, tasks so you can focus on developing your next great app.
Why attend: GCP’s fully managed database services put a lot of innovation under the hood so that your database runs as efficiently as possible. Come learn about Google’s secret sauce for optimizing Cloud SQL performance.

Check out the full list of Next sessions, and join your peers at the show by registering here.

Cloud Spanner adds import/export functionality to ease data movement



We launched Cloud Spanner to general availability last year, and many of you shared in our excitement: You explored it, started proof-of-concept trials, and deployed apps. Perhaps most importantly, you gave us feedback along the way. We heard you, and we got to work. Today, we’re happy to announce we’ve launched one of your most commonly requested features: importing and exporting data.

Import/export using Avro

You asked for easier ways to move data. You’ve got it. You can now import and export data easily in the Cloud Spanner Console:
  • Export any Cloud Spanner database into a Google Cloud Storage (GCS) bucket.
  • Import files from a GCS bucket into a new Cloud Spanner database.
These database exports and imports use Apache Avro files, transferred with our recently released Apache Beam-based Cloud Dataflow connector.

Adding imports and exports opens up even more possibilities for your Cloud Spanner data, including:
  • Disaster recovery: Export your database at any time and store it in a GCS location of your choice as a backup, which can be imported into a new Cloud Spanner database to restore data.
  • Testing: Export a database and then import it into Cloud Spanner as a dev/test database to use for integration tests or other experiments.
  • Moving databases: Export a database and import it back into Cloud Spanner in a new/different instance with the console’s simple, push-button functionality.
  • Ingest for analytics: Use database exports to ingest your operational data into other services, such as BigQuery, for analytics. BigQuery can automatically ingest Avro files from a GCS bucket, which makes it easier to run analytics on your operational data (see the sketch after this list).
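To make that last point concrete, here is a minimal sketch of loading exported Avro files into BigQuery with the Python client library; the bucket, dataset, and table names are placeholders, not values from any particular export:

from google.cloud import bigquery

# Assumption: a Cloud Spanner table has already been exported to Avro files in
# a GCS bucket; all resource names below are placeholders.
client = bigquery.Client()

job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.AVRO

load_job = client.load_table_from_uri(
    "gs://my-spanner-exports/Singers/*.avro",  # Avro files produced by the export
    "my_dataset.singers",                      # destination BigQuery table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish
print(client.get_table("my_dataset.singers").num_rows, "rows loaded")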
Ready to try it out? See our documentation on how to import and export data. Learn more about Cloud Spanner here, and get started with a free trial. For technical support and sales, please contact us.

Cloud Spanner makes application development more efficient, simplifies database administration and management, and provides the benefits of both relational and scale-out, non-relational databases. We’re excited to see how it will continue to help you ship better apps, faster.

Our Los Angeles cloud region is open for business



Hey, LA — the day has arrived! The Los Angeles Google Cloud Platform region is officially open for business. You can now store data and build highly available, performant applications in Southern California.

Los Angeles Mayor Eric Garcetti said it best: “Los Angeles is a global hub for fashion, music, entertainment, aerospace, and more—and technology is essential to strengthening our status as a center of invention and creativity. We are excited that Google Cloud has chosen Los Angeles to provide infrastructure and technology solutions to our businesses and entrepreneurs.”

The LA cloud region, us-west2, is our seventeenth overall and our fifth in the United States.

Hosting applications in the new region can significantly improve latency for end users in Southern California, and by up to 80% across Northern California and the Southwest, compared to hosting them in the previously closest region, Oregon. You can visit www.gcping.com to see how fast the LA region is for you.

Services


The LA region has everything you need to build the next great application.

Of note, the LA region debuted with one of our newest products: Cloud Filestore (beta), our managed file storage service for applications that require a filesystem interface and a shared filesystem for data.

The region also has three zones, allowing you to distribute apps and storage across multiple zones to protect against service disruptions. You can also access our multi-regional services (such as BigQuery) in the United States and all the other GCP services via our Google Network, and combine any of the services you deploy in LA with other GCP services around the world. Please visit our Service Specific Terms for detailed information on our data storage capabilities.

Google Cloud Network

Google Cloud’s global networking infrastructure is the largest cloud network as measured by number of points of presence. This private network provides a high-bandwidth, highly reliable, low-latency link to each region across the world. With it, you can reach the LA region as easily as any region. In addition, the global Google Cloud Load Balancing makes it easy to deploy truly global applications.

Also, if you’d like to connect to the Los Angeles region privately, we offer Dedicated Interconnect at two locations: Equinix LA1 and CoreSite LA1.

LA region celebration

We celebrated the launch of the LA cloud region the best way we know how: with our customers. At the celebration, we announced new services to help content creators take advantage of the cloud: Filestore, Transfer Appliance and of course, the new region itself, in the heart of media and entertainment country. The region’s proximity to content creators is critical for cloud-based visual effects and animation workloads. With proximity comes low latency, which lets you treat the cloud as if it were part of your on-premises infrastructure—or even migrate your entire studio to the cloud.
Paul-Henri Ferrand, President of Global Customer Operations, officially announces the opening of our Los Angeles cloud region.


What customers are saying


“Google Cloud makes the City of Los Angeles run more smoothly and efficiently to better serve Angelenos city-wide. We are very excited to have a cloud region of our own that enables businesses, big or small, to leverage the latest cloud technology and foster innovation.”
- Ted Ross, General Manager and Chief Information Officer for City of LA Information Technology Agency, City of LA

“Using Google Cloud for visual effects rendering enables our team to be fast, flexible and to work on multiple large projects simultaneously without fear of resource starvation. Cloud is at the heart of our IT strategy and Google provides us with the rendering power to create Oscar-winning graphics in post-production work.”
- Steve MacPherson, Chief Technology Officer, Framestore

“A lot of our short form projects pop up unexpectedly, so having extra capacity in region can help us quickly capitalize on these opportunities. The extra speed the LA region gives us will help us free up our artists to do more creative work. We’re also expanding internationally, and hiring more artists abroad, and we’ve found that Google Cloud has the best combination of global reach, high performance and cost to help us achieve our ambitions.”
- Tom Taylor, Head of Engineering, The Mill

What SoCal partners are saying


Our partners are available to help design and support your deployment, migration and maintenance needs.

“Cloud and data are the new equalizers, transforming the way organizations are built, work and create value. Our premier partnership with Google Cloud Platform enables us to help our clients digitally transform through efforts like app modernization, data analytics, ML and AI. Google’s new LA cloud region will enhance the deliverability of these solutions and help us better service the LA and Orange County markets - a destination where Neudesic has chosen to place its corporate home.”
- Tim Marshall, CTO and Co-Founder, Neudesic

“Enterprises everywhere are on a journey to harness the power of cloud to accelerate business objectives, implement disruptive features, and drive down costs. The Taos and Google Cloud partnership helps companies innovate and scale, and we are excited for the new Google Cloud LA region. The data center will bring a whole new level of uptime and service to our Southern California team and clients.”
- Hamilton Yu, President and COO, Taos

“As a launch partner for Google Cloud and multi-year recipient of Google’s Partner of the Year award, we are thrilled to have Google’s new cloud region in Los Angeles, our home base and where we have a strong customer footprint. SADA Systems has a track record of delivering industry expertise and innovative technical services to customers nationwide. We are excited to leverage the scale and power of Google Cloud along with SADA’s expertise for our clients in the Los Angeles area to continue their cloud transformation journey.”
- Tony Safoian, CEO & President, SADA Systems

Getting started


For additional details on the LA region, please visit our LA region page where you’ll get access to free resources, whitepapers, the "Cloud On-Air" on-demand video series and more. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize where we build next.

Google Home meets .NET containers using Dialogflow



I use my Google Home all the time to check the weather before leaving home, set alarms, and listen to music, but I never considered writing an app for it. What does it take to write an app for the Google Home assistant? And can we make it smarter by leveraging Google Cloud? Those were the questions my colleague Chris Bacon and I were thinking about when we decided to build a demo for a conference talk.

My initial instinct was that building an app for Google Home would be quite complicated. After all, we’re talking about real humans talking to a device that triggers some service running in the cloud. There are many details to figure out and many things that could potentially go wrong.

Turns out, it is much easier than I thought, and a lot of fun as well. In this post, I want to give you a glimpse of what we built. If you want to set up and run the demo yourself, instructions and code are hosted here on GitHub.

Overview

Our main goal with the app was to showcase Google Cloud .NET libraries in a fun and engaging way while highlighting Google’s unique strengths. After some brainstorming, we decided to build a voice-driven app using Dialogflow where we asked some random questions and let Google Home answer by harnessing the power of the cloud.

In our app, you can ask Google Home to search for images of a city. Once it finds the images, they are displayed on a web frontend. You can select an image and ask more questions such as “Can you describe the image?” or “Does the image contain landmarks?” You can also ask questions about global temperatures such as “What was the hottest temperature in France in 2015?” or about Hacker News, for example “What was the top Hacker News story on May 1, 2018?” A picture is worth a thousand words. Here’s how the app ended up looking at a high level.

The voice command is first captured by the Google Home device and passed to Google Assistant. We use Dialogflow to handle inputs to Google Assistant. Some inputs are handled directly in Dialogflow, and some are passed to a pre-defined external webhook (in this case, an HTTPS endpoint running on Google Cloud).

I should also mention that the app works anywhere Google Assistant is supported, as long as you’re logged in with the same Google account you used to create the Dialogflow app. If you don’t have a Google Home, you can simply use your Google Assistant-enabled phone to interact with the app.

Let’s take a look at the implementation in more detail.

Dialogflow

Dialogflow is a developer platform for building natural and rich conversational experiences. When we started thinking about this implementation, we quickly realized that Dialogflow would be a good starting point for the voice-driven part of the app. There are editions of Dialogflow (standard and enterprise) with different limits and SLAs. For our demo, the standard edition was more than enough.

You start by creating an agent for your app in the Dialogflow console. Within the agent, you create intents. An intent represents a mapping between what a user says and what action should be taken by your app. You don’t have to list every phrase that can trigger a certain intent. Instead, you provide some training phrases, and Dialogflow uses machine learning to learn what to expect. It can also pick up entities from those phrases, such as a city name or a date. If the app requires an entity, Dialogflow makes sure that the user provides it. All these small features greatly simplify the work of creating a conversational app.

Some intents can be handled directly in Dialogflow; simply provide the text response for Dialogflow to say. In our app, you can say “Say hi to everyone,” which Dialogflow handles directly with a simple response.

You can also enable an external endpoint to handle intents via a webhook. When an intent is triggered, Dialogflow passes the request to the defined endpoint. The only requirement is that the endpoint supports HTTPS. This is where the power of cloud comes in. In our app, we hosted an endpoint on Google Cloud to handle more complicated questions about images or global temperatures.
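The demo’s webhook itself is written in C# and hosted on App Engine (described in the next section), but the webhook contract is just JSON over HTTPS, so it can be implemented in any language. As a rough, illustrative sketch only, here is what a minimal fulfillment endpoint could look like in Python with Flask, assuming Dialogflow’s v2 request and response format; the “city” parameter name is hypothetical, not necessarily what the demo uses:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumption: Dialogflow is configured to call this HTTPS endpoint for
# webhook-enabled intents; the "city" parameter name is illustrative.
@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"]["parameters"]

    if intent == "vision.search":
        city = params.get("city", "somewhere")
        reply = "Searching for images of {}.".format(city)
    else:
        reply = "Sorry, I can't handle that yet."

    # Dialogflow speaks this text back to the user through Google Assistant.
    return jsonify({"fulfillmentText": reply})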

ASP.NET Core on App Engine (Flex)

For the endpoint, we decided to host a containerized ASP.NET Core web app on Google Cloud Platform (GCP). Since it’s a container running on Linux (yes, .NET runs on Linux!), we could have hosted on Google Kubernetes Engine or App Engine. We decided to go with App Engine, as it provides an HTTPS endpoint by default with minimal hassle. It also gives us versioning, so we can host multiple versions of our endpoint to do A/B testing or easy rollbacks.

The web app serves two purposes. First, it’s the visual frontend to show images or queries (handled by HomeController). Second, it handles webhook calls from Dialogflow for more complicated queries about images or global temperatures (handled by ConversationController).

ConversationController delegates to DialogflowApp to handle the request. DialogflowApp picks up the session id of the request and either creates a new Conversation or finds the existing one. Then, Conversation picks up the intent name and matches that to a subclass of BaseHandler using IntentAttribute at the beginning of handler classes.

Searching for images

When the user says “Search for images of Paris”, that triggers the webhook-enabled “vision.search” intent in Dialogflow. The intent picks up “Paris” as an entity and passes it to the webhook as the search term. The call is then routed to VisionSearchHandler running on App Engine. This class uses the Google Custom Search API to search for images using the search term. In the end, you see a list of images in the web frontend of the app.

Vision API

Once you have a list of images, you can say “Select first picture” to select one. Now it gets interesting. For example, saying something like “Describe the image” triggers VisionDescribeHandler, which makes a call to the Vision API using our Vision API .NET library and gets labels back. We pass these labels back to Dialogflow, which in turn passes them to Google Home to say out loud. You can also say “Does the image contain landmarks?”, which uses the Vision API’s landmark detection feature (handled by VisionLandmarksHandler). Or you can say “Is the image safe?” to make sure the image does not contain any unsafe content (handled by VisionSafeHandler).
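The same features are exposed through the Vision clients in other languages, too. Purely as an illustration (the demo uses the .NET library, and the image URL below is a placeholder), here is what those three calls look like with the Python Vision client:

from google.cloud import vision

# Assumption: the image URI would come from the earlier image search; it is a
# placeholder here. (Older library versions expose these types under vision.types.)
client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="https://example.com/paris.jpg"))

labels = client.label_detection(image=image).label_annotations
print("Labels:", ", ".join(label.description for label in labels))

landmarks = client.landmark_detection(image=image).landmark_annotations
print("Landmarks:", ", ".join(lm.description for lm in landmarks))

safe = client.safe_search_detection(image=image).safe_search_annotation
print("Adult likelihood:", safe.adult)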

BigQuery

BigQuery is Google's serverless data warehousing solution. It has many public datasets available for anyone to search and analyze. We decided to use two of those: Hacker News Data and NOAA Global Weather Data.

For example, if you were to say “What was the top Hacker News story on May 1, 2018?”, the request would be picked up by the “bigquery.hackernews” intent and eventually routed to BigQueryHackerNewsHandler with the date entity. This class uses the BigQuery .NET library to run a query against the Hacker News data and picks up the top 10 Hacker News articles from that day.
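To give a feel for the kind of query involved, here is a sketch in Python against the public Hacker News dataset; the demo’s actual C# query, and the exact columns it selects, may differ:

from google.cloud import bigquery

# Assumption: querying the public Hacker News dataset directly; table and
# column names reflect the public dataset, not necessarily the demo's query.
client = bigquery.Client()
query = """
    SELECT title, score
    FROM `bigquery-public-data.hacker_news.full`
    WHERE type = 'story'
      AND DATE(timestamp) = '2018-05-01'
    ORDER BY score DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.title, row.score)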

Similarly, if you say “What was the hottest temperature in France in 2015?”, this triggers BigQueryNoaaextremeHandler to run a query against the global weather data and display the top 10 temperatures and locations for that country and year in the web frontend.

All this is done by scanning gigabytes of data in a few seconds and made possible by BigQuery’s massively parallel infrastructure.

Logging and monitoring

This was all fun, but we also wanted to make sure we could maintain our app going forward. Stackdriver is Google Cloud’s logging, monitoring, tracing, and debugging tool. Enabling Stackdriver entailed a single API call (UseGoogleDiagnostics in Program) and a slight modification to the Dockerfile. All of a sudden, we got application logs, tracing for all HTTP calls, monitoring, and, last but not least, the ability to do live production debugging.

With Stackdriver Debugger, we can point to our code on GitHub and then take snapshots from anywhere in the code. Currently supported languages are Java, Python, Node.js, Go and C# (alpha). A snapshot can be captured on live production code without stopping or delaying the app. The snapshot can also be conditional, and contains local variables and stack traces, which are invaluable for production debugging.

Conclusion

In software development, something that should be easy usually ends up being much more complicated when you get into details. In this case, it was quite the opposite. Dialogflow made the voice recognition and routing of requests in our Google Home app very simple and straightforward. We deployed a containerized ASP.NET Core app on App Engine with a single command, and our Google Cloud .NET libraries for Vision API and BigQuery were straightforward and consistent to use.

In the end, I had a lot of fun writing this demo with Chris! If you want to try this out yourself, the code and instructions are on GitHub.

Introducing new Apigee capabilities to deliver business impact with APIs



Whether it's delivering new experiences through mobile apps, building a platform to power a partner ecosystem, or modernizing IT systems, virtually every modern business uses APIs (application programming interfaces).

Google Cloud’s Apigee API platform helps enterprises adapt by giving them control and visibility into the APIs that connect applications and data across the enterprise and across clouds. It enables organizations to deliver connected experiences, create operational efficiencies, and unlock the power of their data.

As enterprise API programs gain traction, organizations are looking to ensure that they can seamlessly connect data and applications, across multi-cloud and hybrid environments, with secure, manageable and monetizable APIs. They also need to empower developers to quickly build and deliver API products and applications that give customers, partners, and employees secure, seamless experiences.

We are making several announcements today to help enterprises do just that. Thanks to a new partnership with Informatica, a leading integration-platform-as-a-service (iPaaS) provider, we’re making it easier to connect and orchestrate data services and applications across cloud and on-premises environments using Informatica Integration Cloud for Apigee. We’ve also made it easier for API developers to access Google Cloud services via the Apigee Edge platform.

Discover and invoke business integration processes with Apigee

We believe that for an enterprise to accelerate digital transformation, it needs API developers to focus on business-impacting programs rather than low-level tasks such as coding, rebuilding point-to-point integrations, and managing secrets and keys.

From the Apigee Edge user interface, developers can now use policies to discover and invoke business integration processes that are defined in Informatica’s Integration Cloud.

Using this feature, an API developer can add a callout policy inside an API proxy that invokes the required Informatica business integration process. This is especially useful when the business integration process needs to be invoked before the request gets routed to the configured backend target.

To use this feature, API developers:
  • Log in to Apigee Edge user interface with their credentials
  • Create a new API proxy, configure backend target, add policies
  • Add a callout policy to select the appropriate business integration process
  • Save and deploy the API proxy

Access Google Cloud services from the Apigee Edge user interface

API developers want to easily access and connect with Google Cloud services like Cloud Firestore, Cloud Pub/Sub, Cloud Storage, and Cloud Spanner. In each case, there are a few steps to perform to deal with security, data formats, request/response transformation, and even wire protocols for those systems.

Apigee Edge includes a new feature that simplifies interacting with these services and enables connectivity to them through a first-class policy interface that an API developer can simply pick from the policy palette and use. Once configured, these can be reused across all API proxies.

We’re working to expand this feature to cover more Google Cloud services. Simultaneously, we’re working with Informatica to include connections to other software-as-a-service (SaaS) applications and legacy services like hosted databases.

Publish business integration processes as managed APIs

Integration architects, working to connect data and applications across the enterprise, play an important role in packaging and publishing business integration processes as great API products. Working with Informatica, we’ve made this possible within Informatica’s Integration Cloud.

Integration architects who use Informatica's Integration Cloud for Apigee can now author composite services using business integration processes to orchestrate data services and applications, and publish them directly to Apigee Edge as managed APIs. This pattern is useful when the final destination of the API call is an Informatica business integration process.

To use this feature, integration architects need to execute the following steps:
  • Log in to their Informatica Integration Cloud user interface
  • Create a new business integration process or modify an existing one
  • Create a new service of type “Apigee,” select the options (policies) presented in the wizard, and publish the process as an API proxy
  • Apply additional policies to the generated API proxy by logging in to the Apigee Edge user interface.
API documentation can be generated and published on a developer portal, and the API endpoint can be shared with app developers and partners.

APIs are an increasingly central part of organizations’ digital strategy. By working with Informatica, we hope to make APIs even more powerful and pervasive. Click here for more on our partnership with Informatica.

Verifying PostgreSQL backups made easier with new open-source tool



When was the last time you verified a database backup? If that question causes you to break into a cold sweat, rest assured you’re not alone.

Verifying backups should be a common practice, but it often isn’t. This can be an issue if there’s a disaster or—as is more likely at most companies—if someone makes a mistake when deploying database changes. One industry survey indicates that data loss is one of the biggest risks when making database changes.

PostgreSQL Page Verification Tool

At Google Cloud Platform (GCP), we recently wrote a tool to fight data loss and help detect data corruption early in the change process. We made it open source, because data corruption can happen to anybody, and we’re committed to making code available to ensure secure, reliable backups. If you use Google Cloud SQL for PostgreSQL, then you’re in luck—we’re already running the PostgreSQL Page Verification Tool on your behalf. It’s also available now as open source code.

This new PostgreSQL Page Verification tool is a command-line tool that you can run against a Postgres database. Since PostgreSQL version 9.3, it’s been possible to enable checksums on data pages so that data corruption doesn’t go unnoticed. However, with the release of this utility, you can now verify all data files, online or offline. The Page Verification tool can calculate and verify checksums for each data page.

How the Page Verification tool works

To use the PostgreSQL Page Verification tool, you must enable checksums during initialization of a new PostgreSQL database cluster. You can’t go back in and do it after the fact. Once checksums are turned on, the Page Verification tool computes its own checksum and compares it to the Postgres checksum to confirm that they are identical. If the checksum does not match, the tool identifies which data page is at fault and causing the corruption.
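For example, on a self-managed cluster this typically means running initdb with the --data-checksums (or -k) flag when the cluster is first created; there is no supported way in these PostgreSQL versions to switch checksums on for an existing cluster.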

The Page Verification Tool can be run against a database that’s online or offline. It verifies checksums on PostgreSQL data pages without having to load each page into a shared buffer cache, and supports subsequent segments for tables larger than 1GB.

The tool skips Free Space Map, Visibility Map and pg_internal.init files, since they can be regenerated. While the tool can run against a database continuously, it does have a performance overhead associated with it, so we advise incorporating the tool into your backup process and running it on a separate server.

How to start using the PostgreSQL Page Verification tool

The Page Verification tool is integrated into Google Cloud SQL, so it runs automatically. We’re using the tool at scale to validate our customers’ backups. We do the verification process on internal instances of Cloud SQL to make sure your database doesn’t take a performance hit.

The value of the PostgreSQL Page Verification tool comes from detecting data corruption early, minimizing the data loss that corruption can cause. Organizations that run the tool and get a successful verification can be confident they have a usable backup if disaster strikes.

At Google, when we make a database better, we make it better for everyone, so the PostgreSQL Page Verification tool is available to you via open source. We encourage Postgres users to download the tool at Google Open Source or GitHub. The best detection is early detection, not when you need to restore a backup.

7 best practices for building containers



Kubernetes Engine is a great place to run your workloads at scale. But before being able to use Kubernetes, you need to containerize your applications. You can run most applications in a Docker container without too much hassle. However, effectively running those containers in production and streamlining the build process is another story. There are a number of things to watch out for that will make your security and operations teams happier. This post provides tips and best practices to help you effectively build containers.

1. Package a single application per container

Get more details

A container works best when a single application runs inside it. This application should have a single parent process. For example, do not run PHP and MySQL in the same container: it’s harder to debug, Linux signals will not be properly handled, you can’t horizontally scale the PHP containers, and so on. Keeping to one application per container lets you tie the application’s lifecycle to that of the container.
The container on the left follows the best practice. The container on the right does not.


2. Properly handle PID 1, signal handling, and zombie processes

Get more details

Kubernetes and Docker send Linux signals to your application inside the container to stop it. They send those signals to the process with the process identifier (PID) 1. If you want your application to stop gracefully when needed, you need to properly handle those signals.
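As a minimal, generic illustration (not tied to any particular framework), a containerized Python service could catch SIGTERM like this so it can finish in-flight work before exiting:

import signal
import sys
import time

def handle_sigterm(signum, frame):
    # Finish or hand off any in-flight work here, then exit cleanly.
    print("Received SIGTERM, shutting down gracefully")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    # Placeholder for the application's real work loop.
    time.sleep(1)

For this to matter, the process must actually receive the signal: run it as PID 1 by using the exec form of CMD or ENTRYPOINT in your Dockerfile, or put a lightweight init such as tini in front of it so that zombie processes are also reaped.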

Google Developer Advocate Sandeep Dinesh’s article, “Kubernetes best practices: terminating with grace,” explains the whole Kubernetes termination lifecycle.

3. Optimize for the Docker build cache

Get more details

Docker can cache layers of your images to accelerate later builds. This is a very useful feature, but it introduces some behaviors that you need to take into account when writing your Dockerfiles. For example, you should add the source code of your application as late as possible in your Dockerfile so that the base image and your application’s dependencies get cached and aren’t rebuilt on every build.

Take this Dockerfile as an example:
FROM python:3.5
COPY my_code/ /src
RUN pip install my_requirements
You should swap the last two lines:
FROM python:3.5
RUN pip install my_requirements
COPY my_code/ /src
In the new version, the result of the pip command will be cached and will not be rerun each time the source code changes.

4. Remove unnecessary tools

Get more details

Reducing the attack surface of your host system is always a good idea, and it’s much easier to do with containers than with traditional systems. Remove everything that the application doesn’t need from your container. Or better yet, include just your application in a distroless or scratch image. You should also, if possible, make the filesystem of the container read-only. This should get you some excellent feedback from your security team during your performance review.

5. Build the smallest image possible

Get more details

Who likes to download hundreds of megabytes of useless data? Aim to have the smallest images possible. This decreases download times, cold start times, and disk usage. You can use several strategies to achieve that: start with a minimal base image, leverage common layers between images and make use of Docker’s multi-stage build feature.
The Docker multi-stage build process.

Google Developer Advocate Sandeep Dinesh’s article, “Kubernetes best practices: How and why to build small container images,” covers this topic in depth.

6. Properly tag your images

Get more details

Tags are how users choose which version of your image they want to use. There are two main ways to tag your images: Semantic Versioning, or using the Git commit hash of your application. Whichever you choose, document it and clearly set the expectations that users of the image should have. Be careful: while users expect some tags, like the “latest” tag, to move from one image to another, they expect other tags to be immutable, even if they are not technically so. For example, once you have tagged a specific version of your image with something like “1.2.3”, you should never move this tag.

7. Carefully consider whether to use a public image

Get more details

Using public images can be a great way to start working with a particular piece of software. However, using them in production can come with a set of challenges, especially in a high-constraint environment. You might need to control what’s inside them, or you might not want to depend on an external repository, for example. On the other hand, building your own images for every piece of software you use is not trivial, particularly because you need to keep up with the security updates of the upstream software. Carefully weigh the pros and cons of each for your particular use-case, and make a conscious decision.

Next steps

You can read more about those best practices on Best Practices for Building Containers, and learn more about our Kubernetes Best Practices. You can also try out our Quickstarts for Kubernetes Engine and Container Builder.

Predict your future costs with Google Cloud Billing cost forecast



With every new feature we introduce to Google Cloud Billing, we strive to provide your business with greater flexibility, control, and clarity so that you can better align your strategic priorities with your cloud usage. In order to do so, it’s important to be able to answer key questions about your cloud costs, such as:
  • “How is my current month's Google Cloud Platform (GCP) spending trending?”
  • “How much am I forecasted to spend this month based on historical trends?”
  • “Which GCP product or project is forecasted to cost me the most this month?”
Today, we are excited to announce the availability of a new cost forecast feature for Google Cloud Billing. This feature makes it easier to see at a glance how your costs are trending and how much you are projected to spend. You can now forecast your end-of-month costs for whatever bucket of spend is important to you, from your entire billing account down to a single SKU in a single project.

View your current and forecasted costs


Get started

Cost forecast for Google Cloud Billing is now available to all accounts. Get started by navigating to your account’s billing page in the GCP console and opening the reports tab in the left-hand navigation bar.

You can learn more about the cost forecast feature in the billing reports documentation. Also, if you’re attending Google Cloud Next ‘18, check out our session on Monitoring and Forecasting Your GCP Costs.
