
Now shipping: ultramem machine types with up to 4TB of RAM

Today we are announcing the general availability of Google Compute Engine “ultramem” memory-optimized machine types. You can provision ultramem VMs with up to 160 vCPUs and nearly 4TB of memory, the most vCPUs you can provision on-demand in any public cloud. These ultramem machine types are great for running memory-intensive production workloads such as SAP HANA, while leveraging the performance and flexibility of Google Cloud Platform (GCP).

The ultramem machine types offer the most resources per VM of any Compute Engine machine type, while supporting Compute Engine’s innovative differentiators, described in the sections below.

SAP-certified for OLAP and OLTP workloads

Since we announced our partnership with SAP in early 2017, we’ve rapidly expanded our support for SAP HANA with new memory-intensive Compute Engine machine types. We’ve also worked closely with SAP to test and certify these machine types to bring you validated solutions for your mission-critical workloads. Our supported VM sizes for SAP HANA now meet the demands of a broad range of Google Cloud Platform customers. Over the last year, the size of our certified instances grew by more than 10X for both scale-up and scale-out deployments. With up to 4TB of memory and 160 vCPUs, ultramem machine types are the largest SAP-certified instances on GCP for your OLAP and OLTP workloads.
Maximum memory per node and per cluster for SAP HANA on GCP, over time

We also offer other capabilities to manage your HANA environment on GCP including automated deployments, and Stackdriver monitoring. Click here for a closer look at the SAP HANA ecosystem on GCP.

Up to 70% discount for committed use

We are also excited to share that GCP now offers deeper committed use discounts of up to 70% for memory-optimized machine types, helping you improve your total cost of ownership (TCO) for sustained, predictable usage. This allows you to control costs through a variety of usage models: on-demand usage to start testing machine types, committed use discounts when you are ready for production deployments, and sustained use discounts for mature, predictable usage. For more details on committed use discounts for these machine types check our docs, or use the pricing calculator to assess your savings on GCP.

GCP customers have been doing exciting things with ultramem VMs

GCP customers have been using ultramem VMs for a variety of memory-intensive workloads including in-memory databases, HPC applications, and analytical workloads.

Colgate has been collaborating with SAP and Google Cloud as an early user of ultramem VMs for S/4 HANA.

“As part of our partnership with SAP and Google Cloud, we have been an early tester of Google Cloud's 4TB instances for SAP solution workloads. The machines have performed well, and the results have been positive. We are excited to continue our collaboration with SAP and Google Cloud to jointly create market changing innovations based upon SAP Cloud Platform running on GCP.”
- Javier Llinas, IT Director, Colgate

Getting started

These ultramem machine types are available in us-central1, us-east1, and europe-west1, with more global regions planned soon. Stay up to date on additional regions by visiting our available regions and zones page.

It’s easy to configure and provision n1-ultramem machine types programmatically, as well as via the console. To learn more about running your SAP HANA in-memory database on GCP with ultramem machine types, visit our SAP page, and go to the GCP Console to get started.
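As a quick sketch of the programmatic path, provisioning an ultramem VM with the gcloud CLI might look like this (the instance name and zone are placeholders; check the docs for the zones where ultramem machine types are offered):

```shell
# Create a 160-vCPU, ~3.8TB memory-optimized instance.
gcloud compute instances create my-hana-instance \
    --machine-type=n1-ultramem-160 \
    --zone=us-central1-a
```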

Introducing new Apigee capabilities to deliver business impact with APIs

Whether it's delivering new experiences through mobile apps, building a platform to power a partner ecosystem, or modernizing IT systems, virtually every modern business uses APIs (application programming interfaces).

Google Cloud’s Apigee API platform helps enterprises adapt by giving them control and visibility into the APIs that connect applications and data across the enterprise and across clouds. It enables organizations to deliver connected experiences, create operational efficiencies, and unlock the power of their data.

As enterprise API programs gain traction, organizations are looking to ensure that they can seamlessly connect data and applications, across multi-cloud and hybrid environments, with secure, manageable and monetizable APIs. They also need to empower developers to quickly build and deliver API products and applications that give customers, partners, and employees secure, seamless experiences.

We are making several announcements today to help enterprises do just that. Thanks to a new partnership with Informatica, a leading integration-platform-as-a-service (iPaaS) provider, we’re making it easier to connect and orchestrate data services and applications, across cloud and on-premises environments, using Informatica Integration Cloud for Apigee. We’ve also made it easier for API developers to access Google Cloud services via the Apigee Edge platform.

Discover and invoke business integration processes with Apigee

We believe that for an enterprise to accelerate digital transformation, it needs API developers to focus on business-impacting programs rather than low-level tasks such as coding, rebuilding point-to-point integrations, and managing secrets and keys.

From the Apigee Edge user interface, developers can now use policies to discover and invoke business integration processes that are defined in Informatica’s Integration Cloud.

Using this feature, an API developer can add a callout policy inside an API proxy that invokes the required Informatica business integration process. This is especially useful when the business integration process needs to be invoked before the request gets routed to the configured backend target.

To use this feature, API developers:
  • Log in to Apigee Edge user interface with their credentials
  • Create a new API proxy, configure backend target, add policies
  • Add a callout policy to select the appropriate business integration process
  • Save and deploy the API proxy
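To make the callout step concrete, a ServiceCallout-style policy attached to the proxy might look roughly like the snippet below. The policy name and target URL are illustrative placeholders, not a real Informatica endpoint:

```xml
<ServiceCallout name="Invoke-Integration-Process">
  <!-- Forward the incoming request to the business integration process
       before the proxy routes it to the configured backend target. -->
  <Request variable="integrationRequest"/>
  <Response>integrationResponse</Response>
  <HTTPTargetConnection>
    <URL>https://example-informatica-endpoint/process/orders</URL>
  </HTTPTargetConnection>
</ServiceCallout>
```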

Access Google Cloud services from the Apigee Edge user interface

API developers want to easily access and connect with Google Cloud services like Cloud Firestore, Cloud Pub/Sub, Cloud Storage, and Cloud Spanner. In each case, there are a few steps to perform to deal with security, data formats, request/response transformation, and even wire protocols for those systems.

Apigee Edge includes a new feature that simplifies interacting with these services and enables connectivity to them through a first-class policy interface that an API developer can simply pick from the policy palette and use. Once configured, these can be reused across all API proxies.

We’re working to expand this feature to cover more Google Cloud services. Simultaneously, we’re working with Informatica to include connections to other software-as-a-service (SaaS) applications and legacy services like hosted databases.

Publish business integration processes as managed APIs

Integration architects, working to connect data and applications across the enterprise, play an important role in packaging and publishing business integration processes as great API products. Working with Informatica, we’ve made this possible within Informatica’s Integration Cloud.

Integration architects that use Informatica's Integration Cloud for Apigee can now author composite services using business integration processes to orchestrate data services and applications, and directly publish them as managed APIs to Apigee Edge. This pattern is useful when the final destination of the API call is an Informatica business integration process.

To use this feature, integration architects need to execute the following steps:
  • Log in to their Informatica Integration Cloud user interface
  • Create a new business integration process or modify an existing one
  • Create a new service of type “Apigee”, select the policy options presented in the wizard, and publish the process as an API proxy
  • Apply additional policies to the generated API proxy by logging in to the Apigee Edge user interface.

API documentation can be generated and published on a developer portal, and the API endpoint can be shared with app developers and partners. APIs are an increasingly central part of organizations’ digital strategy. By working with Informatica, we hope to make APIs even more powerful and pervasive. Click here for more on our partnership with Informatica.

Verifying PostgreSQL backups made easier with new open-source tool

When was the last time you verified a database backup? If that question causes you to break into a cold sweat, rest assured you’re not alone.

Verifying backups should be a common practice, but it often isn’t. This can be an issue if there’s a disaster or—as is more likely at most companies—if someone makes a mistake when deploying database changes. One industry survey indicates that data loss is one of the biggest risks when making database changes.

PostgreSQL Page Verification Tool

At Google Cloud Platform (GCP), we recently wrote a tool to fight data loss and help detect data corruption early in the change process. We made it open source, because data corruption can happen to anybody, and we’re committed to making code available to ensure secure, reliable backups. If you use Google Cloud SQL for PostgreSQL, then you’re in luck—we’re already running the PostgreSQL Page Verification Tool on your behalf. It’s also available now as open source code.

This new PostgreSQL Page Verification tool is a command-line tool that you can execute against a Postgres database. Since PostgreSQL version 9.3, it’s been possible to enable checksums on data pages so that data corruption doesn’t go unnoticed. With the release of this utility, you can now verify all data files, online or offline. The Page Verification Tool can calculate and verify checksums for each data page.

How the Page Verification tool works

To use the PostgreSQL Page Verification tool, you must enable checksums during initialization of a new PostgreSQL database cluster. You can’t go back in and do it after the fact. Once checksums are turned on, the Page Verification tool computes its own checksum and compares it to the Postgres checksum to confirm that they are identical. If the checksum does not match, the tool identifies which data page is at fault and causing the corruption.
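To illustrate the compute-and-compare idea in miniature (this is a sketch, not the actual algorithm: PostgreSQL uses an FNV-1a-based page checksum, while this example uses CRC32, and the checksum offset here is illustrative):

```python
import zlib

PAGE_SIZE = 8192        # PostgreSQL's default block size
CHECKSUM_OFFSET = 8     # illustrative location of the stored checksum
CHECKSUM_LEN = 4        # illustrative width (Postgres actually stores 16 bits)

def compute_checksum(page: bytes) -> int:
    """Checksum the page with its own checksum field zeroed out,
    so the stored value doesn't feed back into the computation."""
    masked = (page[:CHECKSUM_OFFSET]
              + b"\x00" * CHECKSUM_LEN
              + page[CHECKSUM_OFFSET + CHECKSUM_LEN:])
    return zlib.crc32(masked) & 0xFFFFFFFF

def verify_page(page: bytes) -> bool:
    """Recompute the checksum and compare it with the one stored in the page."""
    stored = int.from_bytes(
        page[CHECKSUM_OFFSET:CHECKSUM_OFFSET + CHECKSUM_LEN], "little")
    return compute_checksum(page) == stored
```

A mismatch pinpoints the corrupted page, which is exactly the early signal the tool is designed to surface.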

The Page Verification Tool can be run against a database that’s online or offline. It verifies checksums on PostgreSQL data pages without having to load each page into a shared buffer cache, and supports subsequent segments for tables larger than 1GB.

The tool skips Free Space Map, Visibility Map and pg_internal.init files, since they can be regenerated. While the tool can run against a database continuously, it does have a performance overhead associated with it, so we advise incorporating the tool into your backup process and running it on a separate server.

How to start using the PostgreSQL Page Verification tool

The Page Verification tool is integrated into Google Cloud SQL, so it runs automatically. We’re using the tool at scale to validate our customers’ backups. We do the verification process on internal instances of Cloud SQL to make sure your database doesn’t take a performance hit.

The value of the PostgreSQL Page Verification Tool comes from detecting data corruption early, which minimizes the resulting data loss. Organizations that run the tool and get a successful verification have assurance of a usable backup in case disaster strikes.

At Google, when we make a database better, we make it better for everyone, so the PostgreSQL Page Verification tool is available to you via open source. We encourage Postgres users to download the tool at Google Open Source or GitHub. The best detection is early detection, not when you need to restore a backup.

7 best practices for building containers

Kubernetes Engine is a great place to run your workloads at scale. But before being able to use Kubernetes, you need to containerize your applications. You can run most applications in a Docker container without too much hassle. However, effectively running those containers in production and streamlining the build process is another story. There are a number of things to watch out for that will make your security and operations teams happier. This post provides tips and best practices to help you effectively build containers.

1. Package a single application per container


A container works best when a single application runs inside it. This application should have a single parent process, and tying the application’s lifecycle to that of the container keeps it easy to operate. For example, do not run PHP and MySQL in the same container: it’s harder to debug, Linux signals will not be properly handled, you can’t horizontally scale the PHP containers, and so on.
The container on the left follows the best practice. The container on the right does not.

2. Properly handle PID 1, signal handling, and zombie processes


Kubernetes and Docker send Linux signals to your application inside the container to stop it. They send those signals to the process with the process identifier (PID) 1. If you want your application to stop gracefully when needed, you need to properly handle those signals.
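As an illustration (a Python sketch, not taken from the article), a PID 1 process that handles SIGTERM gracefully might look like this:

```python
import signal

class GracefulApp:
    """Minimal sketch of graceful shutdown for the PID 1 process in a container."""

    def __init__(self):
        self.stopping = False
        # Kubernetes and Docker deliver SIGTERM first; SIGKILL follows
        # only after the termination grace period expires.
        signal.signal(signal.SIGTERM, self._on_term)

    def _on_term(self, signum, frame):
        # Don't exit abruptly: record that a stop was requested so the
        # main loop can finish in-flight work and close connections.
        self.stopping = True

    def should_keep_running(self):
        return not self.stopping
```

Note that your process only receives these signals as PID 1 if the Dockerfile uses the exec form of CMD/ENTRYPOINT (e.g. CMD ["./app"]); a lightweight init such as tini can also take over signal forwarding and zombie reaping.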

Google Developer Advocate Sandeep Dinesh’s article, “Kubernetes best practices: terminating with grace,” explains the whole Kubernetes termination lifecycle.

3. Optimize for the Docker build cache


Docker can cache layers of your images to accelerate later builds. This is a very useful feature, but it introduces some behaviors that you need to take into account when writing your Dockerfiles. For example, you should add the source code of your application as late as possible in your Dockerfile so that the base image and your application’s dependencies get cached and aren’t rebuilt on every build.

Take this Dockerfile as example:
FROM python:3.5
COPY my_code/ /src
RUN pip install my_requirements
You should swap the last two lines:
FROM python:3.5
RUN pip install my_requirements
COPY my_code/ /src
In the new version, the result of the pip command will be cached and will not be rerun each time the source code changes.

4. Remove unnecessary tools


Reducing the attack surface of your host system is always a good idea, and it’s much easier to do with containers than with traditional systems. Remove everything that the application doesn’t need from your container. Or better yet, include just your application in a distroless or scratch image. You should also, if possible, make the filesystem of the container read-only. This should get you some excellent feedback from your security team during your performance review.

5. Build the smallest image possible


Who likes to download hundreds of megabytes of useless data? Aim to have the smallest images possible. This decreases download times, cold start times, and disk usage. You can use several strategies to achieve that: start with a minimal base image, leverage common layers between images and make use of Docker’s multi-stage build feature.
The Docker multi-stage build process.
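As an illustrative sketch (a hypothetical Go app; the paths and image tags are placeholders), a multi-stage Dockerfile keeps the build toolchain out of the final image:

```dockerfile
# Stage 1: build a static binary using the full Go toolchain.
FROM golang:1.10 AS build
COPY main.go /src/main.go
RUN cd /src && CGO_ENABLED=0 go build -o /bin/app main.go

# Stage 2: ship only the binary in an (almost) empty image.
FROM scratch
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```

The final image contains only the binary, typically a few megabytes instead of the hundreds of megabytes of the build image.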

Google Developer Advocate Sandeep Dinesh’s article, “Kubernetes best practices: How and why to build small container images,” covers this topic in depth.

6. Properly tag your images


Tags are how users choose which version of your image they want to use. There are two main ways to tag your images: Semantic Versioning, or using the Git commit hash of your application. Whichever you choose, document it and clearly set the expectations that the users of the image should have. Be careful: while users expect some tags, like the “latest” tag, to move from one image to another, they expect other tags to be immutable, even if they are not technically so. For example, once you have tagged a specific version of your image with something like “1.2.3”, you should never move this tag.
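For example (the registry and image names are placeholders), a release can be pushed under its immutable version, the commit it was built from, and a moving latest tag:

```shell
# Tag the build with an immutable semantic version...
docker build -t gcr.io/my-project/my-app:1.2.3 .
# ...optionally also with the Git commit it was built from...
docker tag gcr.io/my-project/my-app:1.2.3 \
    gcr.io/my-project/my-app:$(git rev-parse --short HEAD)
# ...and with a moving "latest" tag that users expect to advance.
docker tag gcr.io/my-project/my-app:1.2.3 gcr.io/my-project/my-app:latest
```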

7. Carefully consider whether to use a public image


Using public images can be a great way to start working with a particular piece of software. However, using them in production can come with a set of challenges, especially in a high-constraint environment. You might need to control what’s inside them, or you might not want to depend on an external repository, for example. On the other hand, building your own images for every piece of software you use is not trivial, particularly because you need to keep up with the security updates of the upstream software. Carefully weigh the pros and cons of each for your particular use-case, and make a conscious decision.

Next steps

You can read more about those best practices on Best Practices for Building Containers, and learn more about our Kubernetes Best Practices. You can also try out our Quickstarts for Kubernetes Engine and Container Builder.

Five can’t-miss application development sessions at Google Cloud Next ‘18

Google Cloud Next ‘18 will be a developer’s paradise, with bootcamps, hands-on labs, and yes, breakout sessions—more than 60 dedicated to app dev in some form or another. And that’s before we get to the Spotlight sessions explaining new product launches! We polled developer advocates and product managers from across Google Cloud, and here are their picks for the sessions you can’t afford to miss.

1. From Zero to Production: Build a Production-Ready Deployment Pipeline for Your Next App

Scott Feinberg, Customer Engineer, Google Cloud

Want to start deploying to Google Cloud Platform (GCP) but aren't sure how to start? In this session, you'll take an app with multiple process types, containerize it, and build a deployment pipeline with Container Builder to test and deploy your code to a Kubernetes Engine cluster.

Register for the session here.

2. Enterprise-Grade Mobile Apps with Firebase

Michael McDonald, Product Manager and Jonathan Shriver-Blake, Product Manager, Google Firebase

Firebase helps mobile development teams build better apps, improve app quality, and grow their business. But before you can use it in your enterprise, you’ll have to answer a number of questions: Will it scale in production? Is it reliable, and can your team monitor it? How do you control who has access to production data? What will the lawyers say? And how about compliance and GDPR? This session will show you the answers to these questions and pave the way to use Firebase in your enterprise.

Click here to reserve your spot.

3. Migrating to Cloud Spanner

Niel Markwick, Solutions Architect and Sami Zuhuruddin, Staff Solutions Architect, Google Cloud

When migrating an existing database to Cloud Spanner, an essential step is importing the existing data. This session describes the steps required to migrate the data and any pitfalls that need to be dealt with during the process. We'll cover what it looks like to transition to Cloud Spanner, including schema migration, data movement, cutover, and application changes. To make it real, we'll be looking at migrating from two popular systems: one NoSQL and the other SQL.

Find more details about the session here.

4. Serverless Compute on Google Cloud: What's New

Myles Borins, Developer Advocate and Jason Polites, Product Manager, Google

Join us to learn what’s new in serverless compute on GCP. We will share the latest developments in App Engine and Cloud Functions and show you how you can benefit from new feature releases. You will also get a sneak peek and preview of what’s coming next.

Secure your spot today.

5. Accelerating Your Kubernetes Development with Kubernetes Applications

Konrad Delong, Senior Software Engineer; David Eustis, Senior Staff Software Engineer; and Kenneth Owens, Software Engineer, Google

Kubernetes applications provide a new, powerful abstraction for you to compose and re-use application building blocks from a variety of sources. In this talk, we’ll show you how to accelerate your development process by taking advantage of Kubernetes applications. We’ll walk you through creating these applications and deploying third-party, commercial Kubernetes applications from the Google Cloud Marketplace.

Click here to register for this session.

And if you haven’t already registered for Next, don’t delay! Everyone who attends will receive $500 in GCP credits. Imagine the possibilities!

Why we believe in an open cloud

Open clouds matter more now than ever. While most companies today use a single public cloud provider in addition to their on-premises environment, research shows that most companies will likely adopt multiple public and private clouds in the coming years. In fact, according to a 2018 RightScale study, 81 percent of enterprises with 1,000 or more employees have a multi-cloud strategy, and if you consider SaaS, most organizations are doing multi-cloud already.

Open clouds let customers freely choose which combination of services and providers will best meet their needs over time. Open clouds let customers orchestrate their infrastructure effectively across hybrid-cloud environments.

We believe in three principles for an open cloud:
  1. Open is about the power to pick up an app and move it—to and from on-premises, our cloud, or another cloud—at any time.
  2. Open-source software permits a richness of thought and continuous feedback loop with users.
  3. Open APIs preserve everyone’s ability to build on each other’s work.

1. Open is about the power to pick up an app and move it

An open cloud is grounded in a belief that being tied to a particular cloud shouldn’t get in the way of achieving your goals. An open cloud embraces the idea that the power to deliver your apps to different clouds while using a common development and operations approach will help you meet whatever your priority is at any given time—whether that’s making the most of skills shared widely across your teams or rapidly accelerating innovation. Open source is an enabler of open clouds because open source in the cloud preserves your control over where you deploy your IT investments. For example, customers are using Kubernetes to manage containers and TensorFlow to build machine learning models on-premises and on multiple clouds.

2. Open-source software permits a richness of thought and continuous feedback loop with users

Through a continuous feedback loop with users, open-source software (OSS) results in better software, faster. It also requires substantial time and investment from the people and companies leading open-source projects. Here are examples of Google’s commitment to OSS and the varying levels of work required:
  • OSS such as Android has an open code base, and development is the sole responsibility of one organization.
  • OSS with community-driven changes, such as TensorFlow, involves coordination among many companies and individuals.
  • OSS with a community-driven strategy, for example collaboration with the Linux Foundation and the Kubernetes community, involves collaborative decision-making and accepting consensus over control.
Open source is so important to Google that we call it out twice in our corporate philosophies, and we encourage employees, and in fact all developers, to engage with open source.

Using BigQuery to analyze GHarchive.org data, we found that in 2017, over 5,500 Googlers submitted code to nearly 26,000 repositories, created over 215,000 pull requests, and engaged with countless communities through almost 450,000 comments. A comparative analysis of Google’s contribution to open source provides a useful relative position of the leading companies in open source based on normalized data.

Googlers are active contributors to popular projects you may have heard of including Linux, LLVM, Samba, and Git.

Google regularly open-sources internal projects

Top Google-initiated projects include Kubernetes, TensorFlow, and gRPC.

3. Open APIs preserve everyone’s ability to build on each other’s work

Open APIs preserve everyone’s ability to build on each other’s work, improving software iteratively and collaboratively. Open APIs empower companies and individual developers to change service providers at will. Peer-reviewed research shows that open APIs drive faster innovation across the industry and in any given ecosystem. Open APIs depend on the right to reuse established APIs by creating independent-yet-compatible implementations. Google is committed to supporting open APIs through our membership in the Open API Initiative, our involvement in the OpenAPI specification, our support of gRPC, Cloud Bigtable’s compatibility with the HBase API, Cloud Spanner’s and BigQuery’s compatibility with SQL:2011 (with extensions), and Cloud Storage’s compatibility with shared APIs.

Build an open cloud with us

If you believe in an open cloud like we do, we’d love your participation. You can help by contributing to and using open source libraries, and asking your infrastructure and cloud vendors what they’re doing to keep workloads free from lock-in. We believe open ecosystems grow the fastest and are more resilient and adaptable in the face of change. Like you, we’re in it for the long-term.

It’s worth noting that not all of Google’s products will be open in every way at every stage of their life cycle. Openness is less of an absolute and more of a mindset when conducting business in general. You can, however, expect Google Cloud to continue investing in openness across our products over time, to contribute to open source projects, and to open source some of our internal projects.

If you believe open clouds are an important part of making this multi-cloud world a place in which everyone can thrive, we encourage you to check out our new open cloud website where we offer more detailed definitions and examples of the terms, concepts, and ideas we’ve discussed here: cloud.google.com/opencloud.

Google Cloud for Electronic Design Automation: new partners

A popular enterprise use case for Google Cloud is electronic design automation (EDA)—designing electronic systems such as integrated circuits and printed circuit boards. EDA workloads, like simulations and field solvers, can be incredibly computationally intensive. They may require a few thousand CPUs, sometimes even a few hundred thousand CPUs, but only for the duration of the run. Instead of building up massive server farms that are oversubscribed during peak times and sit idle for the rest of the time, you can use Google Cloud Platform (GCP) compute and storage resources to implement large-scale modeling and simulation grids.

Our partnerships with software and service providers make Google Cloud an even stronger platform for EDA. These solutions deliver elastic infrastructure and improved time-to-market for customers like eSilicon, as described here.

Scalable simulation capacity on GCP provided by Metrics Technologies (more details below)

This week at Design Automation Conference, we’re showcasing a first-of-its-kind implementation of EDA in the cloud: our implementation of the Synopsys VCS simulation solution for internal EDA workloads on Google Cloud, by the Google Hardware Engineering team. We also have several new partnerships to help you achieve operational and engineering excellence through cloud computing, including:

  • Metrics Technologies is the first EDA platform provider of cloud-based SystemVerilog simulation and verification management, accelerating the move of semiconductor verification workloads into the cloud. The Metrics Cloud Simulator and Verification Manager, a pay-by-the-minute software-as-a-service (SaaS) solution built entirely on GCP, improves resource utilization and engineering productivity, and can scale capacity with variable demand. Simulation resources are dynamically adjusted up or down by the minute without the need to purchase additional hardware or licenses, or manage disk space. You can find Metrics news and reviews at www.metrics.ca/news, or schedule a demo at DAC 2018 at www.metrics.ca.
  • Elastifile delivers enterprise-grade, scalable file storage on Google Cloud. Powered by a high-performance, POSIX-compliant distributed file system with integrated object tiering, Elastifile simplifies storage and data management for EDA workflows. Deployable in minutes via Google Cloud Launcher, Elastifile enables cloud-accelerated circuit design and verification, with no changes required to existing tools and scripts.
  • NetApp is a leading provider of high-performance storage solutions. NetApp is launching Cloud Volumes for Google Cloud Platform, which is currently available in Private Preview. With NetApp Cloud Volumes, GCP customers have access to a fully-managed, familiar file storage (NFS) service with a cloud native experience.
  • Quobyte provides a parallel, distributed, POSIX-compatible file system that runs on GCP and on-premises to provide petabytes of storage and millions of IOPS. As a distributed file system, Quobyte scales IOPS and throughput linearly with the number of nodes, avoiding the performance bottlenecks of clustered or single-filer solutions. You can try Quobyte today on the Cloud Launcher Marketplace.
If you’d like to learn more about EDA offerings on Google Cloud, we encourage you to visit us at booth 1251 at DAC 2018. And if you’re interested in learning more about how our Hardware Engineering team used Synopsys VCS on Google Cloud for internal Google workloads, please stop by Design Infrastructure Alley on Tuesday for a talk by team members Richard Ho and Ravi Rajamani. Hope to see you there!

How to connect Stackdriver to external monitoring

Google Stackdriver lets you track your cloud-powered applications with monitoring, logging and diagnostics. Using Stackdriver to monitor Google Cloud Platform (GCP) or Amazon Web Services (AWS) projects has many advantages—you get detailed performance data and can set up tailored alerts. However, we know from our customers that many businesses are bridging cloud and on-premises environments. In these hybrid situations, it’s often necessary to also connect Stackdriver to an on-prem monitoring system. This is especially important if there is already a monitoring process in place that involves classic IT Business Management (ITBM) tasks, like opening and closing tickets and incidents automatically.

Luckily, you can use Stackdriver for these circumstances by enabling the alerting policies via webhooks. We’ll explain how in this blog post, using the example of monitoring the uptime of a web server. Setting up the monitoring condition and alerting policy is really where Stackdriver shines, since it auto-detects GCP instances and can analyze log files. The exact setup differs depending on the customer environment. (You can also find more here about alerting and incident management in Stackdriver.)

Get started with server and firewall policies to external monitoring

To keep it simple, we’ll start with explaining how to do an HTTP check on a freshly installed web server (nginx). This is called an uptime check in Stackdriver.

First, let’s set up the server and firewall policy. In order for the check to be successful, make sure you’ve created a firewall rule in the GCP console that allows HTTP traffic to the public IP of the web server. The best way to do that is to create a tag-based firewall rule that allows all IP addresses (0.0.0.0/0) on the tag “http.” You can now add that tag to your newly created web server instance. (We created ours by creating a micro instance from an Ubuntu image, then installing nginx using apt-get.)
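The console steps above can also be sketched with gcloud; the rule name, instance name, image, and machine type here are illustrative:

```shell
# Allow HTTP from any source to instances carrying the "http" tag.
gcloud compute firewall-rules create allow-http \
    --allow=tcp:80 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=http

# Create the web server instance with that tag, then install nginx on it.
gcloud compute instances create web-1 \
    --machine-type=f1-micro \
    --image-family=ubuntu-1604-lts \
    --image-project=ubuntu-os-cloud \
    --tags=http
```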

If you prefer containers, you can use Kubernetes to spin up an nginx container.
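For instance, a quick nginx deployment on Kubernetes Engine might look like this (a sketch; kubectl run created a Deployment in the kubectl versions current at the time):

```shell
# Start nginx and expose it on an external load balancer IP.
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --type=LoadBalancer --port=80
```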

Make sure to check the firewall rule by opening the web server’s public IP in a browser. If everything is configured correctly, you should see the nginx greeting page.

Setting up the uptime check

Now let’s set up the website uptime check. Open the Stackdriver monitoring menu in your GCP cloud console.

In this case, we created a little web server instance with a public IP address. We want to monitor this public IP address to check the web server’s uptime. To set this up, select “Uptime Checks” from the right-side menu of the Stackdriver monitoring page.

Remember: This is a test case, so we set the check interval to one minute. For real-world use cases, this value might change according to the service monitoring requirements.

Once you have set up the Uptime Check, you can go ahead and set up an alerting policy. Click on “Create New Policy” in the popup window that follows (it only appears the first time you create an Uptime Check). Alternatively, click on “Alerting” in the left-side Stackdriver menu, then on “Create a Policy” in the popup menu.

Setting up the alert policy

Once you click on “Create a Policy,” you should see a new popup with four steps to complete.

The first step will ask for a condition “when” to trigger the alert. This is where you have to make sure the Uptime Check is added. To do this, simply click on the “Add Condition” button.

A new window will appear from the right side:

Specify the Uptime Check by clicking on Select under “Basic Health.”

This will bring up this window (also from the right side) to select the specific Uptime Check to alert on. Simply choose “URL” in the “Resource Type” field and the “IF UPTIME CHECK” section will appear automatically. Here, we select the previously created Uptime Check.

You can also set the duration of the service downtime to trigger an alert. In this case, we used the default of five minutes. Click “Save Condition” to continue with the Alert Policy setup.

This leads us to step two:

This is where things get interesting. In order to include an external monitoring system, you can use so-called webhooks: callouts that use an HTTP POST method to send JSON-formatted messages to the external system. The on-prem or third-party monitoring system needs to understand this format in order to process the alerts properly. Receiving and using webhooks is widely supported across the monitoring industry.

Setting up the alerts

Now you’ll set up the alerts. In this example, we’re configuring a webhook only. You can set up multiple ways to get alerted simultaneously. If you want to get an email and a webhook at the same time, just configure it that way by adding the second (or third) method. In this example, we’ll use a free webhook receiver to monitor if our setup works properly.

Once the site has generated a webhook receiver for you, you’ll have a link that lists all received tokens for you. Remember, this is for testing purposes only: do not send in any user-specific data such as private IP addresses or service names.
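If you'd rather run the receiver yourself instead of using a hosted test site, a minimal sketch with Python's standard library looks like this (local testing only; it does no authentication):

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accepts webhook POSTs and keeps the parsed JSON bodies in memory."""
    received = []  # class-level list of payloads, handy for inspection

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            payload = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            self.send_response(400)  # reject bodies that aren't valid JSON
            self.end_headers()
            return
        WebhookHandler.received.append(payload)
        self.send_response(200)      # acknowledge receipt to the sender
        self.end_headers()

    def log_message(self, fmt, *args):
        pass                         # silence per-request console noise

# To listen on port 8080:
#   ThreadingHTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```

Keep in mind a receiver like this must be reachable from the internet for Stackdriver to deliver to it; the hosted test site spares you that setup.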

Next you have to configure the notification to use a webhook so it’ll send a message over to our shiny new webhook receiver. Click on “Add Notification.”

By default a field will appear saying “Email”—click on the drop-down arrow to see the other options:

Select “Webhook” in the drop-down menu.

The system will most likely tell you that there is no webhook setup present. That’s because you haven’t specified any webhook receiver yet. Click on “Setup Webhook.”

(If you’ve already set up a webhook receiver, the system won’t offer you this option here.)

In that case, go to the “select project” drop-down list (top left, right next to the Stackdriver logo in the gray bar area). Click the down arrow symbol next to your project ID and select “Account Settings” at the bottom of the drop-down box.

In the popup window, select “Notifications” (bottom of the left-side list under “Settings”) and then click on “Webhooks” at the top menu. Here you can add additional webhooks if needed.

Click on “Create webhook.”

Remember to put in your webhook endpoint URL. In our test case, we do not need any authentication.

Click on “Test Connection” to verify and see your first webhook appearing on the test site!

It should say “This is a test alert notification from Stackdriver.”

Now let’s continue with the Alerting Policy. Choose the newly created webhook by selecting “Webhook” as notification type and the webhook name (created earlier) as the target. If you want to have additional notification settings (like SMS, email, etc.), feel free to add those as well by clicking on “Add another notification.”

Once you add a notification, you can optionally add documentation by creating a so-called “Markdown document.” Learn more here about the Markdown language.

Last but not least, give the Alert Policy a descriptive name:

We decided to go super creative and call it “HTTP - uptime alert.” Once you have done this, click “Save Policy” at the bottom of the page.

Done! You just created your first policy, including a webhook to trigger alerts on incidents.

The policy should be green and the uptime check should report your service being healthy. If not, check your firewall rules.

Test your alerting

If everything is normal and works as expected, it’s time to test your alerting policy. To do that, simply delete the “allow-http” firewall rule created earlier. This should result in a “service unavailable” condition for our Uptime Check. Remember to give it a little while: the Uptime Check waits 10 seconds per region and about one minute overall before it declares the service down (with the one-minute check interval we configured earlier).

Now you’ll see that you can’t reach the nginx web server instance anymore:

Now let’s go to the Stackdriver overview page to see if we can find the incident. Click on “Monitoring Overview” in the left-side menu at the very top:

Indeed, the Uptime Check comes back red, telling us the service is down. Also, our Alerting Policy has created an incident saying that the “HTTP - uptime alert” has been triggered and the service has been unavailable for a couple of minutes now.

Let’s check the test receiver site to see if we got the webhook to trigger there:

You can see we got the webhook alert with the same information regarding the incident. This information is passed on in JSON format for easy parsing at the receiving end. You will see the policy name that was triggered (first red rectangle), the state “open,” and the “started at” timestamp in Unix time format (seconds elapsed since January 1, 1970). The “summary” field also tells you that the service is failing. If you had configured any optional documentation, you’d see it in the JSON payload as well.
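Based on those fields, a received payload can be parsed in a few lines of Python. The exact payload shape and field names below are an assumption for illustration; inspect the JSON your own receiver logs before relying on them:

```python
import json
from datetime import datetime, timezone

# Illustrative payload modeled on the fields described above; the exact
# shape is an assumption, so check a real delivery before relying on it.
raw = """
{
  "version": "1.2",
  "incident": {
    "policy_name": "HTTP - uptime alert",
    "state": "open",
    "started_at": 1529688838,
    "summary": "The uptime check is failing."
  }
}
"""

incident = json.loads(raw)["incident"]
# Convert the Unix timestamp (seconds since January 1, 1970) to UTC.
opened = datetime.fromtimestamp(incident["started_at"], tz=timezone.utc)
print(incident["policy_name"], incident["state"], opened.isoformat())
```

An on-prem ITBM system would typically map the policy name to a service, open a ticket when the state is "open," and close it again on the matching resolved message.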

Bring the service back

Now, recreate the firewall rule to see if we get an “incident resolved” message.

Let’s check the overview screen again (remember to give it five or six minutes after recreating the rule):

You can see that the service is back up. Stackdriver automatically resolves open incidents once the condition clears. So in our case, the formerly open incident is now resolved, since the Uptime Check comes back as “healthy” again. This information is also passed on via the alerting policy. Let’s see if we got a “condition restored” webhook message as well.

By the power of webhooks, it also told our test monitoring system that this incident is closed now, including useful details such as the ending time (Unix timestamp format) and a summary telling us that the service has returned to a normal state.

If you need to connect Stackdriver to a third-party monitoring system, webhooks are an extremely flexible way of doing so. They let your operations team continue using their familiar go-to resources on-premises, while taking advantage of everything Stackdriver offers in a GCP (or AWS) environment. Furthermore, existing monitoring processes can be reused to bridge into the Google Cloud world.

Remember that Stackdriver can do far more than uptime checks, including log monitoring, source code debugging, and tracing user interactions with your application. Whether it’s alerting policy functionality, webhook messaging or other checks you define in Stackdriver, all of it can be forwarded to a third-party monitoring tool. Even better, you can close incidents automatically once they have been resolved.

Have fun monitoring your cloud services!

Related content:

New ways to manage and automate your Stackdriver alerting policies
How to export logs from Stackdriver Logging: new solution documentation
Monitor your GCP environment with Cloud Security Command Center

Announcing a new certification from Google Cloud Certified: the Associate Cloud Engineer

Cloud is no longer an emerging technology. Now that businesses large and small are realizing the potential of cloud services, the need to hire individuals who can manage cloud workloads has sky-rocketed. Today, we’re launching a new Associate Cloud Engineer certification, designed to address the growing demand for individuals with the foundational cloud skills necessary to deploy applications and maintain cloud projects on Google Cloud Platform (GCP).

The Associate Cloud Engineer certification joins Professional Cloud Architect, which launched in 2016, and Data Engineer, which followed quickly thereafter. These certifications identify individuals with the skills and experience to leverage GCP to overcome complex business challenges. Since the program’s inception, Google Cloud Certified has experienced continual growth, especially this last year when the number of people sitting for our professional certifications grew by 10x.

Because cloud technology affects so many aspects of an organization, IT professionals need to know when and how to use cloud tools in a variety of scenarios, ranging from data analytics to scalability. For example, it's not enough to launch an application in the cloud. Associate Cloud Engineers also ensure that the application grows seamlessly, is properly monitored, and readily managed by authorized personnel.

Feedback from the beta launch of the Associate Cloud Engineer certification has been great. Morgan Jones, an IT professional, was eager to participate because he sees “the future of succeeding and delivering business value from the cloud is to adopt a multi-cloud strategy. This certification can really help me succeed in the GCP environment."

As an entry point to our professional-level certifications, the Associate Cloud Engineer demonstrates solid working knowledge of GCP products and technologies. “You have to have experience on the GCP Console to do well on this exam. If you haven’t used the platform and you just cram for the exam, you will not do well. The hands-on labs helped me prepare for that,” says Jones.

Partners were a major impetus in the development of the Associate Cloud Engineer exam, which will help them expand GCP knowledge throughout their organizations and address increasing demand for Google Cloud technologies head-on. Their enthusiastic response to news of this exam sends signals that the Associate Cloud Engineer will be a catalyst for an array of opportunities for those early in their cloud career.

"We are really excited for the Associate Cloud Engineer to come to market. It allows us to target multiple role profiles within our company to drive greater knowledge and expertise of Google Cloud technologies across our various managed services offerings."
- Luvlynn McAllister, Rackspace, Director, Sales Strategy & Business Operations

The Associate Cloud Engineer exam is:
  • Two hours long
  • Recommended for IT professionals with six months of GCP experience
  • Available for a registration fee of $125 USD
  • Currently available in English
  • Available at Next ‘18 for registered attendees

The Google Cloud training team offers numerous ways to increase your Google Cloud know-how. Join our webinar on July 10 at 10:30am to hear from members of the team who developed the exam about how this certification differs from others in our program and how best to prepare. If you want to check your readiness, take the online practice exam at no charge. For more information on suggested training and an exam guide, visit our website. Register for the exam today.

How to run SAP Fiori Front-End Server (OpenUI5) on GCP in 20 mins

Who enjoys doing risky development on their SAP system? No one. But if you need to build enterprise apps that use your SAP backend, not doing development is a non-starter. One solution is to apply Gartner’s Bimodal IT, the practice of managing two separate but coherent styles of work: one focused on predictability; the other on exploration. This is an awesome strategy for creating frontend innovation with modern HTML5 / JS applications that are loosely coupled to the backend core ERP system, reducing risk. And it turns out that Google Cloud Platform (GCP) can be a great way to do Bimodal IT in a highly cost-effective way.

This blog walks through setting up SAP OpenUI5 on a GCP instance running a local node.js webserver to run sample apps. These apps can be the building blocks to develop new enterprise apps in the cloud without impacting your SAP backend. Let’s take a deeper look.

Set up your GCP account:

Make sure that you have set up your GCP free trial ($300 credit):

After signing up, you can access GCP from the Cloud Console.

Everything in GCP happens in a project so we need to create one and enable billing (this uses your $300 free credit).

From the GCP Console, select or create a project by clicking GO TO THE MANAGE RESOURCES PAGE.

Make sure that billing is enabled (using your $300 free credit):

Setting up SAP OpenUI5 in GCP

1. Create a compute instance (virtual machine):

In the top left corner click on ‘Products and Services’:

Select ‘Compute Engine → VM instances’
  • Click ‘Create instance’
  • Give it the coolest name you can think of
  • Select the zone closest to where you are located
  • Under ‘Machine Type’, choose “micro (1 shared CPU)”. Watch the cost per month drop like a stone!
  • Under ‘Firewall’, check ‘Allow HTTP traffic’

Keep everything else as default and click Create. Your Debian VM should start in about 5-10 seconds.

2. Set up OpenUI5 on the new image:

SAP has an open-source version of its SAPUI5, the basis for its Fiori Front-End Server, called OpenUI5. OpenUI5 comes with a number of sample apps. Let’s deploy this to a local node.js webserver on the instance.
Install nodejs and npm (node package manager):
sudo apt-get update
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs

SAP files are zipped so install unzip with:
sudo apt-get install unzip

Make a project directory and change to it (feel free to change the name):
mkdir saptest 
cd saptest
Download the latest Stable OpenUI5 SDK from:
wget https://openui5.hana.ondemand.com/downloads/openui5-sdk-1.54.6.zip
Time to grab a coffee as the download may take about 5 to 10 minutes depending on your connection speed.
Extract the zip file to your project directory with:
unzip openui5-sdk-1.54.6.zip
Next we will set up a local static node.js http server to serve up requests running on port 8888. Download static_server.js and package.json from Github into your project folder:
curl -O
curl -O
Identify your primary working directory and create a symbolic link to your resources folder. This allows the demo apps to work out of the box without modification (adjust the path to match your own):
ln -s /home/<me>/saptest/resources resources 
Call the node package manager to install the http server:
npm install
Run the node.js static server to accept http requests:
node static_server.js
Your node server should now be running and able to serve up SAP OpenUI5 sample applications from localhost. However, we should make this testable from outside the VM (e.g., from a mobile device), so let’s set up a firewall rule to allow traffic to our new static server on port 8888.
In the GCP Console click on ‘Products and Services’ (top left)
Networking → VPC Networking → Firewall Rules.
Click New to create a new firewall rule and enter the following settings:
  • Action on match: Allow
  • Targets: All instances on the network
  • Source filter: IP ranges
  • Source IP ranges: 0.0.0.0/0
  • Protocols and ports: Specified protocols and ports, tcp:8888
Now, click ‘Create’.
Go to Products and Services → Compute Engine → VM instances and copy the External IP. Open up a browser and navigate to:
http://<External IP>:8888/index.html 
Congratulations! You are now running the OpenUI5 front-end on your GCP instance.

3. Explore the OpenUI5 demo apps

You can take a look at the sample applications offered in OpenUI5 by clicking on ‘Demo Apps’, or you can navigate directly to the shopping cart application with:
http://<External IP>:8888/test-resources/sap/m/demokit/cart/webapp/index.html
(Pro-Tip: email this link to yourself and open on your mobile device to see the adaptable UI in action. Really cool.)
These demo apps are just connecting to local sample data in XML files. In the real world, OData is often used; it’s a great way of connecting your front-end systems to backend SAP systems, and it can be activated on your SAP Gateway. Please consult your SAP documentation on setting this up.
SAPUI5 has even more capabilities than OpenUI5 (e.g., charts and micro graphs). It is available either in your SAP deployment or on the SAP Cloud Platform. In addition, you can also leverage it on top of GCP via Cloud Foundry. Learn more here.
Good luck in your coding adventures!

References and other links

This awesome blog was the baseline of this tutorial:

Some other good links to check out: