Category Archives: Google Cloud Platform Blog

Product updates, customer stories, and tips and tricks on Google Cloud Platform

Google Cloud Platform adds two new regions, 10 more to come

The public cloud is a network, not just a collection of data centers. Our global network has allowed us to build products that billions of users around the world can depend on. Whether you’re in Taipei or Tijuana, you can get Gmail, Search, Maps or your Google Cloud Platform services with Google speed and reliability.

We’re adding to this global network for Cloud Platform customers by expanding our roster of existing Cloud Platform regions with two more — both operational later this year:
  • US Western region in Oregon
  • East Asia region in Tokyo, Japan

As always, each region has multiple availability zones, so that we can offer high-availability computing to our customers in each locale.

These are the first two of more than 10 additional GCP regions we'll be adding to our network through 2017. This follows the opening last year of a US east coast region in South Carolina for all major GCP services.

We’re opening these new regions to help Cloud Platform customers deploy services and applications nearer to their own customers, for lower latency and greater responsiveness. With these new regions, even more applications become candidates to run on Cloud Platform, and get the benefits of Google-level scale and industry-leading price/performance.

The Japan region will be in beta for at least a month. You can fill out this survey to sign up for the beta, and we’ll notify you as soon as it’s ready. If you're interested in Oregon, please fill out this survey to be notified.

To learn how to make the best use of Cloud Platform regions for your application needs, please see the Geography and Regions details page.

- Posted by Varun Sakalkar, Product Manager

Node.js on Google App Engine goes beta

We’re excited to announce that the Node.js runtime on Google App Engine is going beta. Node.js makes it easy for developers to build performant web applications and mobile backends with JavaScript. App Engine provides an easy-to-use platform for developers to build, deploy, manage and automatically scale services on Google’s infrastructure. Combining Node.js and App Engine provides developers with a great platform for building web applications and services that need to operate at Google scale.

Getting started

Getting started with Node.js on App Engine is easy. We’ve built a collection of getting started guides, samples, and interactive tutorials that walk you through creating your code, using our APIs and services and deploying to production.
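For a taste of how little code a deployable service needs, here’s a minimal sketch (assuming Express as your web framework; App Engine supplies the PORT environment variable):

// app.js -- a minimal sketch of an App Engine-ready Node.js server.
var express = require('express');
var app = express();

app.get('/', function(req, res) {
  res.send('Hello from App Engine!');
});

// Listen on the port App Engine provides (8080 locally by default).
var server = app.listen(process.env.PORT || 8080, function() {
  console.log('Listening on port %s', server.address().port);
});

Declare the nodejs runtime in an app.yaml alongside this file and deploy it with the gcloud tool.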

When running Node.js on App Engine, you can use the tools and databases you already know and love. Use Express, Hapi, Parse-server or any other web server to build your app. Use MongoDB, Redis, or Google Cloud Datastore to store your data. The runtime is flexible enough to manage most applications and services, but if you want more control over the underlying infrastructure, you can easily migrate to Google Container Engine or Google Compute Engine for full flexibility and control.

Using the gcloud npm module, you can take advantage of Google’s advanced APIs and services, including Google BigQuery, Google Cloud Pub/Sub, and the Google Cloud Vision API:
var gcloud = require('gcloud')({
  projectId: 'my-project',
  keyFilename: 'keyfile.json'
});

// Detect text in a local image with the Cloud Vision API.
var vision = gcloud.vision();
vision.detectText('./image.jpg', function(err, text) {
  if (err) {
    return console.error(err);
  }
  if (text.length > 0) {
    console.log('We found text on this image...');
  }
});

Services like the Vision API allow you to take advantage of Google’s unique technology in the cloud to bring life to your applications.
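The gcloud module covers the other services mentioned above in the same callback style. As a rough sketch (exact signatures vary across versions of the module), here’s a BigQuery query against a public sample dataset, reusing the gcloud object from the snippet above:

// Query a public sample dataset; rows arrive as plain JavaScript objects.
var bigquery = gcloud.bigquery();

bigquery.query('SELECT word, word_count ' +
               'FROM [publicdata:samples.shakespeare] LIMIT 5',
    function(err, rows) {
  if (err) {
    return console.error(err);
  }
  rows.forEach(function(row) {
    console.log(row);
  });
});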

Advanced diagnostic tooling


Deploying Node.js applications to Cloud Platform is just the first step. During the lifespan of any application, you’ll need the ability to diagnose issues in production. Google Cloud Debugger lets you inspect the state of a Node.js application at any code location without stopping or slowing it down. You can set breakpoints and analyze the state of your application in real time.


When you’re ready to address performance, Google Cloud Trace can help by collecting end-to-end latency data for requests to App Engine URIs, along with additional data for round-trip RPC calls to App Engine services like Datastore and Memcache.


NodeSource partnership


Along with the Cloud Debugger and Cloud Trace tools, we’re also announcing a partnership with NodeSource. NodeSource delivers enterprise-grade tools and software targeting the unique needs of running server-side JavaScript at scale. The N|Solid™ platform extends the capabilities of Node.js to provide increased developer productivity, protection of critical applications and peak application performance. N|Solid and Cloud Platform make a great match for running enterprise Node.js applications. You can learn more about using N|Solid on Cloud Platform from the NodeSource blog.


Commitment to Node.js and open source


At Google, we’re committed to open source. The new core Node.js Docker runtime, the debug module, the trace tools, the gcloud npm module: everything is open source.

We’re thrilled to welcome Node.js developers to Cloud Platform, and we’re committed to making further investments to help make you as productive as possible. This is just the start. Keep your ear to the ground to catch the next wave of Node.js support on Cloud Platform.

We can’t wait to hear what you think. Feel free to reach out to us on Twitter @googlecloud, or request an invite to the Google Cloud Slack community and join the #nodejs channel.

- Posted by Justin Beckwith, Product Manager, Google Cloud Platform

Google Cloud Launcher simplifies running third party apps in the cloud

The promise of technical agility is driving huge interest in public cloud from developers and businesses of all sizes. Inside this growing ecosystem, software marketplaces have emerged to further reduce the time to find, integrate and launch a solution. Today it’s possible to discover and launch an app in minutes.

The productivity gains that come with this are amazing, but there's also a downside: deep challenges with operations and support. Static image-based solutions tend to decay over time and without a clear path to update and manage, organizations end up running "dead software." That is, software with drifting configuration, stale images and low overall supportability.

Today we're announcing a sweeping update to our Cloud Launcher marketplace. We’ve created a place where developers can go to find complete solutions with unprecedented levels of supportability and connectivity to the publisher.

Here are the latest updates:

  • Solutions are no more than a few clicks away. You can already access Cloud Launcher from within the Developers Console, and we’ve now improved organization and search to make it even easier to find the solutions you need.
  • Find the right configuration and specs for a solution more easily. Every solution now features a customized set of pre-configuration options as well as ready-to-use defaults, enabling you to get started more easily.
  • Launcher solutions now use Google Cloud Deployment Manager, providing a complete view of all configuration aspects of your deployment in the Deployment Manager UI. Every solution’s unique template can now be downloaded and modified so you can compose them with other Google Cloud Platform, third party, or private templates, enabling you to create even more sophisticated solutions.
  • Production-grade solutions. More open source and commercial solutions now support both multi-VM and multi-resource deployments, ensuring the scale and reliability you require for production applications.
  • Receive automatic security notices for deployed solutions. You will now be automatically notified when a security update is available for one of your Cloud Launcher solutions, making it easier to stay secure.
  • Direct access to partner support. If you need help, you can now contact partner support directly through Cloud Launcher. We also provide a mechanism for partners to verify that inbound support requests are from verified buyers, ensuring that you get a timely response.

Deployment Manager can now be used to manage Cloud Launcher solutions.

Deploy powerful configurations using smart defaults, or modify them using a simple wizard.

We’re always adding new solutions and services to the catalog to help you stay productive. So, the next time you’re looking for a popular pre-built solution you can trust, start with Cloud Launcher.

Launch on!

Posted by Anil Dhawan, Product Manager, Google Cloud Launcher

Google shares software network load balancer design powering GCP networking

At NSDI ’16, we're revealing the details of Maglev¹, our software network load balancer that enables Google Compute Engine load balancing to serve a million requests per second with no pre-warming.

Google has a long history of building our own networking gear, and perhaps unsurprisingly, we build our own network load balancers as well, which have been handling most of the traffic to Google services since 2008. Unlike the custom Jupiter fabrics that carry traffic around Google’s data centers, Maglev load balancers run on ordinary servers: the same hardware that the services themselves use.

Hardware load balancers are often deployed in an active-passive configuration to provide failover, wasting at least half of the load balancing capacity. Maglev load balancers don't run in an active-passive configuration. Instead, they use Equal-Cost Multi-Path routing (ECMP) to spread incoming packets across all Maglevs, which then use consistent hashing techniques to forward packets to the correct service backend servers, no matter which Maglev receives a particular packet. All Maglevs in a cluster are active, performing useful work. Should one Maglev become unavailable, the others can carry the extra traffic. This N+1 redundancy is more cost effective than the active-passive configuration of traditional hardware load balancers, because fewer resources sit idle at any given time.
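To make the consistent-hashing idea concrete, here’s a toy JavaScript sketch of building a Maglev-style lookup table. This illustrates the technique described in the paper, not Google’s production code; the hash function and table size are arbitrary choices:

// Each backend gets a pseudo-random permutation of table slots, derived
// from two hashes of its name; backends then take turns claiming their
// next preferred empty slot until the table is full.
function hash(s, seed) {
  var h = (2166136261 ^ seed) >>> 0;  // FNV-1a-style hash, for illustration
  for (var i = 0; i < s.length; i++) {
    h = Math.imul(h ^ s.charCodeAt(i), 16777619) >>> 0;
  }
  return h;
}

function buildLookupTable(backends, M) {  // M should be prime, M >> backends
  var offset = backends.map(function(b) { return hash(b, 1) % M; });
  var skip = backends.map(function(b) { return hash(b, 2) % (M - 1) + 1; });
  var next = backends.map(function() { return 0; });
  var table = new Array(M).fill(-1);
  for (var filled = 0; filled < M; ) {
    for (var i = 0; i < backends.length && filled < M; i++) {
      var slot = (offset[i] + next[i] * skip[i]) % M;
      while (table[slot] >= 0) {  // slot already claimed; try next preference
        next[i]++;
        slot = (offset[i] + next[i] * skip[i]) % M;
      }
      table[slot] = i;
      next[i]++;
      filled++;
    }
  }
  return table;
}

// Any Maglev that hashes the same connection 5-tuple picks the same backend.
var backends = ['backend-a', 'backend-b', 'backend-c'];
var table = buildLookupTable(backends, 65537);
var key = '10.0.0.1:12345->203.0.113.9:443/tcp';
console.log(backends[table[hash(key, 0) % table.length]]);

Because each backend’s preference list is generated independently, adding or removing a backend disturbs only a small fraction of table entries, so a given connection almost always keeps mapping to the same backend.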


Google’s highly flexible cluster management technology, called Borg, makes it possible for Google engineers to move service workloads between clusters as needed, whether to take advantage of unused capacity or for other operational reasons. On Google Cloud Platform, our customers have similar flexibility to move their workloads between zones and regions. This means that the mix of services running in any particular cluster changes over time, which can also lead to changing demand for load balancing capacity.

With Maglev, it's easy to add or remove load balancing capacity, since Maglev is simply another way to use the same servers that are already in the cluster. Recently, the industry has been moving toward Network Function Virtualization (NFV), providing network functionality using ordinary servers. Google has invested a significant amount of effort over a number of years to make NFV work well in our infrastructure. As Maglev shows, NFV makes it easier to add and remove networking capacity, but having the ability to deploy NFV technology also makes it possible to add new networking services without adding new, custom hardware.

How does this benefit you, as a user of GCP? You may recall we were able to scale from zero to one million requests per second with no pre-warming or other provisioning steps. This is possible because Google clusters, via Maglev, are already handling traffic at Google scale. There's enough headroom available to add another million requests per second without bringing up new Maglevs. It just increases the utilization of the existing Maglevs.

Of course, when utilization of the Maglevs exceeds a threshold, more Maglevs are needed. Since the Maglevs are deployed on the same server hardware that's already present in the cluster, it's easy for us to add that capacity. As a developer on Cloud Platform, you don’t need to worry about load balancing capacity. Google’s Maglevs, and our team of Site Reliability Engineers who manage them, have that covered for you. You can focus on building an awesome experience for your users, knowing that when your traffic ramps up, we’ve got your back.

- Posted by Daniel E. Eisenbud, Technical Lead, Maglev and Paul Newson, Developer Advocate (Maglev fan)



1 D. E. Eisenbud, C. Yi, C. Contavalli, C. Smith, R. Kononov, E. Mann-Hielscher, A. Cilingiroglu, B. Cheyney, W. Shang, and J. D. Hosein. Maglev: A Fast and Reliable Software Network Load Balancer. 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16), 2016.

Google Compute Engine boosts high availability controls

Today, we’re introducing a feature to Google Compute Engine Autoscaler and Managed Instance Groups that enables region-specific control when scaling compute resources. This is important for customers that need high availability.

Called Regional Instance Groups, this new feature lets you set up infrastructure that automatically spreads VM instances across three zones of a region and keeps the spread equal as you change group size. Autoscaler works regionally across all three zones, adding and removing instances as needed.

Back in September, we announced the general availability of Google Compute Engine Autoscaler and Managed Instance Groups. Since then, we've seen amazing growth in usage and heard lots of feedback. One of the top requests from customers was to simplify configurations allowing for higher availability.

Now, with region-specific control, you're better protected against losing your service in the rare event of a bad build, network issues or zone failure. Since your VM instances are spread equally across three zones, two-thirds of them will be unaffected. On top of that, when you’re using Autoscaler, it will notice the increased traffic in the remaining zones and adjust the number of instances accordingly. After the faulty zone returns to normal, the load will be rebalanced across all three zones again.

Setup is easy. If you choose to use a regional configuration, simply select Multi-Zone group while creating a Managed Instance Group. At this point, your choice of location becomes a region, and the scope of your Managed Instance Group and Autoscaler becomes regional.

To use Regional Instance Groups in Alpha and learn more about the service, sign up here.

- Posted by Jerzy Foryciarz, Product Manager, Google Cloud Platform

Calculating and searching 500 billion digits of Pi

It's Pi day! Have you ever wondered whether you could take on the infinite irrationality of Pi and calculate hundreds of billions of its digits? We wondered exactly that, and we found a way to calculate 500 billion digits. Here's how:

Google Compute Engine supports up to eight 375GB Local SSDs per virtual machine instance! That gave us a total of 3TB of Local SSDs to use as swap space — exactly what we needed to calculate 500 billion digits, which is also exactly what we did.
500 billion digits of Pi on Google Compute Engine in 44.9 hours

Once the machine was set up, we were able to calculate 500 billion digits of Pi in about 44.9 hours. Assuming you already have a Google Cloud Platform account and the gcloud command-line tool installed, here's how you can set up the instance:

  1. First, let’s set some variables to be used later. You’ll need to pick a zone to replace the ZONE variable.
    export PROJECT="YOUR_GCP_PROJECT"
    export ZONE="us-central1-c"
    export CORES="32"
    export OUTPUT_DISK_SIZE="500GB"
    export OUTPUT_DISK="out-${CORES}"
    export INSTANCE="yc-${CORES}"
  2. Next, create a large disk to hold the final output (500 billion digits will take up 500GB!)
    $ gcloud compute disks create ${OUTPUT_DISK} \
     --project ${PROJECT} \
     --zone ${ZONE} \
     --size ${OUTPUT_DISK_SIZE} \
     --type pd-ssd
  3. Then, create a Compute Engine instance with Local SSDs. The following command will attach eight Local SSDs:
    $ gcloud compute instances create ${INSTANCE} \
     --project ${PROJECT} \
     --zone ${ZONE} \
     --machine-type n1-highmem-${CORES} \
     --maintenance-policy TERMINATE \
     --image-project gce-nvme \
     --image nvme-backports-debian-7-wheezy-v20151104 \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --local-ssd interface=NVME \
     --disk name=${OUTPUT_DISK},device-name=${OUTPUT_DISK}
  4. Once the instance is started, you can SSH into it with:
    $ gcloud compute ssh ${INSTANCE} --zone ${ZONE}
  5. Once you're in the newly created instance, you'll need to first format and mount all of the Local SSDs and the large persistent disk:
    $ sudo su -
    $ for i in `seq 0 7`; do
      mkdir /mnt/${i};
      /usr/share/google/safe_format_and_mount \
       /dev/disk/by-id/google-local-ssd-${i} /mnt/${i};
    done
    
    $ mkdir /mnt/out
    $ /usr/share/google/safe_format_and_mount \
       /dev/disk/by-id/google-${OUTPUT_DISK} \
       /mnt/out
  6. Finally, install the latest version of y-cruncher onto that instance. You may want to install screen or tmux as well.
    To calculate digits of Pi, here is the command line I used to start y-cruncher on the virtual machine:
    $ export DIGITS="500000000000"
    $ ./y-cruncher custom pi -dec:${DIGITS} -hex:0 \
      -o /mnt/out -mode:swap -swap:raid0 \
      /mnt/0 /mnt/1 /mnt/2 /mnt/3 /mnt/4 /mnt/5 /mnt/6 /mnt/7

Searching Pi Digits (Preview!)

y-cruncher certainly made it easy to calculate digits of Pi. For Pi Day 2016, Francesc Campoy, Jen Tong, Sara Robinson and I built a reverse lookup index in Google Cloud Bigtable, so that we can search for a sequence of digits, such as your phone number and more! As a sneak preview, we streamed the digits into Cloud Bigtable at 2 million writes per second.

With the index we can search for any sequence of up to 20 digits and ask, for example, where in Pi the first 9 digits of e appear: there are 6 occurrences, one at position 2,630,513,465! Or whether there is a run of nine 9’s, “999999999”: yes, at position 4,329,769,635!

This was a big leap from last year, when we calculated 100 billion digits using y-cruncher on an n1-highmem-32 Compute Engine instance with 32 cores, 208GB of RAM and four Local SSDs.

Stay tuned for more details on how we ingested billions of digits into Bigtable and how we built the Pi Search frontend.

- Posted by Ray Tsang, Developer Advocate and Greg Wilson, Head of Developer Advocacy for Google Cloud Platform

Free online learning tool serves 200,000 requests per minute on Google Cloud Platform


Back in 2005, 15-year-old Andrew Sutherland created Quizlet to help study for his high school French class. He built a flashcard-like study tool that helped him learn the material in several different ways and started getting better grades on his tests and quizzes. His solution quickly grew and today has more than 100 million user-created study sets. Every day Quizlet helps more than one million students and teachers worldwide learn, from fourth graders studying history to adults learning a new language at night.

Stability and performance are critical elements of Quizlet’s infrastructure, which supports its website and the API for its mobile apps. Students and teachers create study sets on Quizlet and rely on it to store and serve their content. If Quizlet crashes, it’s like closing a textbook on a student mid-lesson, or the night before a test.

In January 2015, the company decided to switch to a new cloud provider and chose Google Cloud Platform. It had outgrown its legacy provider and knew it needed a platform that would scale as it continued to grow. Historically, Quizlet’s traffic has increased at least 50 percent per year, and with just six engineers, it had to choose its technology carefully. After testing the options, the company picked Google Cloud because it believed we had fundamentally better core technology.

On August 1, 2015, after several months of preparation, Quizlet switched its infrastructure of 200 machines over to Google Compute Engine. The date was no accident — the company knew that it needed to be prepared for the back-to-school rush of traffic. Its highly seasonal business means that from summer to fall its traffic increases by a factor of six. Compute Engine allowed Quizlet to smoothly scale as students returned to class.
Quizlet weekly unique visits since March 2008 in the U.S. (blue) and rest of world (grey).

Moving to Cloud Platform meant it was easy to use other Google Cloud technology too. Quizlet uses Google BigQuery to analyze both its production and event data. Queries over its multi-terabyte production data set are snappy, even as it streams several hundred million events into it every day.

The company also used Google Cloud Storage to help serve audio files generated from its text-to-speech feature, which allows users to hear their content in 18 different languages. Google’s peering agreement with Cloudflare, Quizlet’s CDN provider, meant that Quizlet saved significantly on bandwidth for these and other files.

Back to school never felt so smooth!

To learn more about Quizlet’s move to Cloud Platform, read their whitepaper, "What's the Best Cloud? Probably GCP."

- Posted by Courtney Buchanan, Customer Marketing Manager, Google Cloud Platform

Google joins Open Compute Project to drive standards in IT infrastructure

We’re excited to announce that we're joining the Open Compute Project (OCP) to help drive standardization in IT infrastructure. More specifically, Google will contribute a new rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into our data centers.

Energy efficiency in computing is a topic that has been near and dear to our hearts since the early days. We began advocating for efficient power supplies in 2003, and in 2006 shared details of our 12-volt architecture for racks inside our data centers — the infrastructure that supports and powers rows upon rows of our servers.

In 2009, we started evaluating alternatives to our 12V power designs that could drive better system efficiency and performance as our fleet demanded more power to support new high-performance computing products, such as high-power CPUs and GPUs. We kicked off the development of 48V rack power distribution in 2010, as we found it was at least 30% more energy efficient and more cost effective in supporting these higher-performance systems.

Our 48V architecture has since evolved and includes servers with 48V to point-of-load designs, and rack-level 48V Li-Ion UPS systems. Google has been designing and using 48V infrastructure at scale for several years, and we feel comfortable with the robustness of the design and its reliability.

As the industry works to solve these same problems and deals with higher-power workloads, such as GPUs for machine learning, it makes sense to standardize this new design by working with OCP. We believe this will help everyone adopt this next-generation power architecture and realize the same power efficiency and cost benefits as Google.

The Open Compute community is an established collection of consumers and producers, and we see an opportunity to contribute our experience and expand the Open Rack specification. We’re collaborating with Facebook on a common 48V rack that we intend to submit for consideration by OCP.

Today’s launch is a first step in a larger effort. We think there are other areas of possible collaboration with OCP. We’ve recently begun engaging the industry to identify better disk solutions for cloud-based applications. And we think that we can work with OCP to go even further, looking up the software stack to standardize server and networking management systems. We look forward to new and exciting advancements to come with the OCP community.

- Posted by John Zipfel, Technical Program Manager, Google

Three ways to build Slack integrations on Google Cloud Platform

Slack has become the hub of communications for many teams. It integrates with many services, like Google Drive, and there are ways to integrate with your own infrastructure. This blog post describes three ways to build Slack integrations on Google Cloud Platform using samples from our Slack samples repository on GitHub. Clone it with:

git clone https://github.com/GoogleCloudPlatform/slack-samples.git

Or make your own fork to use as the base for your own integrations. Since all the platforms we use in this tutorial support incoming HTTPS connections, all these samples could be extended into Slack Apps and distributed to other teams.


Using Slack for notifications from a Compute Engine instance


If you're using Google Compute Engine as a virtual private server, it can be useful to get an alert to know who's using a machine. This could serve as an audit log, but it's also useful to know when someone is using a shared machine so you don't step on each other's changes.

To get started, we assume you have a Linux Compute Engine instance. You can follow this guide to create one and follow along.

Create a Slack incoming webhook and save the webhook URL. It will look something like https://hooks.slack.com/services/YOUR/SLACK/INCOMING-WEBHOOK. Give the hook a nice name, like "SSH Bot" and a recognizable icon, like a lock emoji.

Next, SSH into the machine and clone the repository. We'll be using the notify sample for this integration.

git clone https://github.com/GoogleCloudPlatform/slack-samples.git
cd slack-samples/notify

Create a file slack-hook with the webhook URL and test your webhook out.

nano slack-hook
# paste in URL, write out, and exit
PAM_USER=$USER PAM_RHOST=testhost ./login-notify.sh

The script sends a POST request to your Slack webhook. You should receive a Slack message notifying you of this.
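Under the hood, the webhook contract is simply an HTTPS POST of a JSON payload with a text field. Here's the same call sketched in Node.js; the environment variable names are illustrative, and the shell script does the equivalent:

var url = require('url');
var https = require('https');

// PAM_USER and PAM_RHOST are set by pam_exec at login time.
var payload = JSON.stringify({
  text: 'SSH login by ' + process.env.PAM_USER +
        ' from ' + process.env.PAM_RHOST
});

var hook = url.parse(process.env.SLACK_HOOK_URL);  // your webhook URL
var req = https.request({
  hostname: hook.hostname,
  path: hook.path,
  method: 'POST',
  headers: {'Content-Type': 'application/json'}
}, function(res) {
  console.log('Slack responded with HTTP %d', res.statusCode);
});
req.end(payload);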


We'll be adding a PAM hook to run whenever someone SSHes into the machine. Verify that SSH is using PAM by making sure there's a line "UsePAM yes" in the /etc/ssh/sshd_config file.

sudo nano /etc/ssh/sshd_config

We can now set up the PAM hook by running the install.sh script as root. It creates a /etc/slack directory and copies the login-notify.sh script and slack-hook configuration there.

It also configures /etc/pam.d/sshd to run the script whenever someone SSHes into the machine, by adding the following line:

session optional pam_exec.so seteuid /etc/slack/login-notify.sh

Keep this SSH window open in case something goes wrong, and verify that you can log in from another SSH terminal. You should receive another notification on Slack, this time with the real remote host IP address.

Building a bot and running it in Google Container Engine


If you want to run a Slack bot, one of the easiest ways to do it is to use Beep Boop, which will take care of running your bot on Cloud Platform for you, so you can focus on making the bot the best it can be.

A Slack bot connects to the Slack Real Time Messaging API using Websockets; it runs as a long-running process, listening to and sending messages. Google Container Engine provides a nice balance of control for running a bot. It uses Kubernetes to keep your bot running and manage your secret tokens. It's also one of the easiest ways to run a server that uses Websockets on Cloud Platform. We'll walk you through running a Node.js Botkit Slack bot on Container Engine, using Google Container Registry to store our Docker image.
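Before diving into the steps, here's a hedged sketch of what a minimal Botkit bot looks like. The actual sample lives in the repository; this sketch assumes the Slack token reaches the process as an environment variable (you'll create the token and a Kubernetes secret for it in the steps below):

var Botkit = require('botkit');

var controller = Botkit.slackbot();

// Assumes the bot token is provided via the environment.
controller.spawn({
  token: process.env.SLACK_TOKEN
}).startRTM(function(err) {
  if (err) {
    throw new Error('Could not connect to Slack: ' + err);
  }
});

// Reply whenever someone says hello to the bot.
controller.hears('Hello', ['direct_message', 'direct_mention', 'mention'],
    function(bot, message) {
  bot.reply(message, 'Hello to you too!');
});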

First, set up your development environment for Google Container Engine. Clone the repository and change to the bot sample directory.

git clone https://github.com/GoogleCloudPlatform/slack-samples.git
cd slack-samples/bot

Next, create a cluster, if you don't already have one:

gcloud container clusters create my-cluster

Create a Slack bot user and get an authentication token. We'll be loading this token in our bot using the Kubernetes Secrets API. Replace MY-SLACK-TOKEN with the one for your bot user. The generate-secret.sh script creates the secret configuration for you by doing a simple text substitution in a template.

./generate-secret.sh MY-SLACK-TOKEN
kubectl create -f slack-token-secret.yaml

Now, build the Docker container. Replace my-cloud-project-id below with your Cloud Platform project ID. This tags the container so that the gcloud command-line tool can upload it to your private Container Registry.

export PROJECT_ID=my-cloud-project-id
docker build -t gcr.io/${PROJECT_ID}/slack-bot .

Once the build completes, upload it.

gcloud docker push gcr.io/${PROJECT_ID}/slack-bot

Next, create a replication controller configuration, populated with your project ID, so that Kubernetes knows where to load the Docker image from. Like generate-secret.sh, the generate-rc.sh script creates the replication controller configuration for you by doing a simple text substitution in a template.

./generate-rc.sh $PROJECT_ID

Now, tell Kubernetes to create the replication controller to start running the bot.

kubectl create -f slack-bot-rc.yaml

You can check the status of your bot with:

kubectl get pods

Now your bot should be online and respond to "Hello."

Shut down and clean up

To shut down your bot, tell Kubernetes to delete the replication controller.

kubectl delete -f slack-bot-rc.yaml

If you've created a container cluster, you may still get charged for the Compute Engine resources it's using, even if they're idle. To delete the cluster, run:

gcloud container clusters delete my-cluster

This deletes the Compute Engine instances that are running the cluster.

Building a Slash command on Google App Engine


App Engine is a great platform for building Slack slash commands. Slash commands require that the server support SSL with a valid certificate. App Engine supports HTTPS without any configuration for apps using the provided *.appspot.com domain, and it supports SSL for custom domains. App Engine also provides great auto-scaling: you automatically get more instances with more usage and fewer (as few as zero, or a configurable minimum) when demand goes down, plus a free tier to make it easy to get started.

We'll be using Go on App Engine, but you can use any language supported by the runtime, including Python, Java1, and PHP.

Clone the repository. We'll be using the slash command sample for this integration.

git clone https://github.com/GoogleCloudPlatform/slack-samples.git
cd slack-samples/command/1-custom-integration

If you can reach your development machine from the internet, you should be able to test locally. Create a Slash Command and point it at http://your-machine:8080/quotes/random and run:

goapp serve --host=0.0.0.0

Now that we see it's working, we can deploy it. Replace your-project-id with your Cloud Platform project ID in the following command and run:

goapp deploy -application your-project-id ./

Update your Slash Command configuration and try it out!

If you want to publish your command to be used by more than one team, you'll need to create a Slack App. This will give you an OAuth Client ID and Client Secret. Plug these values into the config.go file of the App sample and deploy in the same way to get an "Add to Slack" button.

- Posted by Tim Swast, Developer Programs Engineer



1 Java is a registered trademark of Oracle and/or its affiliates.

TensorFlow machine learning with financial data on Google Cloud Platform

If you knew what happened in the London markets, how accurately could you predict what will happen in New York? It turns out, this is a great scenario to be tackled by machine learning!

The premise is that by following the sun and using data from markets that close earlier, such as London, which closes 4.5 hours ahead of New York, you can predict market behavior correctly about 7 out of 10 times.

We’ve published a new solution, TensorFlow Machine Learning with Financial Data on Google Cloud Platform, that looks at this problem. We hope you’ll enjoy exploring it with us interactively in the Google Cloud Datalab notebook we provide.

As you go through the solution, you’ll query six years of time series data for eight different markets using Google BigQuery, explore that data using Cloud Datalab, then produce two powerful TensorFlow models on Cloud Platform.

TensorFlow is Google’s next-generation machine learning library, allowing you to build high-performance, state-of-the-art, scalable deep learning models. Cloud Platform provides the compute and storage on demand required to build, train and test those models. The two together are a marriage made in heaven and can provide a tremendous force multiplier for your business.

This solution is intended to illustrate the capabilities of Cloud Platform and TensorFlow for fast, interactive, iterative data analysis and machine learning. It does not offer any advice on financial markets or trading strategies. The scenario presented in the tutorial is an example. Don't use this code to make investment decisions.

- Posted by Corrie Elston, Solutions Architect, Google Cloud Platform