Posted by Brian Stevens, Vice President, Google Cloud
From the beginning, our goal for Google Cloud Platform has been to build the most open cloud for all developers and businesses alike, and make it easy for them to build and run great software. A big part of this is being an active member of the open source community and working directly with developers where they are, whether they’re at an emerging startup or a large enterprise.
Today we're pleased to announce that Google has joined the Cloud Foundry Foundation as a Gold member to further our commitment to these goals.
Collaborating with customers and partners as we’ve worked on these projects made the decision to join the Cloud Foundry Foundation simple. It's an energized community with vast enterprise adoption, and the technical collaboration between the various teams has been remarkable.
What’s next
Joining the Cloud Foundry Foundation allows us to be even more engaged and collaborative with the entire Cloud Foundry ecosystem. And as we enter 2017, we look forward to even more integrations and more innovations between Google, the Cloud Foundry Foundation and our joint communities.
Posted by Nan Boden, Head of Global Technology Partners, Google Cloud
Google Cloud’s guiding philosophy is to enable what’s next, and gaming is one industry that’s constantly pushing what’s possible with technical innovation. At Google, we're no strangers to these advancements, from AlphaGo’s machine learning breakthrough to Pokémon GO’s achievements in scaling and mapping on GCP.
We are always seeking new partners who share our enthusiasm for innovation, and today we are announcing a partnership with Improbable, a company focused on building large-scale, complex online worlds through its distributed operating system, SpatialOS. As part of the partnership, Improbable is launching the SpatialOS Games Innovation Program, which gives game developers credits to access Improbable’s technology powered by GCP, along with the freedom to get creative and experiment with what’s possible right up until they launch their game. Today, game developers can join the SpatialOS open alpha and start to prototype, test and deploy games to the cloud. The program will fully launch in Q1 2017, along with the SpatialOS beta.
SpatialOS allows game developers to create simulations of great scale (a single, highly detailed world can span hundreds of square miles), great complexity (millions of entities governed by realistic physics) and huge populations (thousands of players sharing the same world). These exciting new games are possible with SpatialOS plus the scalability, reliability and openness of GCP, including the use of Google Cloud Datastore’s fully managed NoSQL database and Google Compute Engine’s internal network, instance uptime, live migration and provisioning speed.
Bossa Studios is already using SpatialOS and GCP to build Worlds Adrift, a 3D massively multiplayer game set to launch in early 2017. In Worlds Adrift, thousands of players share a single world of floating islands that currently cover more than 1,000 km². Players form alliances, build sky-ships and become scavengers, explorers, heroes or pirates in an open, interactive world. They can steal ships and scavenge wrecks, while the islands’ flora and fauna flourish and decline over time.
We see many opportunities for GCP to support developers building next-generation games and look forward to what game studios large and small will create out of our partnership with Improbable. To join the SpatialOS open alpha or learn more about the developer program visit SpatialOS.com.
Today Red Hat is announcing the general availability of its OpenShift Dedicated service running on Google Cloud Platform (GCP). This combination helps speed the adoption of Kubernetes, containers and cloud-native application patterns.
We often hear from customers that they need open source tools that enable their applications across both their own data centers and multiple cloud providers. Our collaboration with Red Hat around Kubernetes and OpenShift is a great example of how we're committed to working with partners on open hybrid solutions.
OpenShift Dedicated on GCP offers a new option to enterprise IT organizations that want to use Red Hat container technology to deploy, manage and support their OpenShift instances. With OpenShift Dedicated, developers maintain control over the build and isolation process for their applications. Red Hat acts as the service provider, managing OpenShift Dedicated and offering support, helping customers focus more heavily on application development and business velocity. We'll also be working with Red Hat to make it easy for customers to augment their OpenShift applications with GCP’s broad and growing portfolio of services.
OpenShift and Kubernetes
As the second-largest contributor to the project, Red Hat is a key collaborator helping to evolve and mature Kubernetes. Red Hat also uses Kubernetes as a foundation for Red Hat OpenShift Container Platform, which adds a service catalog, build automation, deployment automation and application lifecycle management to meet the needs of its enterprise customers.
OpenShift Dedicated is underpinned by Red Hat Enterprise Linux, and marries Red Hat’s enterprise-grade container application platform with Google’s 12+ years of operational expertise around containers (and the resulting optimization of our infrastructure for container-based workloads).
Enterprise developers who want to complement their on-premises infrastructure with cloud services and a global footprint, but who still want stable, more secure, open source solutions, should try OpenShift Dedicated on Google Cloud Platform, either as a complement to an on-premises OpenShift deployment or as a standalone offering. You can sign up for the service here. We welcome your feedback on how to make the service even better.
Example application: analyzing a Tweet stream using OpenShift and Google BigQuery
We’re also working with Red Hat to make it easy for you to augment your OpenShift-based applications wherever they run. Below is an early example of using BigQuery, Google's managed data warehouse, and Google Cloud Pub/Sub, its real-time messaging service, with Red Hat OpenShift Dedicated. This can be the starting point to incorporate social insights into your own services.
Step 1: Set up a service account. A service account is a way to interact with your GCP resources using a different identity than your primary login, and is generally intended for server-to-server interaction. From the GCP Navigation Menu, click on "Permissions."
Once there, click on "Service accounts."
Click on "Create service account," which will prompt you to enter a service account name. Name your service account and click on "Furnish a new private key." Select the default "JSON" Key type.
Step 2: Once you click "Create," a service account key (a “.json” file) will be downloaded to your browser’s downloads location.
Important: Like any credential, this represents an access mechanism to authenticate and use resources in your GCP account — KEEP IT SAFE! Never place this file in a publicly accessible source repo (e.g., public GitHub).
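If you prefer the command line, steps 1 and 2 can be approximated with gcloud; the account name below is illustrative, not a required value:

$ gcloud iam service-accounts create openshift-bq-demo
$ gcloud iam service-accounts keys create ~/credentials.json \
    --iam-account openshift-bq-demo@YOUR_PROJECT.iam.gserviceaccount.com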
Step 3: We’ll be using the JSON credential via a Kubernetes secret deployed to your OpenShift cluster. To do so, first perform a base64 encoding of your JSON credential file:
$ base64 -i ~/path/to/downloads/credentials.json
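Note for Linux users: GNU base64 wraps its output at 76 characters by default, so pass -w 0 (i.e., base64 -w 0 ~/path/to/downloads/credentials.json) to get the single-line string the next step expects.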
Keep the output (a very long string) ready for use in the next step, where you’ll replace ‘BASE64_CREDENTIAL_STRING’ in the secret example (below) with the output of the base64 encoding.
Important: Note that base64 is an encoding (not encryption) and can be readily reversed, so this file (with the base64 string) should be treated with the same high degree of care as the credential file mentioned above.
Step 4: Create the Kubernetes secret inside your OpenShift cluster. A secret is the proper place to make sensitive information available to pods running in your cluster (like passwords or the credentials downloaded in the previous step). This is what your secret definition will look like (e.g., google-secret.yaml):
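A minimal sketch, assuming the secret and key names below (they're illustrative, not required values):

apiVersion: v1
kind: Secret
metadata:
  name: google-services-secret
type: Opaque
data:
  google-services.json: BASE64_CREDENTIAL_STRING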
You’ll want to add this file to your source-control system (minus the credentials).
Replace ‘BASE64_CREDENTIAL_STRING’ with the base64 output from the prior step.
Step 5: Deploy the secret to the cluster:
$ oc create -f google-secret.yaml
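You can confirm the secret exists with:

$ oc get secrets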
Step 6: Now you can use Google APIs from your OpenShift cluster. To take your GCP-enabled cluster for a spin, try going through the steps detailed in Real-Time Data Analysis with Kubernetes, Cloud Pub/Sub and BigQuery, a solutions document. You’ll need to make two minor tweaks for the solution to work on your OpenShift cluster:
For any pod that needs to access Google APIs, modify it to reference the secret, including exporting the environment variable “GOOGLE_APPLICATION_CREDENTIALS” to the pod (here’s more information on application default credentials); a sketch of these additions follows this list.
In the Pub/Sub-to-BigQuery solution, that means you’ll modify two pod definitions: pubsub/bigquery-controller.yaml and pubsub/twitter-stream.yaml
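Here's a hedged sketch of those additions (the container image is a placeholder, and the secret name and mount path follow the Step 4 sketch rather than values prescribed by the solution):

spec:
  containers:
  - name: bigquery-controller
    image: YOUR_IMAGE  # whatever the solution's pod definition specifies
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/secrets/google-services.json
    volumeMounts:
    - name: google-cloud-key
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: google-cloud-key
    secret:
      secretName: google-services-secret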
Step 7: Finally, anywhere the solution instructs you to use "kubectl," replace that with the equivalent OpenShift command "oc."
That’s it! If you follow along with the rest of the steps in the solution, you’ll soon be able to query (and see) tweets showing up in your BigQuery table — arriving via Cloud Pub/Sub. Going forward with your own deployments, all you need to do is follow the above steps of attaching the credential secret to any pod where you use Google Cloud SDKs and/or access Google APIs.
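For example, once data is flowing, a query along these lines (the dataset and table names are whatever you chose while following the solution, not fixed values) should return recent tweets:

$ bq query "SELECT text FROM [YOUR_PROJECT:YOUR_DATASET.tweets] LIMIT 10"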
Posted by Chuck Coulson, Global Technology Partnerships
If your organization runs IBM software, we have news for you: Google Cloud Platform is now officially an IBM Eligible Public Cloud, meaning you can run a wide range of IBM software SKUs on Google Compute Engine with your existing licenses.
Under IBM's Bring Your Own Software License (BYOSL) policy, customers who have licensed, or wish to license, IBM software through either Passport Advantage or an authorized reseller may now run that software on Compute Engine. This applies to the majority of IBM's vast catalog of software: everything from middleware and DevOps products (WebSphere, MQ Series, DataPower, Tivoli) to data and analytics offerings (DB2, Informix, Cloudant, Cognos, BigInsights).
What comes next depends on you. Help us identify the IBM software that needs to be packaged, tuned, and optimized for Compute Engine. You can let us know what IBM software you plan to run on Google Cloud by taking this short survey. And feel free to reach out to me directly with any questions.
Posted by Hanan Youssef, Product Manager, Google Cloud Platform
Google Cloud Platform’s focus on infrastructure excellence allows us to provide great price-performance and access to the latest hardware innovations. Working closely with hardware vendors, we help guide new advancements in data center technology and the speed at which Google Cloud Platform (GCP) customers can use them.
Yesterday, Google Cloud announced a strategic alliance with Intel that builds on our long-standing relationship developing data center technology. Today, we're excited to announce that Google Compute Engine will support Intel’s latest Custom Cloud solution based on the next-generation Xeon Processor (codenamed Skylake) in early 2017.
The upcoming Xeon processor is an excellent choice for graphics rendering, simulations and any CPU-intensive workload. At launch, Compute Engine customers will be able to use the processor’s AVX-512 extensions to optimize their enterprise-class and HPC workloads. We'll also add support for additional Skylake extensions over time.
You'll be able to use the new processor with Compute Engine’s standard, highmem, highcpu and custom machine types. We also plan to continue to introduce bigger and better VM instance types that offer more vCPUs and RAM for compute- and memory-intensive workloads.
If you’d like to be notified of upcoming Skylake beta testing programs, please fill out this form.
Google Cloud has continued its push into media and entertainment since completing the acquisition of online video platform Anvato and collaborating with Autodesk at NAB earlier this year. Media use cases like multi-screen video transcoding, livestreaming to global audiences and 3D rendering drive demand from customers in every industry where video and creative content is used, from advertising to education and beyond.
Today we're announcing an expansion of ZYNC Render to support two new host platforms: SideFX Houdini and Maxon Cinema 4D. Both integrations bring the cost and productivity benefits of Google Cloud Platform (GCP) to animators, putting massive scalability and on-demand compute within reach of the animation industry.
ZYNC is a turnkey rendering solution for boutique to mid-size studios that allows artists to focus on content creation. It does this through plugins for the popular modeling and animation software artists already use, offering one-stop access to powerful compute, storage and software licenses on GCP.
Users of Houdini, a leading package for complex 3D effects, can now utilize up to 32,000 cores on GCP for their rendering projects. Purchasing a traditional render farm of this size is well beyond the resources of most small to mid-size studios. Instead, artists can render on demand, paying on a per-minute basis with the full economic benefits of GCP.
The Maxon Cinema 4D integration, a popular package for creating motion graphics, marks the first time Maxon has enabled its product through a cloud rendering service. As artists create more complicated scenes for commercial work and feature films, on-demand, scalable cloud rendering has emerged as a critical tool for studios trying to meet tight deadlines.
The media team at Google Cloud is excited to bring cloud-based rendering to the animation industry. We continue to be driven by empowering creative professionals with world-class infrastructure, giving even the smallest studio equal resources to rival the largest production houses.
Posted by Doron Meirfeld, Head of DevOps at JFrog and Mansirman Singh, Solutions Engineer at JFrog
Editor's Note: Today we hear from our partner JFrog, which recently refactored its Artifactory SaaS offering onto Google Cloud Platform (GCP), making it possible for joint JFrog and GCP customers to co-locate their development and production environments. Read on for more details about how JFrog architected and optimized the service to run on GCP.
JFrog Artifactory SaaS is a universal artifact repository hosted in the cloud. Our customers use it as the one-stop binary shop for their Docker registry and their Maven, npm, PyPI, NuGet and other repositories. It offers freedom of choice in several dimensions, supporting all major package formats, CI servers and build tools. We recognized the need to add Google Cloud Platform (GCP) to offer more choice, and to support organizations already using GCP so they can co-locate all their cloud services. We set up Artifactory SaaS hosted on GCP using Google Cloud Storage as its massively scalable object store.
JFrog Artifactory SaaS deployment architecture
While setting up an enterprise-grade cloud service can get complicated, GCP offers an extensive range of services to make it easier. The architecture we developed is constructed as four layers based on GCP services, mainly Google Compute Engine and Cloud Storage. The four layers are:
Network load balancers balance requests coming from the outside world into the front-end Nginx web stacks
Web servers based on stacked Nginx servers are responsible for internal application load balancing and proxying requests
Artifactory application servers
Data and Metadata Management using Google Cloud SQL and Cloud Storage. Cloud SQL manages the Artifactory internal database fundamental to the product’s checksum-based storage, and Cloud Storage is the application’s massively scalable object store, where all the binary artifacts are actually hosted.
Onboarding and Provisioning
To onboard new customers quickly and easily, we developed a scripted provisioning service. As soon as you register for a free trial, the JFrog Store triggers the provisioning mechanism to automatically set up and configure all layers in the service architecture, helping you get up and running virtually instantly. This structured and efficient onboarding mechanism made it easy to adopt GCP as an additional underlying platform for the service: we simply swapped out the API calls in our internal provisioning solution and replaced them with the relevant GCP API calls, such as those in the Compute Engine and Cloud Storage services.
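For a flavor of what one such swapped-in call amounts to, here's an illustrative gcloud equivalent for provisioning an Artifactory application server (a sketch only, not JFrog's actual provisioning code; the instance name, zone, machine type and image are made up):

$ gcloud compute instances create artifactory-node-1 \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --image-family debian-8 --image-project debian-cloud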
The four tenets of an enterprise-grade service
In designing our setup, we wanted to ensure it would meet the requirements of any enterprise:
Scalability
Every layer in the architecture can be quickly scaled by our provisioning mechanism to meet any load requirements. Whether adding Compute Engine instances to the network or web service layers, or adding Artifactory instances, any element can be scaled up on demand without incurring any downtime, and storage scales automatically as needed.
High availability
Our years of experience with Artifactory SaaS taught us that one of the most critical issues enterprises weigh when considering a cloud service is availability. Since an artifact repository is a vital resource for a development organization, any downtime can quickly translate to dollars lost. Our architecture takes availability to the extreme, with redundancy implemented at every level and no single point of failure. The whole system is redundantly deployed across multiple distinct zones, so even if there's a general failure in one zone, the system fails over to another zone and continues to serve customers. As demand for JFrog Artifactory SaaS on GCP increases, we have complete flexibility to quickly set up redundant installations in additional zones on demand.
Disaster recovery capabilities
To support disaster recovery, we use Cloud Storage's built-in multi-region storage and replicate Artifactory’s Cloud SQL database, which contains both application and customer data. This allows us to fail over to a recovery region in case of an outage in the main active region, with no noticeable impact to users.
Security
The system maintains a clear separation at every level for both customers on a dedicated Artifactory SaaS installation and those on multi-tenanted installations. Dedicated customer installations use separate virtual devices at each layer of the architecture for maximum security. For customers in a multi-tenant environment, our provisioning mechanism automatically creates clearly separated folders, Artifactory servers and web server configurations as well as a dedicated filestore in Cloud Storage buckets.
Lessons learned
While JFrog Artifactory SaaS was already a mature and robust service, we learned a few lessons while migrating the service to GCP.
Tweak to peak
While a service may run on top of different infrastructures, each cloud provider has its nuances. To optimize JFrog Artifactory SaaS on GCP, we focused on tuning resource allocation in each layer, such as the number of threads, to get optimal performance.
Offload to buckets
Working with Cloud Storage buckets made it much easier for us to manage the service than traditional storage solutions such as NFS. Tasks like monitoring folder sizes and storage capacity became non-issues, since these functions are provided by Cloud Storage. On the whole, our service got “lighter.”
Setting up JFrog Artifactory SaaS on top of GCP was a great decision for us. Our past experience with Artifactory SaaS helped us in migrating and modifying our cloud service to a new platform while maintaining the same high quality of service. As a leader in binary artifact management, we take "universality" as a guiding principle, and believe that hosting our service on GCP is a great way to serve our customers. We'll continue to grow with Google Cloud as more services are added to enhance scalability, availability and reliability.
Open source developer? Get a free ride
JFrog’s tools are made by developers for developers. We are part of the OSS community and strive to provide it with the best vehicle to ride in. Together with the Google Cloud team, JFrog is happy to sponsor repositories free of charge (including Artifactory SaaS on top of a JFrog sponsored GCP infrastructure) for open source projects. Browse to the registration form and feel free to submit your request.
Posted by Jay Marshall, Principal Strategic Advisor
A new way for enterprises to capitalize on Google scale and innovation
Our goal for Google Cloud Platform (GCP) is to build the most open cloud for all businesses, and make it easy for them to build and run great software. This means being good stewards of the open source community, and having strong engineering partnerships with like-minded industry leaders.
Today, we're happy to announce more about our collaboration with Pivotal. Its cloud-native platform, Pivotal Cloud Foundry (PCF), is based on the open source Cloud Foundry project that it started many years ago. It was a natural fit for the two companies to start working together.
A differentiated Pivotal Cloud Foundry with Google
Customers can now deploy and operate Pivotal Cloud Foundry on GCP. This is a powerful combination that brings Pivotal’s enterprise cloud-native experience together with Google’s infrastructure and innovative technology.
So what does that mean in the real world? Deployments of PCF on GCP can include:
Cloud Foundry applications deployed on GCP utilize Google Cloud Load Balancing, which scales automatically from zero requests to millions
Industry-leading price-performance on cloud compute resources, including GCP-specific pricing features such as per-minute billing, sustained use discounts and inferred usage discounts
Further, the combination of PCF and GCP allows customers to access Google’s data and machine learning (ML) services within customer applications via custom-built service brokers that expose GCP services directly into Cloud Foundry.
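From a developer's point of view, consuming one of these brokered services then looks like any other Cloud Foundry service. A hypothetical session (the service and plan names here are illustrative, not the broker's confirmed catalog):

$ cf marketplace
$ cf create-service google-bigquery default my-bigquery
$ cf bind-service my-app my-bigquery
$ cf restage my-app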
This level of integration with Google’s infrastructure enables enterprises to build and deploy apps that can scale, store and analyze data quickly. Several of Google's data and machine learning services are available in Pivotal Cloud Foundry today.
We pride ourselves on our “engineer-to-engineer” approach to working with customers and partners. And that’s exactly how we worked with The Home Depot, a shared customer of GCP and Pivotal Cloud Foundry.
The Home Depot software development team worked side-by-side with Google and Pivotal as they co-engineered the integration of PCF on GCP. Together, they’re building business systems for a digital strategy around this partnership, and will be running parts of homedepot.com on PCF and GCP in time for this year’s Black Friday.
Getting started
We've published a “Pivotal Cloud Foundry on Google Cloud Platform” solutions document that provides an example deployment architecture, as well as links to various setup guides. These links range from the lower-level OSS bits up through step-by-step installation guides with screenshots from our friends at Pivotal. It's a comprehensive guide to help you get started with PCF on GCP.
What’s next
Bringing more GCP services into the Cloud Foundry ecosystem is a priority, and we’re looking at how we can further contribute to the Spring community. Stay tuned for more news and updates. In the meantime, reach out to your local Pivotal or Google Cloud sales team, or contact Sales to talk to someone about this exciting partnership.
To evaluate campaigns and analyze the impact of various media channels, you need to be able to identify every source of traffic. You're probably already doing this by connecting Google Analytics with your AdWords account and by customizing links from other purchased channels with custom campaign parameters. Google’s URL builder is often used to generate these links containing campaign parameters.
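For example, a link tagged with the standard campaign parameters might look like this:

http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale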
In large organizations, it is not uncommon for different parts of the organization to purchase and drive traffic to the website. One group might be responsible for paid search, another for affiliates, and a third for driving traffic from social media. This division places high demands on the structure of your data if you want to be able to compare the various media channels or see the combined effect of a campaign – no matter which channel the message was communicated in.
Setting a naming convention for campaign names and deciding how to name all media channels is challenging when many people within the organization must comply with the naming guidelines. Without a consistent definition of the site’s traffic sources, it is also more difficult to create attribution models, as these require good-quality data in order to be reliable.
To simplify the generation of consistent links tagged with custom campaign parameters, Outfox, a Google Analytics Certified Partner and Google Analytics 360 Reseller, has developed GA Campaign URL, an add-on for Google Sheets. This tool helps you create tag sheets, where you can guide users on which parameters to use and let them select, for example, the medium, source and campaign from drop-down lists. Since you can easily share the spreadsheet among users and control (via permissions) who can add and change the values in the drop-down lists, you can create tag sheets tailored to the needs and organization of your company.
Read more about how to use GA Campaign URL on our Help page or install it for Google Sheets!
Posted by Christoffer Lutheran, Director of Analytics at Google Analytics Certified Partner Outfox
Integration with rich third-party ops solutions is important for customers, and we know that many of you are already using these tools to manage hybrid operations in private and public clouds. With that in mind, these partnerships are focused on delivering:
Easily configurable integration of Cloud Platform with partners
New and complementary capabilities, specifically around Security Information and Event Management (SIEM) and compliance reporting
Google Cloud Platform and Splunk
Cloud Platform’s integration with Splunk Enterprise provides insight into operations and leverages Splunk’s unique SIEM capabilities.
Turn on the integration by configuring real-time streaming of log data via the Google Cloud Pub/Sub API — a powerful and reliable messaging service responsible for routing data between applications at scale. Once you've configured the integration to stream all of the log data to your Splunk account, you can access the full richness of Splunk Enterprise capabilities. See the details on the partnership here.
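As a rough sketch, the GCP side of that configuration might look like the following (the topic and sink names are examples, not values Splunk requires):

$ gcloud pubsub topics create gcp-logs
$ gcloud logging sinks create splunk-sink \
    pubsub.googleapis.com/projects/YOUR_PROJECT/topics/gcp-logs

The sink command prints a writer identity; grant it the Pub/Sub Publisher role on the topic, then point your Splunk input at a subscription on that topic.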
Let’s take a scenario in which a network administrator would like to monitor sensitive changes to firewall rules on your network. When an employee in your organization changes firewall rules on any server, the activity is logged by the compute service. The integration between Splunk Enterprise and Stackdriver Logging allows you to monitor such activities and get alerted in real time. Splunk automatically identifies interesting trends and anomalies in your system activity data. When you get alerted, you can see the chart, drill down to the actual log entry and revert any undesirable changes that might put your system at risk, making it more secure.
Google Cloud Platform and BMC
Our enterprise customers are increasingly building and operating hybrid and even multi-cloud environments. As you migrate existing services and launch new ones on Cloud Platform, we want to ensure that you have access to established solutions like those from BMC that provide a single pane of glass to manage and monitor applications across deployment environments and help with compliance, security and governance.
To kick off this strategic partnership, BMC demonstrated at GCP NEXT an advanced version of its Cloud Lifecycle Management product managing and repairing a workload running on Cloud Platform. The company also showcased how Cloud Platform applications can be monitored alongside on-premises installations using BMC TrueSight.
You can learn more about BMC’s suite of solutions here.
Google Cloud Platform and Tenable
We understand that to secure your organization you have to know what applications and workloads are running in it, and who and what devices are trying to access it. Tenable helps secure Cloud Platform with SecurityCenter Continuous View (SecurityCenter CV). This solution supports both on-premises and cloud deployments like Cloud Platform. As a result, organizations familiar with this tool can employ a single technology for monitoring hybrid environments, thereby eliminating the need to buy, deploy and manage multiple tools.
To get started, install SecurityCenter CV, create a service account within Cloud Platform, and assign the Tenable service account permissions on the Pub/Sub topic you plan to use. Then publish the logs to that topic; once your Tenable account subscribes to it, the log and event data will appear in SecurityCenter CV.
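A hedged sketch of those GCP-side steps (the account, topic and subscription names are illustrative):

$ gcloud iam service-accounts create tenable-scv
$ gcloud projects add-iam-policy-binding YOUR_PROJECT \
    --member serviceAccount:tenable-scv@YOUR_PROJECT.iam.gserviceaccount.com \
    --role roles/pubsub.subscriber
$ gcloud pubsub subscriptions create tenable-sub --topic YOUR_LOG_TOPIC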
Our vision at Cloud Platform is to create a strong ecosystem of partners that provide flexibility and richness of tools, giving you choice and eliminating constraints. Today, we expanded our partners in the ops domain. Please visit the Google Stackdriver Partner page to see the full details on our existing and new ops partners and how you can start using them today.