Tag Archives: Announcements

Bust a move with Transfer Appliance, now generally available in the U.S.



As we celebrate the upcoming Los Angeles Google Cloud Platform (GCP) region in one of the creative centers of the world, we are excited to share news about a product that can help you get your data there as fast as possible. Google Transfer Appliance is now generally available in the U.S., with a few new features that will simplify moving data to Google Cloud Storage. Customers have been using Transfer Appliance for almost a year, and we’ve heard great feedback.

The Transfer Appliance is a high-capacity server that lets you transfer large amounts of data to GCP, quickly and securely. It’s recommended if you’re moving more than 20TB of data, or data that would take more than a week to upload.
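To see why a week is a meaningful threshold, here's a rough back-of-the-envelope estimate; the link speed and utilization below are illustrative assumptions, not a recommendation:

```python
# Rough upload-time estimate for 20 TB over a typical office uplink.
# Assumptions (illustrative): a 100 Mbps link at ~80% sustained utilization.
data_bits = 20 * 10**12 * 8          # 20 TB expressed in bits
effective_bps = 100 * 10**6 * 0.8    # 100 Mbps at 80% utilization
days = data_bits / effective_bps / 86400
print(f"{days:.1f} days")            # ~23 days, well past the one-week mark
```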

You can now request a Transfer Appliance directly from your Google Cloud Platform console. Indicate the amount of data you’re looking to transfer, and our team will help you choose the version that is the best fit for your needs.

The service comes in two configurations: 100TB or 480TB of raw storage capacity. With typical compression, you can expect to fit about twice the raw capacity. The 100TB model is priced at $300, plus express shipping (approximately $500); the 480TB model is priced at $1,800, plus shipping (approximately $900).

You can mount Transfer Appliance as an NFS volume, making it easy to drag and drop files, or run rsync, from your current NAS to the appliance. This simplifies the transfer of file-based content to Cloud Storage, and helps our migration partners expedite the move for customers.
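As a rough illustration of that workflow (the mount point and NAS paths below are hypothetical), copying a library to the appliance from a Linux host could look like this:

```python
import subprocess

# Hypothetical paths: the appliance's NFS share is assumed to be mounted
# at /mnt/transfer-appliance, and the existing NAS at /mnt/nas.
subprocess.run(
    ["rsync", "-av", "--progress",
     "/mnt/nas/creative-library/",                   # source on the NAS
     "/mnt/transfer-appliance/creative-library/"],   # destination on the appliance
    check=True,  # raise if rsync reports an error
)
```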
"SADA Systems provides expert cloud consultation and technical services, helping customers get the most out of their Google Cloud investment. We found Transfer Appliance helps us transition the customer to the cloud faster and more efficiently by providing a secure data transfer strategy."
-Simon Margolis, Director of Cloud Platform, SADA Systems
Transfer Appliance can also help you transition your backup workflow to the cloud quickly. To do that, move the bulk of your current backup data offline using Transfer Appliance, and then incrementally back up to GCP over the network from there. Partners like Commvault can help you do this.

With this release, you’ll also find a more visible end-to-end integrity check, so you can be confident that every bit was transferred intact and can delete your source data with peace of mind.
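If you also want to spot-check objects yourself after a transfer, one approach is to compare a local file's MD5 against the hash Cloud Storage records for the uploaded object. Here is a minimal sketch using the google-cloud-storage Python client, with placeholder bucket and file names:

```python
import base64
import hashlib

from google.cloud import storage

def matches_cloud_md5(bucket_name: str, object_name: str, local_path: str) -> bool:
    """Return True if the local file's MD5 matches the Cloud Storage object's."""
    blob = storage.Client().bucket(bucket_name).get_blob(object_name)
    digest = hashlib.md5()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    # Cloud Storage reports object MD5s as base64-encoded digests.
    return base64.b64encode(digest.digest()).decode() == blob.md5_hash

print(matches_cloud_md5("my-bucket", "archive/video.mov", "/mnt/nas/archive/video.mov"))
```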

Transfer Appliance in action

In developing Transfer Appliance, we built a device designed for the data center, so it slides into a standard 19” rack. That has been a positive experience for our early customers, even those with floating data centers (yes, actually floating; see below for more).

We’ve seen our customers successfully use Transfer Appliance for the following use cases:
  • Migrate your data center (or parts of it) to the cloud.
  • Kick-start your ML or analytics project by transferring test data and staging it quickly.
  • Move large archives of content like creative libraries, videos, images, regulatory or backup data to Cloud Storage.
  • Collect data from research bodies or data providers and move it to Google Cloud for analysis.
We’ve heard about lots of innovative, interesting data projects powered by Transfer Appliance. Here are a few of them.

One early adopter, Schmidt Ocean Institute, is a private non-profit foundation that combines advanced science with state-of-the-art technology to achieve lasting results in ocean research. Their goals are to catalyze the sharing of information and to communicate this knowledge to audiences around the world. For example, the Schmidt Ocean Institute owns and operates the research vessel Falkor, the first oceanographic research vessel with a high-performance cloud computing system installed onboard. Scientists can run models and software and plan missions in near-real time while at sea. With these state-of-the-art technologies onboard, scientists can contribute data to the oceanographic community at large very quickly. Schmidt Ocean Institute uses Transfer Appliance to safely get the data back to shore and make it publicly available to the research community as fast as possible.

“We needed a way to simplify the manual and complex process of copying, transporting and mailing hard drives of research data, as well as making it available to the scientific community as quickly as possible. We are able to mount the Transfer Appliance onboard to store the large amounts of data that result from our research expeditions and easily transfer it to Google Cloud Storage post-cruise. Once the data is in Google Cloud Storage, it’s easy to disseminate research data quickly to the community.”
-Allison Miller, Research Program Manager, Schmidt Ocean Institute

Beatport, a division of LiveStyle, serves an audience of electronic music DJs, producers and their fans. Google Transfer Appliance afforded Beatport the opportunity to rethink their storage architecture in the cloud without affecting their customer-facing network in the process.

“DJs, music producers and fans all rely on Beatport as the home for the world’s electronic music. By moving our library to Google Cloud Storage, we can access our audio data with the advanced tools that Google Cloud Platform has to offer. Managing tens of millions of lossless quality files poses unique challenges. Migrating to the highly performant Cloud Storage puts our wealth of audio data instantly at the fingertips of our technology team. Transfer Appliance made that move easier for our team.”
-Jonathan Steffen, CIO, Beatport
Eleven Inc. creates content, brand experiences and customer activation strategies for clients across the globe. Through years of work for their clients, Eleven built a large library of creative digital assets and wanted a way to cost-effectively store that data in the cloud. Facing ISP network constraints and a desire to free up space on their local asset server quickly, Eleven Inc. used Transfer Appliance to facilitate their migration.

“Working with Transfer Appliance was a smooth experience. Rack, capture and ship. And now that our creative library is in Google Cloud Storage, it's much easier to think about ways to more efficiently manage the data throughout its life-cycle.”
-Joe Mitchell, Director of Information Systems
amplified ai combines extensive IP industry experience with deep learning to offer instant patent intelligence to inventors and attorneys. This requires a lot of patent data for building models. Transfer Appliance helped amplified ai move terabytes of this specialized, essential data to the cloud quickly.

“My hands are already full building deep learning models on massive, disparate data without also needing to worry about physically moving data around. Transfer Appliance was easy to understand, easy to install, and made it easy to capture and transfer data. It just did what it was supposed to do and saved me time which, for a busy startup, is the most valuable asset.”
-Chris Grainger, Founder & CTO, amplified ai
Airbus Defence and Space Geo Inc. uses its exclusive access to radar and optical satellites to offer a stunning library of Earth observation imagery. As part of a major cloud migration effort, Airbus moved hundreds of terabytes of this data to the cloud with Transfer Appliance so it can better serve images to clients from Cloud Storage. Using Transfer Appliance, the team also improved data quality along the way.

“We needed to liberate. To flex on demand and scale in the cloud, and unleash our creativity. Transfer Appliance was a catalyst for that. In addition to migrating an amount of data that would not have been possible over the network, this transfer gave us the opportunity to improve our storage in the process—to clean out the clutter.”
-Dave Wright, CTO, Airbus Defence and Space Geo Inc.


National Collegiate Sports Archives (NCSA) is the creator and owner of the VAULT, which contains years’ worth of college sports footage. NCSA digitizes archival sports footage from leading schools and delivers it via mobile, advertising and social media platforms. With a lot of precious footage to deliver to college sports fans around the globe, NCSA needed a way to move data into Google Cloud Platform quickly and with zero disruption for their users.

“With a huge archive of collegiate sports moments, we wanted to get that content into the cloud and do it in a way that provides value to the business. I was looking for a solution that would cost-effectively, simply and safely execute the transfer and let our teams focus on improving the experience for our users. Transfer Appliance made it simple to capture data in our data center and ship it to Google Cloud.”
-Jody Smith, Technology Lead, NCSA

Tackle your data migration needs with Transfer Appliance

To get detailed information on Transfer Appliance, check out our documentation. And visit our Data Transfer page to learn more about our other cloud data transfer options.

We’re looking forward to bringing Transfer Appliance to regions outside of the U.S. in the coming months. But we need your help: Where should we deploy first? If you are interested in offline data transfer but not located in the U.S., please indicate so in the request form.

If you’re interested in learning more about cloud data migration strategies, check out this session at Next 2018 next month. For more information, and to register, visit the Next ‘18 website.

Six essential security sessions at Google Cloud Next '18



We aim to be the most secure cloud, but what does that mean? If you’re coming to Google Cloud Next '18 next month in San Francisco, now is your chance to identify and understand the technologies and best practices that set Google Cloud Platform (GCP) apart from other cloud providers. There are dozens of breakout sessions dedicated to security, but if time is short, here are six sessions that will give you a solid understanding of foundational GCP security practices and offerings, as well as insight into the cutting-edge security research and development being done by our team.

1. How Google Protects Your Data at Rest and in Transit

First things first. Come learn how Google protects your data within Google infrastructure, both when it’s stored on disk and when it moves across our network for use by various services. Google Cloud Security and Privacy Product Managers Maya Kaczorowski and Il-Sung Lee will also cover additional protections you can put in place, such as Customer-Managed Encryption Keys, IPsec tunnels, and Istio. More details are available here.

2. How Google's Security Infrastructure Design Enabled Rapid, Seamless Response to “Spectre” and “Meltdown”

Not content to sit back and wait, Google has a huge team of security researchers that actively push the limits of our systems. This year, researchers found two significant vulnerabilities in modern compute architectures: Spectre and Meltdown. This session will detail those vulnerabilities, and more to the point, how we remediated them transparently, without customer downtime. Learn more here.

3. BeyondCorp Beyond Google

New Google employees always marvel at how they can access Google resources from anywhere, without a VPN. That’s made possible by our BeyondCorp model, and core BeyondCorp technologies such as global-scale security proxies, phishing-resistant second-factor authentication, and laptop security enforcement are increasingly available to Google Cloud customers. In this session, French resource management provider VEOLIA describes how it built out a BeyondCorp model on Google Cloud to reach 169,000 employees across five continents. Register for the session here.

4. Trust Through (Access) Transparency

'When do you access my data, and how will I know?' is a question that troubles every cloud customer who cares about their data, and one that few cloud providers can answer. This talk reviews Google's robust data protection infrastructure, and introduces Google's new Access Transparency product, which gives customers near-real-time oversight of data accesses by Google's administrators. The talk also guides customers through how to audit accesses and mitigate this risk, with examples from customers who have done so successfully. Register for the session here.

5. Google Cloud: Data Protection and Regulatory Compliance

Security in the cloud is much more than encryption and firewalls. If you’re subject to regulations, you often need to demonstrate data protection and compliance with a variety of regulatory standards. In this session, we cover recent trends in the data protection space, such as GDPR, and share tools you can leverage to help address your compliance needs. You'll learn how you can partner with Google to enhance data security and meet global regulatory obligations. You can find a full session description here.

6. Shield Your Cloud with Verifiable Advanced Platform Security

Last but not least, you’ll definitely want to attend this session by Googlers Andrew Honig and Nelly Porter, as they discuss issues facing VM security in the cloud and an interesting new approach to preventing local code from escalating privileges. After attending this session, you’ll understand how we prevent workloads running on Google Cloud Platform from being penetrated by boot malware or firmware rootkits. Register for the session here.

Of course, this is just the tip of the iceberg. Security runs through everything we do at Google Cloud. In addition to these six sessions, there are 31 other breakout sessions dedicated to security, not to mention keynotes and supersessions, hands-on labs, meetups and bootcamps. Don’t delay, register for Next today.

Announcing a new certification from Google Cloud Certified: the Associate Cloud Engineer



Cloud is no longer an emerging technology. Now that businesses large and small are realizing the potential of cloud services, the need to hire individuals who can manage cloud workloads has skyrocketed. Today, we’re launching a new Associate Cloud Engineer certification, designed to address the growing demand for individuals with the foundational cloud skills necessary to deploy applications and maintain cloud projects on Google Cloud Platform (GCP).

The Associate Cloud Engineer certification joins Professional Cloud Architect, which launched in 2016, and Data Engineer, which followed quickly thereafter. These certifications identify individuals with the skills and experience to leverage GCP to overcome complex business challenges. Since the program’s inception, Google Cloud Certified has experienced continual growth, especially this last year when the number of people sitting for our professional certifications grew by 10x.

Because cloud technology affects so many aspects of an organization, IT professionals need to know when and how to use cloud tools in a variety of scenarios, ranging from data analytics to scalability. For example, it's not enough to launch an application in the cloud; Associate Cloud Engineers also ensure that the application scales seamlessly, is properly monitored, and is readily managed by authorized personnel.

Feedback from the beta launch of the Associate Cloud Engineer certification has been great. Morgan Jones, an IT professional, was eager to participate because he sees “the future of succeeding and delivering business value from the cloud is to adopt a multi-cloud strategy. This certification can really help me succeed in the GCP environment."

As an entry point to our professional-level certifications, the Associate Cloud Engineer demonstrates solid working knowledge of GCP products and technologies. “You have to have experience on the GCP Console to do well on this exam. If you haven’t used the platform and you just cram for the exam, you will not do well. The hands-on labs helped me prepare for that,” says Jones.

Partners were a major impetus behind the development of the Associate Cloud Engineer exam, which will help them expand GCP knowledge throughout their organizations and address increasing demand for Google Cloud technologies head-on. Their enthusiastic response signals that the Associate Cloud Engineer will be a catalyst for an array of opportunities for those early in their cloud careers.

"We are really excited for the Associate Cloud Engineer to come to market. It allows us to target multiple role profiles within our company to drive greater knowledge and expertise of Google Cloud technologies across our various managed services offerings."
- Luvlynn McAllister, Rackspace, Director, Sales Strategy & Business Operations

The Associate Cloud Engineer exam is:
  • Two hours long
  • Recommended for IT professionals with six months of GCP experience
  • Available for a registration fee of $125 USD
  • Currently available in English
  • Available at Next ‘18 for registered attendees

The Google Cloud training team offers numerous ways to increase your Google Cloud know-how. Join our webinar on July 10 at 10:30am to hear from members of the team who developed the exam about how this certification differs from others in our program and how to best prepare. If you still want to check your readiness, take the online practice exam at no charge. For more information on suggested training and an exam guide, visit our website. Register for the exam today.

GPUs as a service with Kubernetes Engine are now generally available



[Editor's note: This is one of many posts on enterprise features enabled by Kubernetes Engine 1.10. For the full coverage, follow along here.]

Today, we’re excited to announce the general availability of GPUs in Google Kubernetes Engine, which has become one of the platform’s fastest-growing features since entering beta earlier this year, with core-hours soaring by 10x since the end of 2017.

Together with the GA of Kubernetes Engine 1.10, GPUs make Kubernetes Engine a great fit for enterprise machine learning (ML) workloads. By using GPUs in Kubernetes Engine for your CUDA workloads, you benefit from the massive processing power of GPUs whenever you need it, without having to manage hardware or even VMs. We recently introduced the latest and fastest NVIDIA Tesla V100 to the portfolio, and the P100 is generally available. Last but not least, we also offer the entry-level K80, which is largely responsible for the popularity of GPUs. All our GPU models are available as Preemptible GPUs, a way to reduce costs while benefiting from GPUs in Google Cloud. Check out the latest prices for GPUs here.

As the growth in GPU core-hours indicates, our users are excited about GPUs in Kubernetes Engine. Ocado, the world’s largest online-only grocery retailer, is always looking to apply state-of-the-art machine learning models for Ocado.com customers and Ocado Smart Platform retail partners, and runs the models on preemptible, GPU-accelerated instances on Kubernetes Engine.
“GPU-attached nodes combined with Kubernetes provide a powerful, cost-effective and flexible environment for enterprise-grade machine learning. Ocado chose Kubernetes for its scalability, portability, strong ecosystem and huge community support. It’s lighter, more flexible and easier to maintain compared to a cluster of traditional VMs. It also has great ease-of-use and the ability to attach hardware accelerators such as GPUs and TPUs, providing a huge boost over traditional CPUs.”
— Martin Nikolov, Research Software Engineer, Ocado
GPUs in Kubernetes Engine also have a number of unique features:
  • Node Pools allow your existing cluster to use GPUs whenever you need them.
  • Cluster Autoscaler automatically creates nodes with GPUs whenever pods requesting GPUs are scheduled, and scales down to zero when GPUs are no longer consumed by any active pods.
  • Taints and tolerations ensure that only pods that request GPUs are scheduled on the nodes with GPUs, and prevent pods that do not require GPUs from running on them (see the sketch after this list).
  • Resource quotas let administrators limit resource consumption per namespace in a large cluster shared by multiple users or teams.
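To make that scheduling behavior concrete, here's a minimal sketch that requests a GPU with the official Kubernetes Python client. The image and names are illustrative; the key detail is the nvidia.com/gpu resource limit, which is what triggers GPU scheduling:

```python
from kubernetes import client, config

# Assumes kubectl credentials for a GKE cluster that has a GPU node pool;
# the pod name and image below are illustrative.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cuda-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:9.0-runtime",  # any CUDA-capable image
                command=["nvidia-smi"],
                # Requesting nvidia.com/gpu is what schedules this pod onto
                # a GPU node; GKE adds the matching toleration automatically.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```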
We also heard from you that you need an easy way to understand how your GPU jobs are performing: how busy the GPUs are, how much memory is available, and how much memory is allocated. We’re thrilled to announce that you can now monitor that information natively from the GCP Console. You can also visualize these metrics in Stackdriver.
Fig 1. GPU memory usage and duty cycle 

The general availability of GPUs in Kubernetes Engine represents a lot of hard work behind the scenes, polishing the internals for enterprise workloads. Jiaying Zhang, the technical lead for this general availability, led the Device Plugins effort in Kubernetes 1.10, working closely with the OSS community to understand its needs, identify common requirements, and come up with an execution plan to build a production-ready system.

Try them today

To get started using GPUs in Kubernetes Engine with our free trial of $300 in credits, you’ll need to upgrade your account and apply for GPU quota for the credits to take effect. For a more detailed explanation of Kubernetes Engine with GPUs, for example how to install NVIDIA drivers and how to configure a pod to consume GPUs, check out the documentation.

In addition to GPUs in Kubernetes Engine, Cloud TPUs are also now publicly available in Google Cloud. For example, RiseML uses Cloud TPUs in Kubernetes Engine for a hassle-free machine learning infrastructure that is easy-to-use, highly scalable, and cost-efficient. If you want to be among the first to access Cloud TPUs in Kubernetes Engine, join our early access program today.

Thanks for your feedback on how to shape our roadmap to better serve your needs. Keep the conversation going by connecting with us on the Kubernetes Engine Slack channel.

Cloud TPU now offers preemptible pricing and global availability




Deep neural networks have enabled breakthroughs across a variety of business and research challenges, including translating text between languages, transcribing speech, classifying image content, and mastering the game of Go. Because training and running deep learning models can be extremely computationally demanding, we rely on our custom-built Tensor Processing Units (TPUs) to power several of our major products, including Translate, Photos, Search, Assistant, and Gmail.

Cloud TPUs allow businesses everywhere to transform their own products and services with machine learning, and we’re working hard to make Cloud TPUs as widely available and as affordable as possible. As of today, Cloud TPUs are available in two new regions in Europe and Asia, and we are also introducing preemptible pricing for Cloud TPUs that is 70% lower than the normal price.

Cloud TPUs are available in the United States, Europe, and Asia, and you can get started in minutes via our Quickstart guide.
One Cloud TPU (v2-8) can deliver up to 180 teraflops and includes 64 GB of high-bandwidth memory. The colorful cables link multiple TPU devices together over a custom 2-D mesh network to form Cloud TPU Pods. These accelerators are programmed via TensorFlow and are widely available today on Google Cloud Platform.

Benchmarking Cloud TPU performance-per-dollar


Training a machine learning model is analogous to compiling code: ML training needs to happen fast for engineers, researchers, and data scientists to be productive, and ML training needs to be affordable for models to be trained over and over as a production application is built, deployed, and refined. Key metrics include time-to-accuracy and training cost.

Researchers at Stanford recently hosted an open benchmarking competition called DAWNBench that focused on time-to-accuracy and training cost, and Cloud TPUs won first place in the large-scale ImageNet Training Cost category. On a single Cloud TPU, our open-source AmoebaNet reference model cost only $49.30 to reach the target accuracy, and our open-source ResNet-50 model cost just $58.53. Our TPU Pods also won the ImageNet Training Time category: the same ResNet-50 code running on just half of a TPU pod was nearly six times faster than any non-TPU submission, reaching the target accuracy in approximately 30 minutes!

Although we restricted ourselves to standard algorithms and standard learning regimes for the competition, another DAWNBench submission from fast.ai (3rd place in ImageNet Training Cost, 4th place in ImageNet Training Time) altered the standard ResNet-50 training procedure in two clever ways to achieve faster convergence (GPU implementation here). After DAWNBench was over, we easily applied the same optimizations to our Cloud TPU ResNet-50 implementation. This reduced ResNet-50 training time on a single Cloud TPU from 8.9 hours to 3.5 hours, a 2.5X improvement, which made it possible to train ResNet-50 for just $25 with normal pricing.

Preemptible Cloud TPUs make the Cloud TPU platform even more affordable. You can now train ResNet-50 on ImageNet from scratch for just $7.50. Preemptible Cloud TPUs allow fault-tolerant workloads to run more cost-effectively than ever before; these TPUs behave similarly to Preemptible VMs. And because TensorFlow has built-in support for saving and restoring from checkpoints, deadline-insensitive workloads can easily take advantage of preemptible pricing. This means you can train cutting-edge deep learning models to achieve DAWNBench-level accuracy for less than you might pay for lunch!
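To sketch how checkpointing makes preemptible TPUs practical, here's roughly what a TF 1.8-era TPUEstimator setup looks like. The TPU name, bucket, and toy model are placeholders, not the DAWNBench code:

```python
import tensorflow as tf  # assumes TF 1.8-era tf.contrib.tpu APIs

def model_fn(features, labels, mode, params):
    logits = tf.layers.dense(features["x"], 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.contrib.tpu.CrossShardOptimizer(
        tf.train.GradientDescentOptimizer(0.01))
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.contrib.tpu.TPUEstimatorSpec(mode=mode, loss=loss, train_op=train_op)

def input_fn(params):
    ds = tf.data.Dataset.from_tensor_slices(
        ({"x": tf.random_normal([4096, 64])}, tf.zeros([4096], tf.int32)))
    return ds.repeat().batch(params["batch_size"], drop_remainder=True)

resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu="my-preemptible-tpu")
run_config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir="gs://my-bucket/ckpts",   # checkpoints persist in GCS
    save_checkpoints_steps=500,         # checkpoint every 500 steps
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=500))

estimator = tf.contrib.tpu.TPUEstimator(
    model_fn=model_fn, config=run_config, use_tpu=True, train_batch_size=1024)

# If the preemptible TPU is reclaimed mid-run, simply re-running train()
# resumes from the latest checkpoint in model_dir instead of starting over.
estimator.train(input_fn=input_fn, max_steps=100000)
```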




Select open-source reference models | Normal training cost (TF 1.8) | Preemptible training cost (TF 1.8)
ResNet-50 (with optimizations from fast.ai): image classification | ~$25 | ~$7.50
ResNet-50 (original implementation): image classification | ~$59 | ~$18
AmoebaNet: image classification (model architecture evolved from scratch on TPUs to maximize accuracy) | ~$49 | ~$15
RetinaNet: object detection | ~$40 | ~$12
Transformer: neural machine translation | ~$41 | ~$13
ASR Transformer: speech recognition (transcribe speech to text) | ~$86 | ~$27

Start using Cloud TPUs today

We aim for Google Cloud to be the best place to run all of your machine learning workloads. Cloud TPUs offer great performance-per-dollar for training and batch inference across a variety of machine learning applications, and we also offer top-of-the-line GPUs with recently-improved preemptible pricing.

We’re excited to see what you build! To get started, please check out the Cloud TPU Quickstart, try our open source reference models, and be sure to sign up for a free trial to start with $300 in cloud credits. Finally, we encourage you to watch our Cloud-TPU-related sessions from Google I/O and the TensorFlow Dev Summit: “Effective machine learning with Cloud TPUs” and “Training Performance: A user’s guide to converge faster.”


A datacenter technician scoots past two rows of Cloud TPUs and supporting equipment.

Measure Matters: A New Video Series to Keep You Up to Date on Your Data


Whether you’re a data analyst, marketer, or e-commerce specialist, keeping on top of your data and making informed choices can make a significant impact on your business. With that in mind, the Google Analytics team has introduced a new video series on YouTube: Measure Matters. Hosted by Analytics Advocates Krista Seiden and Louis Gray, the series covers best practices for leveraging our suite of products, rounds up highlights from the larger measurement community, and reviews recent product updates, so you never miss a thing, even with a busy schedule.

Subscribe to our YouTube channel or find our Measure Matters playlist: https://tinyurl.com/measurematters.

Measure Matters kicked off in May with a deep dive into machine learning, where we talked about automatic insights within Google Analytics, and whether the machines were coming for our jobs. (Spoiler alert: they’re not.)

Our second episode covered finding your North Star: trying new approaches and taking risks, but making choices based on data rather than hacking your way through without a clear plan.

The third episode focused on how app developers can literally change the game through mobile app analytics, leveraging Google Analytics for Firebase.

Measure Matters Episode 3: Google Analytics for Firebase


What’s Coming Next

Measure Matters is scheduled to stream live every two weeks, with most events taking place at 10 a.m. Pacific time on Wednesdays. Our next event will take place on Wednesday, June 27th, on the topic of Hearts, Charts and Shopping Carts: how you can evolve your marketing measurement with data. See our playlist for upcoming and past episodes.

How You Can Participate

Measure Matters is not a one-way broadcast. Krista and Louis regularly stream live on YouTube and answer questions taken via YouTube or on Twitter, using the hashtag #measurematters. So send us your questions, ideas, or content you think belongs on our show, and it just may make our next episode.

Happy analyzing!
Posted by Krista Seiden and Louis Gray, Analytics Advocates

GCP arrives in the Nordics with a new region in Finland



Click here for the Finnish version, thank you!

Our sixteenth Google Cloud Platform (GCP) region, located in Finland, is now open for you to build applications and store your data.

The new Finland region, europe-north1, joins the Netherlands, Belgium, London, and Frankfurt in Europe and makes it easier to build highly available, performant applications using resources across those geographies.

Hosting applications in the new region can improve latencies by up to 65% for end-users in the Nordics and by up to 88% for end-users in Eastern Europe, compared to hosting them in the previously closest region. You can visit www.gcping.com to see for yourself how fast the Finland region is from your location.

Services


The new Nordic region has everything you need to build the next great application, including three zones that let you distribute applications and storage across multiple zones to protect against service disruptions.

You can also access our Multi-Regional services in Europe (such as BigQuery) and all the other GCP services via the Google Network, the largest cloud network as measured by number of points of presence. Please visit our Service Specific Terms to get detailed information on our data storage capabilities.

Build sustainably


The new region is located in our existing data center in Hamina. This facility is one of the most advanced and efficient data centers in the Google fleet. Our high-tech cooling system, which uses sea water from the Gulf of Finland, reduces energy use and is the first of its kind anywhere in the world. This means that when you use this region to run your compute workloads, store your data, and develop your applications, you are doing so sustainably.

Hear from our customers


“The road to emission-free and sustainable shipping is a long and challenging one, but thanks to exciting innovation and strong partnerships, Rolls-Royce is well-prepared for the journey. For us being able to train machine learning models to deliver autonomous vessels in the most effective manner is key to success. We see the Google Cloud for Finland launch as a great advantage to speed up our delivery of the project.”
– Karno Tenovuo, Senior Vice President Ship Intelligence, Rolls-Royce

“Being the world's largest producer of renewable diesel refined from waste and residues, as well as being a technologically advanced refiner of high-quality oil products, requires us to take advantage of leading-edge technological possibilities. We have worked together with Google Cloud to accelerate our journey into the digital future. We share the same vision to leave a healthier planet for our children. Running services on an efficient and sustainably operated cloud is important for us. And even better that it is now also available physically in Finland.”
– Tommi Touvila, Chief Information Officer, Neste

“We believe that technology can enhance and improve the lives of billions of people around the world. To do this, we have joined forces with visionary industry leaders such as Google Cloud to provide a platform for our future innovation and growth. We’re seeing tremendous growth in the market for our operations, and it’s essential to select the right platform. The Google Cloud Platform cloud region in Finland stands for innovation.”
– Anssi Rönnemaa, Chief Finance and Commercial Officer, HMD Global

“Digital services are key growth drivers for our renewal of a 108-year-old healthcare company. 27% of our revenue is driven by digital channels, where modern technology is essential. We are moving to a container-based architecture running on GCP at Hamina. Google has a unique position to provide services within Finland. We also highly appreciate the security and environmental values of Google’s cloud operations.”
– Kalle Alppi, Chief Information Officer, Mehiläinen

Partners in the Nordics


Our partners in the Nordics are available to help design and support your deployment, migration and maintenance needs.


"Public cloud services like those provided by Google Cloud help businesses of all sizes be more agile in meeting the changing needs of the digital era—from deploying the latest innovations in machine learning to cost savings in their infrastructure. Google Cloud Platform's new Finland region enables this business optimization and acceleration with the help of cloud-native partners like Nordcloud and we believe Nordic companies will appreciate the opportunity to deploy the value to their best benefit.”
– Jan Kritz, Chief Executive Officer, Nordcloud

Nordic partners include: Accenture, Adapty, AppsPeople, Atea, Avalan Solutions, Berge, Cap10, Cloud2, Cloudpoint, Computas, Crayon, DataCenterFinland, DNA, Devoteam, Doberman, Deloitte, Enfo, Evry, Gapps, Greenbird, Human IT Cloud, IIH Nordic, KnowIT, Koivu Solutions, Lamia, Netlight, Nordcloud, Online Partners, Outfox Intelligence AB, Pilvia, Precis Digital, PwC, Quality of Service IT-Support, Qvik, Skye, Softhouse, Solita, Symfoni Next, Soprasteria, Tieto, Unifoss, Vincit, Wizkids, and Webstep.

If you want to learn more or wish to become a partner, visit our partners page.

Getting started


For additional details on the region, please visit our Finland region page where you’ll get access to free resources, whitepapers, the "Cloud On-Air" on-demand video series and more. Our locations page provides updates on the availability of additional services and regions. Contact us to request access to new regions and help us prioritize what we build next.

Partner Interconnect now generally available



We are happy to announce that Partner Interconnect, launched in beta in April, is now generally available. Partner Interconnect lets you connect your on-premises resources to Google Cloud Platform (GCP) from the partner location of your choice, at a data rate that meets your needs.

With general availability, you can now receive an SLA for Partner Interconnect connections if you use one of the recommended topologies. If you were a beta user with one of those topologies, you will automatically be covered by the SLA. Charges for the service start with GA (see pricing).

Partner Interconnect is ideal if you want physical connectivity to your GCP resources but cannot connect at one of Google’s peering locations, or if you want to connect with an existing service provider. If you need help understanding the connection options, the information here can help.

In this post, we'll walk through how to start using Partner Interconnect, from choosing the partner that works best for you through deploying and using your interconnect.


Choosing a partner


If you already have a service provider partner for network connectivity, you can check the list of supported service providers to see if they offer Partner Interconnect service. If not, you can select a partner from the list based on your data center location.

Some critical factors to consider are:
  • Make sure the partner can offer the availability and latency you need between your on-premises network and their network.
  • Check whether the partner offers Layer 2 connectivity, Layer 3 connectivity, or both. If you choose a Layer 2 partner, you have to configure and establish a BGP session between your Cloud Routers and on-premises routers for each VLAN attachment that you create. If you choose a Layer 3 partner, they will take care of the BGP configuration.
  • Review the recommended topologies for production-level and non-critical applications. Google provides a 99.99% (with Global Routing) or 99.9% availability SLA, which applies only to the connectivity between your VPC network and the partner's network.

Bandwidth options and pricing


Partner Interconnect provides flexible options for bandwidth between 50 Mbps and 10 Gbps. Google charges on a monthly basis for VLAN attachments depending on capacity and egress traffic (see options and pricing).

Setting up Partner Interconnect VLAN attachments


Once you’ve established network connectivity with a partner, and they have set up interconnects with Google, you can set up and activate VLAN attachments using these steps (a code sketch follows):
  1. Create VLAN attachments.
  2. Request provisioning from the partner.
  3. If you have a Layer 2 partner, complete the BGP configuration and then activate the attachments for traffic to start. If you have a Layer 3 partner, simply activate the attachments, or use the pre-activation option.
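Here's a rough sketch of steps 1 and 3 using the Compute Engine API's Python client. The project, region, router, and attachment names are placeholders; step 2 happens in your partner's portal, using the pairing key returned by the new attachment:

```python
from googleapiclient import discovery

# Uses Application Default Credentials; names below are placeholders.
compute = discovery.build("compute", "v1")
project, region = "my-project", "us-central1"

# Step 1: create a PARTNER VLAN attachment tied to an existing Cloud Router.
attachment = {
    "name": "my-attachment-1",
    "type": "PARTNER",
    "router": f"projects/{project}/regions/{region}/routers/my-router",
    "edgeAvailabilityDomain": "AVAILABILITY_DOMAIN_1",
}
compute.interconnectAttachments().insert(
    project=project, region=region, body=attachment).execute()

# Step 2 happens out of band: hand the attachment's pairing key to the partner.
pairing_key = compute.interconnectAttachments().get(
    project=project, region=region,
    interconnectAttachment="my-attachment-1").execute()["pairingKey"]

# Step 3: once the partner has provisioned their side, activate the attachment.
compute.interconnectAttachments().patch(
    project=project, region=region,
    interconnectAttachment="my-attachment-1",
    body={"adminEnabled": True}).execute()
```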
With Partner Interconnect, you can connect to GCP where and how you want to. Follow these steps to easily access your GCP compute resources from your on-premises network.

Try full-stack monitoring with Stackdriver on us



In advance of the new simplified Stackdriver pricing that will go into effect on June 30, we want to make sure everyone gets a chance to try Stackdriver. That’s why we’ve decided to offer the full power of Stackdriver, including premium monitoring, logging and application performance management (APM), to all customers—new and existing—for free until the new pricing goes into effect. This offer will be available starting June 18.

Stackdriver, our full-stack logging and monitoring tool, collects logs and metrics, as well as other data from your cloud apps and other sources, then generates useful dashboards, charts and alerts to let you act on information as soon as you get it. Here’s what’s included when you try Stackdriver:
  • Out-of-the-box observability across the entire Google Cloud Platform (GCP) and Amazon Web Services (AWS) services you use
  • Platform, system, application and custom metrics on demand with Metrics Explorer (see the custom-metric sketch after this list)
  • Uptime checks to monitor the availability of the internet-facing endpoints you depend on
  • Alerting policies to let you know when something is wrong. Alerting and notification options, previously available only on the premium tier, are now available for free during this limited time
  • Access to logging and APM features like logs-based metrics, using Trace to understand application health, debugging live with debugger and more
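As a small example of custom metrics, the sketch below writes a single data point with the 2018-era Stackdriver Monitoring Python client; the project ID and metric name are invented for illustration:

```python
import time

from google.cloud import monitoring_v3  # 2018-era google-cloud-monitoring client

client = monitoring_v3.MetricServiceClient()
project_name = client.project_path("my-project")  # hypothetical project ID

series = monitoring_v3.types.TimeSeries()
series.metric.type = "custom.googleapis.com/store/checkout_latency_ms"
series.resource.type = "global"

point = series.points.add()
point.value.double_value = 142.0                  # one observed latency sample
point.interval.end_time.seconds = int(time.time())

# Writes the point; it then shows up in Metrics Explorer like any other metric.
client.create_time_series(project_name, [series])
```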
Want to estimate your usage once the new pricing goes into effect? Check out our earlier blog post on viewing and managing your costs. You’ll see the various ways you can estimate usage to plan for the best use of Stackdriver monitoring in your environment. And if you are not already a Stackdriver user, you can sign up to try Stackdriver now!


Behind the scenes with the Dragon Ball Legends GCP backend



Dragon Ball Legends, a new mobile game from Bandai Namco Entertainment (BNE), is based on its popular Dragon Ball Z franchise, and is rolling out to gamers around the world as we speak. But planning the cloud infrastructure to power the game dates back to February 2017, when BNE approached Google Cloud to talk about the interesting challenges they were facing and how we could help.

Based on their anticipated demand, BNE had three ambitious requirements for their game:
  1. Extreme scalability. The game would be launched globally, so it needed a backend that could scale to millions of players and still perform well.
  2. Global network. Because the game allows real-time player versus player battles, it needs a reliable and low-latency network across regions.
  3. Real-time data analytics. The game is designed to evolve with players in real time, so it was critical to have a data analytics pipeline that streams data to a data warehouse, letting the operations team measure how people play and adjust the game on the fly.
All three of these are areas where we have a lot of experience. Google has multiple global services with more than a billion users, and we use the data those services generate to improve them over time. And because Google Cloud Platform (GCP) runs on the same infrastructure as these Google services, GCP customers can take advantage of the same enabling technologies.

Let’s take a look at how BNE worked with Google Cloud to build the infrastructure for Dragon Ball Legends.


Challenge #1: Extreme scalability

MySQL is used extensively by gaming companies in Japan because engineers are used to working with relational databases with schemas, SQL queries and strong consistency. This simplifies things on the application side, which doesn’t have to work around database limitations like eventual consistency or enforce schemas itself. MySQL is also widely used outside gaming, and most backend engineers already have strong experience with it.

While MySQL offers many advantages, it has one big limitation: scalability. As a scale-up database, increasing MySQL performance means adding more CPU, RAM or disk. And when a single instance of MySQL can’t handle the load anymore, you can divide the load by sharding: splitting users into groups and assigning them to multiple independent instances of MySQL. Sharding has a number of drawbacks, however. Most gaming developers calculate the number of shards they’ll need before the game launches, since resharding is labor-intensive and error-prone. That leads gaming companies to overprovision the database to handle more players than they expect. If the game is as popular as expected, everything is fine. But what if the game is a runaway hit and exceeds the anticipated demand? And what about the long tail, as active players gradually dwindle? What if it’s an out-and-out flop? MySQL sharding is not dynamically scalable, and resizing it requires maintenance and carries risk.

In an ideal world, databases can scale in and out without downtime while offering the advantages of a relational database. When we first heard that BNE was considering MySQL sharding to handle the massive anticipated traffic for Dragon Ball Legends, we suggested they consider Cloud Spanner instead.


Why Cloud Spanner?

Cloud Spanner is a fully managed relational database that offers horizontal scalability and high availability while keeping strong consistency with a schema that is similar to MySQL’s. Better yet, as a managed service, it’s looked after by Google SREs, removing database maintenance and minimizing the risk of downtime. We thought Cloud Spanner would be able to help BNE make their game global.


Evaluation to implementation

Before adopting a new technology, engineers should always test it to confirm its expected performance in a real-world scenario. Before replacing MySQL, BNE created a new Cloud Spanner instance in GCP, including a few tables with a schema similar to what they used in MySQL. Since their backend developers were writing in Scala, they chose the Java client library for Cloud Spanner and wrote some sample code to load-test Cloud Spanner and see if it could keep up with their queries per second (QPS) requirement for writes: around 30,000 QPS at peak. Working with our customer engineers and the Cloud Spanner engineering team, they met this goal easily. They even developed their own DML (Data Manipulation Language) wrapper to write SQL commands like INSERT, UPDATE and DELETE.
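BNE's load tests used the Java client, but to give a flavor of what Spanner writes look like, here's a minimal sketch with the Python client. The instance, database, table and columns are invented for illustration; the client's mutation API gives you INSERT/UPDATE-style writes without hand-written DML:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("game-instance").database("game-db")

def save_player(transaction):
    # Mutations give strongly consistent INSERT/UPDATE-style writes
    # without writing SQL DML by hand.
    transaction.insert_or_update(
        table="Players",
        columns=("PlayerId", "Name", "PowerLevel"),
        values=[("p-0001", "Kakarot", 9001)],
    )

# run_in_transaction retries automatically on transient aborts.
database.run_in_transaction(save_player)
```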


Game release

With the proof of concept behind them, they could start their implementation. Based on the expected daily active users (DAU), BNE calculated how many Cloud Spanner nodes they needed—enough for the 3 million pre-registered players they were expecting. To prepare the release, they organized two closed beta tests to validate their backend, and didn’t have a single issue with the database! In the end, over 3 million participants worldwide pre-registered for Dragon Ball Legends, and even with this huge number, the official game release went flawlessly.

Long story short, BNE can focus on improving the game rather than spending time operating their databases.


Challenge #2: Global network

Let’s now talk about BNE’s second challenge: building a global real-time player-vs-player (PvP) game. BNE’s goal for Dragon Ball Legends was to let all its players play against one another, anywhere in the world. If you know anything about networking, you understand the challenge around latency: round-trip time (RTT) between Tokyo and San Francisco, for example, averages around 100 ms. To address that, they decided to divide every game second into 250 ms intervals. So while the game looks real-time to users, it’s actually a really fast turn-based game at its core (you can read more about the architecture here). And while 250 ms might sound like plenty of room for latency, it’s extremely hard to predict latency when communicating across the internet.
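As a toy illustration of that idea (not BNE's actual code), a fixed 250 ms turn window could be driven like this:

```python
import time

TURN_SECONDS = 0.25  # each game second is divided into four turn windows

def run_turns(resolve_turn):
    """Collect both players' inputs per window, then resolve them together."""
    next_deadline = time.monotonic()
    while True:
        resolve_turn()  # apply whatever inputs arrived during this window
        next_deadline += TURN_SECONDS
        # Sleep off the remainder of the window; skipped if we're running late.
        time.sleep(max(0.0, next_deadline - time.monotonic()))
```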


Why Cloud Networking?

Consider what it looks like for a game client to access the game server on GCP over the public internet: the number of hops can vary from one connection to the next, so PvP play can sometimes feel fast, sometimes slow.

One of the main reasons BNE decided to use GCP for the Dragon Ball Legends backend was Google’s dedicated network. With GCP, once the game client reaches one of the hundreds of GCP points of presence (POPs) around the world, all traffic stays on the Google dedicated network. That means no unpredictable hops, for predictable and minimal latency.


Taking advantage of the Google Cloud Network

Gaming companies usually implement PvP by connecting two players directly or through a dedicated game server. Combat games that require low latency between players generally prefer P2P communication. When two players are geographically close, P2P works very well, but it’s often unreliable across regions (some carriers even block P2P protocols). So when two players from different continents face off, they first try to communicate through P2P, and if that fails, they fail over to an open-source STUN/TURN server called coturn, which acts as a relay between the two players. That way, cross-continent battles leverage the low-latency, reliable Google network as much as possible.


Challenge #3: Real-time data analytics

BNE’s last challenge was real-time data analytics. BNE wanted to offer the best user experience to their fans, and one way to do that is through live game operations, or LiveOps, in which operators make constant changes to the game so it always feels fresh. But to understand players’ needs, they needed data, typically logs of users’ actions. And if they could get this data in near real time, they could then decide what changes to apply to the game to increase users’ satisfaction and engagement.

To gather this data, BNE used a combination of Cloud Pub/Sub and Cloud Dataflow to transform users’ data in real time and insert it into BigQuery:
  • Cloud Pub/Sub offers a globally reliable messaging system that buffers the logs until they can be handled by Cloud Dataflow.
  • Cloud Dataflow is a fully managed parallel processing service that lets you execute ETL in real-time and in parallel.
  • BigQuery is the fully managed data warehouse where all the game logs are stored. Since BigQuery offers petabyte-scale storage, scaling was not a concern. Thanks to massively parallel query processing, BNE can scan terabytes of logs and get an answer in a few seconds.
This system lets a game producer visualize players’ behavior in near real time and decide what new features to bring to the game, or what to change inside it, to satisfy their fans.
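A skeletal version of such a pipeline using the Apache Beam Python SDK might look like this; the topic, table, and schema are illustrative, not BNE's:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions(runner="DataflowRunner", project="my-project",
                          temp_location="gs://my-bucket/tmp")
options.view_as(StandardOptions).streaming = True  # Pub/Sub sources are unbounded

with beam.Pipeline(options=options) as p:
    (p
     | "ReadActions" >> beam.io.ReadFromPubSub(
           topic="projects/my-project/topics/player-actions")
     | "Parse" >> beam.Map(json.loads)          # each message is a JSON log line
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           "my-project:analytics.player_actions",
           schema="player_id:STRING,action:STRING,ts:TIMESTAMP"))
```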


Takeaways

Using Cloud Spanner, BNE could focus on developing an amazing game instead of spending time on database capacity planning and scaling. Operationally, a fully managed, scalable database drastically reduced risks related to human error as well as operational overhead.

Using Cloud Networking, they leveraged Google’s dedicated network to offer the best user experience to their fans, even when fighting across regions.

And finally, using Google’s analytics stack (Cloud Pub/Sub, Cloud Dataflow and BigQuery), BNE was able to analyze players’ behavior in near real time and decide how to adjust the game to make their fans even happier!

If you want to hear more details about how they evaluated and adopted Cloud Spanner for their game, please join them at their Google Cloud NEXT’18 session in San Francisco.