Tag Archives: Infrastructure

See Our Latest Data Center Murals

Last May, we announced the Data Center Mural Project, a partnership with artists to bring a bit of the magic from the inside of our data centers to the outside. Two artists in Oklahoma and Belgium created murals that celebrate both the work that happens inside the buildings and the communities where the data centers reside.

Today, we’re excited to unveil our next two data center murals.

In Council Bluffs, Iowa, painter Gary Kelley’s mural shows how Council Bluffs has served as a hub of information for centuries. Ideas have always flowed through the region, from Lewis and Clark to the Transcontinental Railroad, and now the data center in Council Bluffs is helping bring the internet to people all over the world. 

In Dublin, Ireland, illustrator Fuchsia MacAree was inspired by how Ireland’s unique climate and fresh air, rather than mechanical cooling, regulates the temperature of Google’s data center. She’s created a series of whimsical murals depicting a windy day in Dublin, including scenes from local landmarks like Grand Canal Square, Phoenix Park and Moore Street Market.

Check out photos and videos of all the data center murals at g.co/datacentermurals.

Source: Google Cloud


Google Cloud Platform: your Next home in the cloud



San Francisco: Today at Google Cloud Next ’17, we’re thrilled to announce new Google Cloud Platform (GCP) products, technologies and services that will help you imagine, build and run the next generation of cloud applications on our platform.

Bring your code to App Engine, we’ll handle the rest

In 2008, we launched Google App Engine, a pioneering serverless runtime environment that lets developers build web apps, APIs and mobile backends at Google scale and speed. For nearly 10 years, some of the most innovative companies have built applications that serve their users all over the world on top of App Engine. Today, we’re excited to announce the general availability of a major expansion of App Engine, centered on openness and developer choice, that keeps App Engine’s original promise to developers: bring your code, we’ll handle the rest.

App Engine now supports Node.js, Ruby, Java 8, Python 2.7 or 3.5, Go 1.8, plus PHP 7.1 and .NET Core, both in beta, all backed by App Engine’s 99.95% SLA. Our managed runtimes make it easy to start with your favorite languages and use the open source libraries and packages of your choice. Need something different from what’s available out of the box? Break the glass and go beyond our managed runtimes by supplying your own Docker container, which makes it simple to run any language, library or framework on App Engine.

The future of cloud is open: take your app to go by having App Engine generate a Docker container containing your app, then deploy it to any container-based environment, on or off GCP. App Engine gives developers an open platform while still providing a fully managed environment where developers focus only on code and on their users.
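To make “bring your code” concrete, here’s a minimal sketch of a Python app that the managed runtime can serve. Flask is an assumed choice for illustration, not a requirement; any of the supported languages and open source packages above would do.

```python
# main.py -- a minimal sketch, assuming Flask as the web framework.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from App Engine!"

if __name__ == "__main__":
    # For local development only; in production, App Engine runs the app
    # behind its own serving infrastructure.
    app.run(host="127.0.0.1", port=8080)
```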


Cloud Functions public beta at your service

Up one level from fully managed applications, we’re launching Google Cloud Functions into public beta. Cloud Functions is a completely serverless environment to build and connect cloud services without having to manage infrastructure. It’s the smallest unit of compute offered by GCP and is able to spin up a single function and spin it back down instantly. Because of this, billing occurs only while the function is executing, metered to the nearest one hundred milliseconds.

Cloud Functions is a great way to build lightweight backends, and to extend the functionality of existing services. For example, Cloud Functions can respond to file changes in Google Cloud Storage or incoming Google Cloud Pub/Sub messages, perform lightweight data processing/ETL jobs or provide a layer of logic to respond to webhooks emitted by any event on the internet. Developers can securely invoke Cloud Functions directly over HTTP right out of the box without the need for any add-on services.
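To make the webhook use case concrete, here’s a minimal sketch of an HTTP-triggered function. Cloud Functions entered beta with a Node.js runtime; purely to keep this post’s examples in one language, the sketch below shows the equivalent shape in Python (as supported by the later Python runtime), with an invented function name and payload.

```python
# A minimal sketch of an HTTP-triggered function. The function name and
# payload fields are hypothetical, for illustration only.
def handle_webhook(request):
    """Responds to an HTTP request; `request` is a Flask request object."""
    payload = request.get_json(silent=True) or {}
    name = payload.get("name", "world")
    return f"Hello, {name}!"
```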

Cloud Functions is also a great option for mobile developers using Firebase, allowing them to build backends integrated with the Firebase platform. Cloud Functions for Firebase handles events emitted from the Firebase Realtime Database, Firebase Authentication and Firebase Analytics.

Growing the Google BigQuery universe: introducing BigQuery Data Transfer Service

Since our earliest days, customers have turned to Google to promote their advertising messages around the world, at a scale that was previously unimaginable. Today, those same customers want to use BigQuery, our powerful data analytics service, to better understand how users interact with those campaigns. To that end, we’ve developed deeper integration between Google and GCP with the public beta of the BigQuery Data Transfer Service, which automates data movement from select Google applications directly into BigQuery. With BigQuery Data Transfer Service, marketing and business analysts can easily export data from AdWords, DoubleClick and YouTube directly into BigQuery, making it available for immediate analysis and visualization using the extensive set of tools in the BigQuery ecosystem.
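Once a transfer lands, the data is queryable like any other BigQuery table. Here’s a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, table and column names are placeholders, not the schema the transfer service actually creates.

```python
# Querying transferred campaign data -- the table and column names below
# are hypothetical placeholders, not the transfer service's real schema.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT campaign_id, SUM(clicks) AS total_clicks
    FROM `my_project.adwords_transfer.campaign_stats`
    GROUP BY campaign_id
    ORDER BY total_clicks DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.campaign_id, row.total_clicks)
```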

Slashing data preparation time with Google Cloud Dataprep

Our goal is to make it easy to import data into BigQuery while keeping it secure. Google Cloud Dataprep is a new serverless, browser-based service that can dramatically cut the time it takes to prepare data for analysis, which represents about 80% of the work that data scientists do. It intelligently connects to your data source, identifies data types, identifies anomalies and suggests data transformations. Data scientists can then visualize their data schemas until they’re happy with the proposed data transformations. Dataprep then creates a data pipeline in Google Cloud Dataflow, cleans the data and exports it to BigQuery or other destinations. In other words, you can now prepare structured and unstructured data for analysis with clicks, not code. For more information on Dataprep, apply to be part of the private beta. You’ll also find more news about our latest database and data analytics capabilities here and here.

Hello, (more) world

Not only are we working hard on bringing you new products and capabilities, we also want your users to be able to access them quickly and securely wherever they may be. That’s why we’re announcing three new Google Cloud Platform regions: California, Montreal and the Netherlands. These will bring the total number of Google Cloud regions from six today to more than 17 locations in the future. The new regions will deliver lower latency for customers in adjacent geographic areas, increased scalability and more disaster recovery options. Like other Google Cloud regions, they will feature a minimum of three zones, benefit from Google’s global, private fiber network and offer a complement of GCP services.

Supercharging our infrastructure . . .

Customers run demanding workloads on GCP, and we're constantly striving to improve the performance of our VMs. For instance, we were honored to be the first public cloud provider to run Intel Skylake, a next-generation Xeon processor that delivers significant enhancements for compute-heavy workloads, alongside a larger range of VM memory and CPU options.

We’re also doubling the number of vCPUs you can run in an instance from 32 to 64 and now offering up to 416GB of memory, which customers have asked us for as they move large enterprise applications to Google Cloud. Meanwhile, we recently began offering GPUs, which provide substantial performance improvements to parallel workloads like training machine learning models.

To continually unlock new energy sources, Schlumberger collects large quantities of data to build detailed subsurface earth models based on acoustic measurements, and GCP’s compute infrastructure has the characteristics to match Schlumberger's need to turn this data into insights. High-performance scientific computing is integral to its business, so GCP's flexibility is critical.

Schlumberger can mix and match GPUs and CPUs and dynamically create different shapes and types of virtual machines, choosing memory and storage options on demand.

"We are now leveraging the strengths offered by cloud computation stacks to bring our data processing to the next level. Ashok Belani, Executive Vice President Technology, Schlumberger

. . . without supercharging our prices

We aim to keep costs low. Today we announced Committed Use Discounts that provide up to 57% off the list price on Google Compute Engine, in exchange for a one- or three-year purchase commitment. Committed Use Discounts are based on the total amount of CPU and RAM you purchase, and give you the flexibility to use different instance and machine types; they apply automatically, even if you change instance types (or sizes). There are no upfront costs with Committed Use Discounts, and they are billed monthly. What’s more, we automatically apply Sustained Use Discounts to any additional usage above a commitment.
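To see roughly how the math works, here’s a small sketch. The hourly list price is an invented figure for illustration, not an actual Compute Engine rate.

```python
# Rough arithmetic for a committed use discount. The list price below is
# an assumed, illustrative number, not a real Compute Engine rate.
list_price_per_hour = 0.10     # assumed hourly price for a vCPU/RAM bundle
committed_discount = 0.57      # up to 57% off list, per the announcement
hours_per_month = 730          # average hours in a month

on_demand = list_price_per_hour * hours_per_month
committed = on_demand * (1 - committed_discount)
print(f"on-demand: ${on_demand:.2f}/mo, committed: ${committed:.2f}/mo")
```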

We're also dropping prices for Compute Engine. The specific cuts vary by region. Customers in the United States will see a 5% price drop; customers in Europe will see a 4.9% drop and customers using our Tokyo region an 8% drop.

Then there’s our improved Free Tier. First, we’ve extended the free trial from 60 days to 12 months, allowing you to use your $300 credit across all GCP services and APIs, at your own pace and on your own schedule. Second, we’re introducing new Always Free products: non-expiring usage limits that you can use to test and develop applications at no cost. New additions include Compute Engine, Cloud Pub/Sub, Google Cloud Storage and Cloud Functions, bringing the number of Always Free products up to 15 and broadening the horizons for developers getting started on GCP. Visit the Google Cloud Platform Free Tier page today for details, terms and eligibility, and to sign up.

We'll be diving into all of these product announcements in much more detail in the coming days, so stay tuned!

Google Cloud Platform is the first cloud provider to offer Intel Skylake



I’m excited to announce that Google Cloud Platform (GCP) is the first cloud provider to offer the next generation Intel Xeon processor, codenamed Skylake.

Customers across a range of industries, including healthcare, media and entertainment, and financial services, ask for the best performance and efficiency for their high-performance compute workloads. With Skylake processors, GCP customers are the first to benefit from the next level of performance.

Skylake includes Intel Advanced Vector Extensions (AVX-512), which make it ideal for scientific modeling, genomic research, 3D rendering, data analytics and engineering simulations. When compared to previous generations, Skylake’s AVX-512 doubles the floating-point performance for the heaviest calculations.
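If you want to confirm that a VM is running on a host with AVX-512 support, one simple check on Linux is to look for the avx512f flag in the CPU feature list:

```python
# A quick check for AVX-512 on a Linux VM: look for the avx512f flag
# (the AVX-512 Foundation instructions) in /proc/cpuinfo.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("AVX-512F available:", "avx512f" in flags)
```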

We optimized Skylake for Google Compute Engine’s complete family of VMs (standard, highmem, highcpu and Custom Machine Types) to help bring the next generation of high performance compute instances to everyone.

"Google and Intel have had a long-standing engineering partnership working on data center innovation. We're happy to see the latest Intel Xeon technology now available on Google Cloud infrastructure. This technology delivers significant enhancements for compute-intensive workloads, efficiently accelerating data analytics that businesses depend on for operations and growth," said Diane Bryant, Intel Executive Vice President and GM of the Data Center Group.
Skylake processors are available in five GCP regions: Western US, Eastern US, Central US, Western Europe and Eastern Asia Pacific. Sign up here to take advantage of the new Skylake processors.

You can learn more about Skylake for Google Compute Engine and see it in action at Google Cloud NEXT ’17 in San Francisco on March 8-10. Register today!

Top 12 Google Cloud Platform posts of 2016


From product news to behind-the-scenes stories to tips and tricks, we covered a lot of ground on the Google Cloud Platform (GCP) blog this year. Here are the most popular posts from 2016.

  1. Google supercharges machine learning tasks with TPU custom chip - A look inside our custom ASIC built specifically for machine learning. This chip fast-forwards technology seven years into the future. 
    Tensor Processing Unit board
  2. Bringing Pokémon GO to life - Niantic’s augmented reality game uses more than a dozen Google Cloud services to delight and physically exert millions of Pokémon chasers across the globe.


  3. New undersea cable expands capacity for Google APAC customers and users - Together with Facebook, Pacific Light Data Communication and TE SubCom, we’re building the first direct submarine cable system between Los Angeles and Hong Kong.
  4. Introducing Cloud Natural Language API, Speech API open beta and our West Coast Region expansion - Now anyone can use machine learning models to process unstructured data or to convert speech to text. We also announced the opening of our Oregon Cloud Region (us-west1).


  5. Google to acquire Apigee - Apigee, an API management provider, helps developers integrate with outside apps and services. (Our acquisition of Orbitera, a platform for buying and selling cloud-based software, also made big news this year.)


  6. Top 5 GCP NEXT breakout sessions on YouTube (so far) - From Site Reliability Engineering (SRE) and container management to building smart apps and analyzing 25 billion stock market events in an hour, Google presenters kept the NEXT reel rolling. (Don’t forget to sign up for Google Cloud Next 2017, which is just around the corner!)


  7. Advancing enterprise database workloads on Google Cloud Platform - Announcing that our fully managed database services Cloud SQL, Cloud Bigtable and Cloud Datastore are all generally available, plus Microsoft SQL Server images for Google Compute Engine.


  8. Google Cloud machine learning family grows with new API, editions and pricing - The new Cloud Jobs API makes it easier to fill open positions, and GPUs spike compute power for certain jobs. Also included: custom TPUs in Cloud Vision API, Cloud Translation API premium and general availability of Cloud Natural Language API.


  9. Google Cloud Platform sets a course for new horizons - In one day, we announced eight new Google Cloud regions, BigQuery support for Standard SQL and Customer Reliability Engineering (CRE), a support model in which Google engineers work directly with customer operations teams.


  10. Finding Pete’s Dragon with Cloud Vision API - Learn how Disney used machine learning to create a “digital experience” that lets kids search for Pete’s friend Elliot on their mobile and desktop screens.
  11. Top 10 GCP sessions from Google I/O 2016 - How do you develop a Node.js backend for an iOS and Android based game? What about a real-time game with Firebase? How do you build a smart RasPI bot with Cloud Vision API? You'll find the answers to these and many other burning questions.


  12. Spotify chooses Google Cloud Platform to power its data infrastructure - As Spotify’s user base grew to more than 75 million, it moved its backend from a homegrown infrastructure to a scalable and reliable public cloud.

Thank you for staying up to speed on GCP happenings on our blog. We look forward to much more activity in 2017, and invite you to join in on the action if you haven't already. Happy holidays!

Can you run a data center without waste? We now do in Singapore and Taiwan

Did you know that Singapore is projected to run out of landfill space by 2035? According to the Singaporean government, the Semakau landfill receives 200,000 tons of solid waste and ash every year. That’s a lot of trash – equivalent to the weight of 18 Eiffel Towers, 25,000 elephants or 100,000 houses.

Today, we’re excited to announce that none of that waste comes from our data center here in Singapore, and none of the waste in Taiwan’s landfills comes from our data center there. That’s because both data centers have reached a 100% landfill diversion rate, in line with a commitment we’ve made to achieve “zero waste to landfill” for our data centers globally.

This zero waste to landfill effort is part of a broader goal we have at Google to weave circular economy principles into everything we do. That means instead of using raw resources (timber and ore, for example) to create new products, we keep materials in circulation for multiple uses, whether they are maintained, reused, refurbished or recycled.

So how do we accomplish this at our data centers here in Asia, where the servers that help millions of people across the region search, keep in touch over Gmail and stream millions of hours of YouTube a day need constant upgrading and maintenance?

To start, before we buy any new equipment or materials, we look for ways to reuse what we already have. Last year, more than half of the components we used for machine upgrades came from refurbished inventory. Of the remaining equipment, we resold most into secondary markets for reuse by other organizations, and we recycled the small percentage of hardware that couldn’t be reused.

That covers the machines, but what about everything else? To reduce daily waste, we encourage Googlers to be environmentally conscious. We make recycling easy by placing waste sorting bins like the ones below at strategic locations throughout our facilities.

Sorting cans in Singapore on top, and food and other waste sorting bins in Taiwan on bottom

For the small amount of waste that is still produced locally, we use our own disposal systems, such as the trash compactor at our facility in Singapore.


In addition to our two facilities in Asia, four of our other data centers in Europe and the U.S. have achieved 100% landfill diversion of all waste to date -- six of our data centers in all, nearly half. And we’re committed to achieving zero waste at the rest of our data centers soon. As my colleague Jim Miller observed, it’s just the kind of challenge that excites us.

Six Google data centers are diverting 100% of waste from landfill

Sustainability doesn’t end with a really low PUE for our data centers. Sustainability is an important business practice we strive to incorporate into all areas of our operations. A key part of this is how resources are managed. Here we define resources as the “things” that make up our data centers—both the buildings themselves and all the stuff inside. That includes the waste generated at a data center—it’s a resource too. The more material we can reduce and use sustainably, the more effective and efficient our operations will be.

Over the past few years, we’ve started focusing downstream, on the waste our operations generate. We’ve been working toward zero waste to landfill at our facilities, as well as reducing the amount of waste we generate in the first place. Today, we are announcing a new commitment to achieve Zero Waste to Landfill for our global data center operations.

At Google, Zero Waste to Landfill means that when waste leaves our data centers, none of it goes to a landfill—100 percent is diverted to a more sustainable pathway, with no more than 10% of it going to a waste-to-energy facility, unless waste-to-energy can be proved more valuable than alternative diversion paths. Our approach is based on the standard created by UL Environment, with whom we partnered to ensure that the guidelines we created for our facilities align with how UL defines and monitors the process.

Six of our 14 sites are achieving 100 percent diversion rates. Globally, across our data center operations, we are diverting at least 86 percent of waste away from landfills. At our operating data centers in Europe and APAC we have reached 100 percent diversion from landfill, which currently includes a contribution from waste-to-energy of greater than 10 percent. These data centers are in Dublin, Ireland; Hamina, Finland; St Ghislain, Belgium; Changhua County, Taiwan; and Singapore. As we continue to implement new diversion strategies and ways to design waste out altogether, that waste-to-energy percentage will decrease.

Our data center in Mayes County, Oklahoma is our first Google data center to reach Zero Waste to Landfill.

So how did we get here, and where have we had big successes? A few themes stand out. First, find projects that do double duty—those that not only reduce or divert waste, but also have an added benefit, like energy savings or improved process efficiency. For example, our Mayes County data center has deployed compactors to help manage waste. Not only do they help divert waste more effectively, they also give us accurate weight data for tracking, reduce the number of pick-ups our vendor has to make (saving us and them time and money) and keep the site cleaner overall (reducing how much janitorial work is needed).

Second, sometimes you don’t have to eliminate a waste stream or find a new diversion pathway to reduce the amount of waste; you can also extend a material’s life—then you’re buying less and disposing of less. The same concepts we apply to server management, we apply to our maintenance operations to keep the data centers up and running.

Third, expect the unexpected: waste streams do not stay the same; they change and evolve over time depending on your operations. Be prepared for random new waste products and be flexible. Frequently the last 10 to 20 percent of waste diversion is the hardest to solve, but understanding these processes is critical to success.

We’ve learned a lot along this journey and will continue to learn more—the effort certainly has not been wasteful. Zero waste to landfill requires careful attention to the types of materials you’re generating and a deep understanding of your resource pathways. All these learnings allow us to keep pushing toward zero waste to landfill, and also to start looking upstream to add circular economy practices into our operations. Zero waste to landfill is just the first step in a long process to sustainably manage our resources throughout the entire lifecycle of our data centers.

DeepMind AI reduces energy used for cooling Google data centers by 40%

From smartphone assistants to image recognition and translation, machine learning already helps us in our everyday lives. But it can also help us to tackle some of the world’s most challenging physical problems -- such as energy consumption.  Large-scale commercial and industrial systems like data centers consume a lot of energy, and while much has been done to stem the growth of energy use, there remains a lot more to do given the world’s increasing need for computing power.

Reducing energy usage has been a major focus for us over the past 10 years: we have built our own super-efficient servers at Google, invented more efficient ways to cool our data centers and invested heavily in green energy sources, with the goal of being powered 100 percent by renewable energy. Compared to five years ago, we now get around 3.5 times the computing power out of the same amount of energy, and we continue to make many improvements each year.

Major breakthroughs, however, are few and far between -- which is why we are excited to share that by applying DeepMind’s machine learning to our own Google data centers, we’ve managed to reduce the amount of energy we use for cooling by up to 40 percent. In any large scale energy-consuming environment, this would be a huge improvement. Given how sophisticated Google’s data centers are already, it’s a phenomenal step forward.

The implications are significant for Google’s data centers, given the technology’s potential to greatly improve energy efficiency and reduce emissions overall. It will also help other companies that run on Google’s cloud to improve their own energy efficiency. While Google is only one of many data center operators in the world, many operators are not powered by renewable energy as we are. Every improvement in data center efficiency reduces total emissions into our environment, and with technology like DeepMind’s, we can use machine learning to consume less energy and help address one of the biggest challenges of all -- climate change.

One of the primary sources of energy use in the data center environment is cooling. Just as your laptop generates a lot of heat, our data centers -- which contain servers powering Google Search, Gmail, YouTube, etc. -- also generate a lot of heat that must be removed to keep the servers running. This cooling is typically accomplished via large industrial equipment such as pumps, chillers and cooling towers. However, dynamic environments like data centers make it difficult to operate optimally for several reasons: 

  1. The equipment, how we operate that equipment, and the environment interact with each other in complex, nonlinear ways. Traditional formula-based engineering and human intuition often do not capture these interactions.
  2. The system cannot adapt quickly to internal or external changes (like the weather). This is because we cannot come up with rules and heuristics for every operating scenario.
  3. Each data center has a unique architecture and environment. A custom-tuned model for one system may not be applicable to another. Therefore, a general intelligence framework is needed to understand the data center’s interactions.

To address this problem, we began applying machine learning two years ago to operate our data centers more efficiently. And over the past few months, DeepMind researchers began working with Google’s data center team to significantly improve the system’s utility. Using a system of neural networks trained on different operating scenarios and parameters within our data centers, we created a more efficient and adaptive framework to understand data center dynamics and optimize efficiency.

We accomplished this by taking the historical data that had already been collected by thousands of sensors within the data center -- data such as temperatures, power, pump speeds, setpoints, etc. -- and using it to train an ensemble of deep neural networks. Since our objective was to improve data center energy efficiency, we trained the neural networks on the average future PUE (Power Usage Effectiveness), which is defined as the ratio of the total building energy usage to the IT energy usage. We then trained two additional ensembles of deep neural networks to predict the future temperature and pressure of the data center over the next hour. The purpose of these predictions is to simulate the recommended actions from the PUE model, to ensure that we do not go beyond any operating constraints.
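To make the setup concrete, here is a toy sketch of the modeling idea: compute PUE from energy readings, then train a small ensemble of neural networks to predict it from sensor features. The feature names, data and model sizes are invented for illustration; this is not DeepMind’s actual system.

```python
# A toy sketch of the approach described above -- not DeepMind's actual
# model. The sensor features and synthetic PUE target are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

def pue(total_building_kwh, it_kwh):
    """Power Usage Effectiveness: total building energy / IT energy."""
    return total_building_kwh / it_kwh

# Synthetic snapshots: [outside_temp_c, pump_speed_pct, setpoint_c, it_load_mw]
rng = np.random.default_rng(0)
X = rng.uniform([0, 20, 15, 5], [35, 100, 25, 30], size=(500, 4))
y = 1.05 + 0.002 * X[:, 0] + 0.0005 * X[:, 1] + rng.normal(0, 0.01, 500)

# Train an ensemble of small networks and average their predictions,
# mirroring the "ensemble of deep neural networks" idea at toy scale.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]
predicted = np.mean([m.predict(X[:1])[0] for m in ensemble])
print(f"predicted PUE for the first snapshot: {predicted:.3f}")
```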

We tested our model by deploying it on a live data center. The graph below shows a typical day of testing, including when we turned the machine learning recommendations on and when we turned them off.

Google DeepMind graph showing results of machine learning test on power usage effectiveness in Google data centers

Our machine learning system was able to consistently achieve a 40 percent reduction in the amount of energy used for cooling, which equates to a 15 percent reduction in overall PUE overhead after accounting for electrical losses and other non-cooling inefficiencies. It also produced the lowest PUE the site had ever seen. 
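As a sanity check on how a 40 percent cooling reduction maps to a 15 percent reduction in PUE overhead, here’s the arithmetic with an assumed baseline; the starting PUE below is illustrative, not a figure from this post.

```python
# Illustrative arithmetic only -- the baseline PUE is an assumption.
baseline_pue = 1.12                      # assumed PUE before ML recommendations
overhead = baseline_pue - 1.0            # non-IT share of energy: 0.12
cooling_share = 0.375                    # share of overhead that is cooling, chosen
                                         # so a 40% cooling cut gives a 15% overhead cut
cooling = overhead * cooling_share       # 0.045
new_overhead = overhead - 0.40 * cooling # after using 40% less cooling energy
print(f"overhead reduction: {1 - new_overhead / overhead:.0%}")  # -> 15%
```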

Because the algorithm is a general-purpose framework to understand complex dynamics, we plan to apply this to other challenges in the data center environment and beyond in the coming months. Possible applications of this technology include improving power plant conversion efficiency (getting more energy from the same unit of input), reducing semiconductor manufacturing energy and water usage, or helping manufacturing facilities increase throughput.

We are planning to roll out this system more broadly and will share how we did it in an upcoming publication, so that other data center and industrial system operators -- and ultimately the environment -- can benefit from this major step forward.