
Cloud Covered: 6 things you might have missed from Google Cloud last year

What was new with Google Cloud in 2018? Well, it depends on what particular cloud technology you’re interested in. There was plenty of news on the AI and machine learning front, along with developments on a variety of enterprise cloud components. The open cloud community continued to be a thriving place to collaborate, and Google Cloud user productivity and efficiency grew, too.

These popular stories from last year illustrate some of what you can do with Google Cloud technology.

  1. Machine and deep learning made leaps. On the hardware front, special chips designed for high performance, called Cloud TPUs, are now broadly available to speed up machine learning tasks. And we partnered with NASA’s Frontier Development Lab to use ML to build simulations and algorithms to answer one big question: Is there life on other planets?
  2. Organizations are starting to extract more value from their data. Tools like BigQuery and the Ethereum blockchain dataset, which we recently made available to everyone, help businesses find insights from their data. And The New York Times is digitizing its huge photo archive, along with all its associated data, using Google Cloud storage and database technology.
  3. There’s a new way to keep your information secure. The Titan Security Key arrived in the Google Store in 2018. Use these security keys to add two-factor verification to your Google Accounts and other services. They’re designed to defend against attacks like phishing that steal user credentials.
  4. The cloud opened the door to creating all kinds of applications and projects. For game developers, the OpenMatch open source project cuts down on development time for building multiplayer games with its matchmaking framework. And a novelist is using the new Cloud Speech-to-Text API to add visuals to poetry readings.
  5. Productivity gains with cloud came in all shapes and sizes. Check out the new developer hub for G Suite, which provides lots of pro tips for developers to create, manage, and track their projects, including this tip on automatically adding a schedule from Google Sheets to Calendar.
  6. You can build on Google Cloud Platform (GCP) even more easily. A new container sandbox called gVisor arrived to give developers more options when building applications. Plus, we brought the infrastructure that powers Google Search to developers with Cloud Source Repositories for easier code search. And the Cloud Services Platform arrived in 2018—this integrated family of cloud services lets you build an end-to-end cloud while removing manual tasks from the daily workload.

For even more of what was popular last year in Google Cloud, take a look at the top Google Cloud Platform stories of 2018. And if one of your goals this year is to start using cloud more, mark your calendar to attend Google Cloud Next ’19.

Why we’re putting 1.6 million solar panels in Tennessee and Alabama

Hundreds of engineers, electricians and construction workers are building two new, energy-efficient Google data center campuses in the Southeastern U.S.—one in Tennessee and another in northern Alabama. And we’re not stopping there—we’re also putting more carbon-free energy on the electric grid that will power our servers in the region. In the coming years, Google will purchase the output of several new solar farms as part of a deal with the Tennessee Valley Authority (TVA), totaling 413 megawatts of power from 1.6 million solar panels—that’s equivalent to the combined size of 65,000 home rooftop solar systems.


An aerial view of our Tennessee data center under construction (photo credit: Aerial Innovations).

Located in Hollywood, Alabama and Yum Yum, Tennessee, the two biggest solar farms will be able to produce around 150 megawatts each. These solar sites will be among the largest renewable energy projects in the Tennessee Valley region, and the largest solar farms ever to be built for Google. Thanks to the abundant solar power generated by these new farms, electricity consumed by our data centers in Tennessee and Alabama will be matched with 100 percent renewable energy from day one, helping us match our annual electricity consumption as we grow.

Deploying solar farms does more than provide a cost-effective way to procure clean power. It will also create economic benefits for Tennessee and northern Alabama. TVA’s developer partners—NextEra Energy Resources and Invenergy—will hire hundreds of workers in the region, make long-term lease payments to property owners, and generate millions of dollars in economic activity and tax revenue for the broader community. To date, Google's more than 30 long-term contract commitments to purchase renewable energy have resulted in nearly $5 billion in investment worldwide.

Last year, we shared our long-term objective to source carbon-free electricity around the clock for each of our data centers. These new solar projects will bring us substantially closer to that goal in the Southeastern U.S. In the carbon heat map below, you can see how well our operations in the region will be matched with carbon-free energy on an hour-by-hour basis, compared to a scenario without the solar projects. The green ribbon that appears in the heat map illustrates how the solar farms will make the majority of our daytime electricity use carbon-free.


Thanks to the deployment of 1.6 million solar panels, approximately 72 percent of our data center electricity use in Alabama and Tennessee will be matched on an hourly basis with carbon-free sources—compared to a status-quo regional grid mix that is 48 percent carbon free. (This projection is based on 2017 TVA generation, power demand of a typical Google data center, and local solar resources.)
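The hourly-matching metric described above can be sketched as a simple calculation over paired hourly series of demand and carbon-free supply: in each hour, only carbon-free generation up to that hour's demand counts, so surplus at noon can't offset a deficit at midnight. The numbers below are illustrative toys, not TVA or Google data, and `hourly_cfe_match` is a hypothetical helper name:

```python
# Sketch of hour-by-hour carbon-free matching, with made-up numbers.

def hourly_cfe_match(demand_mwh, carbon_free_mwh):
    """Percent of total demand matched hour-by-hour with carbon-free energy.

    In each hour, carbon-free generation only counts up to that hour's
    demand -- surplus in one hour cannot offset a deficit in another.
    """
    matched = sum(min(d, c) for d, c in zip(demand_mwh, carbon_free_mwh))
    return 100.0 * matched / sum(demand_mwh)

# Toy example: flat 10 MWh/h demand, solar that peaks at midday.
demand = [10.0] * 24
solar = [0, 0, 0, 0, 0, 1, 3, 6, 9, 12, 14, 15,
         15, 14, 12, 9, 6, 3, 1, 0, 0, 0, 0, 0]
grid_cfe = [4.8] * 24  # a 48% carbon-free baseline grid mix

baseline = hourly_cfe_match(demand, grid_cfe)
with_solar = hourly_cfe_match(demand, [g + s for g, s in zip(grid_cfe, solar)])
print(f"baseline: {baseline:.0f}%  with solar: {with_solar:.0f}%")
# -> baseline: 48%  with solar: 73%
```

The midday solar surplus is capped at demand, which is why adding a large solar farm raises the hourly-matched share substantially but not to 100 percent.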

There’s still more to do to make our data centers fully carbon free around the world, and we have a number of ideas on how to get there. We’re one step closer thanks to the solar stardom of Hollywood, Alabama and the carbon-free flavors of Yum Yum, Tennessee.

Four things you might have missed from Chrome Enterprise in 2018

It’s been a busy year for Chrome Enterprise—we welcomed new hardware for enterprises, helped boost workplace productivity, and celebrated ten years of Chrome. Here’s a look at four updates you might have missed from Chrome Enterprise in 2018.

1. We helped businesses prepare for the era of cloud workers

The availability of cloud-based apps and technology has fundamentally changed the way we work, and as a result, many businesses are rethinking the devices and tools they provide their workforce. This year we commissioned a study with Forrester that takes a closer look at the new era of cloud workers. We hosted a half-day virtual event, Cloud Worker Live, to share insights and practical advice, and we’ve made all the sessions available to watch online.

We also want to help businesses identify the cloud workers in their organization so they can better support them with the right cloud-based tools. A new Forrester report we commissioned provides key recommendations for workforce segmentation, and we offered some insights on how we do it ourselves here at Google.

2. We launched our Grab and Go program to help businesses stay productive

When an employee’s device isn’t working, it can have more consequences than you think—from the hours employees devote to troubleshooting devices instead of completing projects, to the time IT teams spend on repair and replacement. To address this problem for both workers and businesses, we introduced our Grab and Go program to enterprises in July. Since then, we’ve expanded the program with new partners, and Waymo shared with us how Grab and Go has helped them support their shift workers and dispatchers. You can learn more about Grab and Go on our website.

3. We helped admins stay up to date with Chrome releases

If looking after Chrome browser and devices is part of your job, you probably know that Chrome releases a full OS update about every 6 weeks. Our new Admin Insider series gives you a quick snapshot of the most important changes so you can take action. And if you need even more info, you can now sign up to receive new release details as they become available.

4. We heard from customers all over the world

This year we took a closer look at more than a dozen enterprises that have adopted Chrome Enterprise in every corner of the world. For example, in India and Africa, Dr. Agarwal’s Eye Hospital is making clinical care easier for doctors and their patients by deploying more than a thousand Chrome devices across its 70 facilities. In France, Veolia, a global water, waste, and energy management company, is rolling out Chrome devices to all of its nearly 170,000 employees to increase productivity and collaboration across its offices on 5 different continents. And in Australia, Service NSW is providing better government services through Chrome-powered kiosks.

There’s a lot more to come in 2019. In the meantime, you can learn more about Chrome Enterprise on our website.

Cloud covered: What was new in Google Cloud for November

In November here in the U.S., we felt some Thanksgiving gratitude that there’s never a dull moment in cloud technology. We’ve been keeping track of what’s new and quickly evolving, from AI and ML tools to storage and databases. Here are a few of the highlights from last month in Google Cloud.

There’s a new way to make a Google Doc.

Here’s a new, time-saving (and dare we say, fun?) way to create a Google Doc when you’ve got to get your ideas down on the page immediately. Type doc.new, docs.new, or document.new into your web browser and it’ll bring up a new Google Doc. See how it works.

The New York Times uses Google Cloud to digitize its photo archive.

The New York Times photo archive, nicknamed “the morgue,” contains more than a hundred years’ worth of photos—five to seven million in all. The paper built a processing pipeline using Google Cloud Platform (GCP) products to digitize, organize and easily search those photos. See some of the pictures and read more on their plans.

Asia Pacific cloud users can access GCP data faster.

We were excited to announce the opening of our Hong Kong region last month, and plans for the Jakarta region, to bring faster access to GCP data and apps for users. Locating your company’s data closer to a cloud region means you can transmit that data faster, with lower network latency. Find your own location latency here.

Non-data scientists can now experiment with AI and ML.

Artificial intelligence (AI) and machine learning (ML) are hot topics in tech these days—but how do you even start using these concepts? Our new central AI Hub is now in its first stage of availability, offering pipelines, modules, and other preconfigured ML content. Check out real-world examples of AI and ML, like using data analytics to predict health problems or to identify potentially hazardous driving areas in Chicago.

We put forth our principles for building ethical AI.

AI is a fascinating technology, full of great potential. It’s also still a technology built by humans, dependent on us to input data and train models. We’re considering AI principles every step of the way: working to eliminate bias from AI models, using AI for positive results, making sure AI is interpretable by humans, and helping businesses prepare for a future with more automation built in. Find out more about how we’re creating AI ethics at Google.

We described our microservices vision.

A microservices architecture is one in which discrete, single-purpose services are composed into large, distributed apps that work in both hybrid and on-prem environments—especially interesting as businesses continue to run their IT operations both in their own data centers and with cloud resources. Container technology lets developers deploy new apps faster and adopt a microservices architecture more easily. The missing piece has been a management layer. Read more on how Istio fills the gap.

For all of what we covered in November, check out the Google Cloud blog.

Bringing the power of cloud to news organizations

As news consumption becomes increasingly digital, local, small and medium-sized news organizations need new tools to thrive. We created the Google News Initiative Cloud Program to help publishers use Google Cloud to come up with imaginative solutions to business and storytelling challenges. The first phase of the program focused on providing 200,000 free G Suite licenses to news companies with fewer than 500 employees through this application.

Building on that effort, today we’re opening applications for the GNI Cloud Credit Program. This will give qualifying organizations with fewer than 1,000 employees the opportunity to apply for up to $100,000 each in Google Cloud Platform credits, as well as up to $50,000 in implementation support. This provides publishers with an on-ramp to implement technologies that can help them build more sustainable businesses and provide readers with relevant, engaging and more personalized content.

With a wide range of tools, cloud technology can be tailored to each news organization’s unique needs. To help them get the most out of their cloud credits, all publishers in the program will work with third-party cloud specialists to craft a strategy that uses cloud’s diverse tools to support storytelling and business needs.

For example, with Google Cloud Platform credits, publishers can simplify time-intensive tasks like translating articles and transcribing interviews through tools like Cloud Speech to Text and the Cloud Translation API.

Cloud can also help publishers understand articles and classify content using the Natural Language API to provide more personalized offerings to their readers, and intelligently organize photo archives of millions of images so reporters can uncover new sources of information and tell more engaging stories.

With BigQuery and machine learning, publishers can modernize their infrastructure to improve distribution and analyze digital behaviors to better understand their audiences. And publishers will be able to build more scalable, engaging app experiences with tools like Firebase, while lessening the burden on their support teams.

The Cloud Program is a key part of the Google News Initiative’s mission to elevate quality journalism, enable new business models, and empower news organizations to innovate through technology. We are partnering with key industry associations around the world, including WAN-IFRA, ONA, and LMA, to spread the word about this program to more news organizations around the globe.

You can learn more about the Google News Initiative here.

The Internet is 24×7. Carbon-free energy should be too.

Electricity is the fuel that allows our data centers to deliver billions of Google searches, YouTube views, and much more—every single day, around the clock. Our commitment to carbon-free energy should be around the clock too.

Today we published an inside look at the sources of Google's electricity around the globe, to gauge how we're tracking toward our long-term aspiration of sourcing carbon-free energy on a truly 24x7 basis. Our new discussion paper highlights how some of our data centers—like the one in Hamina, Finland—are already performing remarkably well on this front. The paper shares location-specific “Carbon Heat Maps” to visualize how well a data center is matched with carbon-free energy on an hour-by-hour basis. For Hamina, a heat map shows that 97 percent of the facility’s electricity use last year was matched with carbon-free sources.


Last year, 97 percent of our Finland data center’s electricity use was matched on an hourly basis with carbon-free sources.

The predominance of carbon-free energy at our Finland data center is partly due to Google’s purchases of wind energy in the Nordic region. Indeed, our large-scale procurement of wind and solar power worldwide is a cornerstone of our sustainability efforts, and has made Google the world’s largest corporate buyer of renewable energy. Last year we matched 100 percent of our annual electricity consumption with renewable energy purchases, and will continue to do so as we grow.

In many cases, we’ve partnered with local utilities and governments to increase the supply of renewable energy in the regions where we operate. For example, near our data center in Lenoir, NC, we worked with our local electricity supplier to establish one of the first utility solar purchase programs in the U.S. Solar alone, however, is unable to provide electricity around the clock. When the sun is shining, our Lenoir data center runs largely on carbon-free energy (indicated by the midday green ribbon in the Carbon Heat Map below), but at night it’s more carbon-intensive; we plan to tackle this in the coming years by procuring additional types of carbon-free energy.


Last year, 67 percent of our North Carolina data center’s electricity use was matched on an hourly basis with carbon-free sources.

The Carbon Heat Maps demonstrate that there are times and places where our electricity profile is not yet fully carbon-free. They suggest that our 100 percent renewable energy purchasing goal—which relies on buying surplus renewable energy when it’s sunny and windy, to offset the lack of renewable energy supply in other situations—is an important first step toward achieving a fully carbon-free future. Ultimately, we aspire to source carbon-free energy for our operations in all places, at all times.

Creating a carbon-free future will be no easy feat, but the urgency of climate change demands bold solutions. Our discussion paper identifies several key actions that we and the rest of the world must take—including doubling down on renewable energy purchases in a greater number of regions—to achieve 24x7 carbon-free energy. We have our work cut out for us and couldn’t be more excited to push forward.

Protect your online accounts with Titan Security Keys

Phishing—when an attacker tries to trick you into giving them your credentials—is a common threat to all online users. Google's automated defenses securely block the overwhelming majority of sign-in attempts even if an attacker has your username or password, but we always recommend you enable two-step verification (2SV) to further protect your online accounts.

There are many forms of 2SV—from text (SMS) message codes, to the Google Authenticator app, to hardware second factors like security keys. And while any second factor will greatly improve the security of your account, for those who want the strongest account protection, we’ve long advocated the use of security keys for 2SV.
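As a rough illustration of how one of those software second factors works: the six-digit codes generated by apps like Google Authenticator are standard TOTP values (RFC 6238), derived from a shared secret and the current 30-second time step. This is a stdlib-only sketch, not Google's implementation, and the secret used below is the RFC test key, not a real credential:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    counter = int(for_time if for_time is not None else time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", for_time=59))  # -> "287082"
```

Because these codes can still be phished (a fake login page can relay them in real time), hardware security keys that cryptographically verify the site's origin offer stronger protection, which is the case for security keys made in the post above.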

Today, we’re making it easier to get a security key by making Google’s own Titan Security Keys available on the Google Store.

Titan Security Key


Titan Security Keys have extra “special sauce” from Google—firmware that’s embedded in a hardware chip within the key that helps to verify that the key hasn’t been tampered with. We’ve gone into more detail about how this works on the Google Cloud blog.

Titan Security Keys work with popular browsers (including Chrome) and a growing ecosystem of services (including Gmail, Facebook, Twitter, Dropbox and more) that support FIDO standards.

Getting started

It’s easy to get started with Titan Security Keys. Kits of two keys (one USB and one Bluetooth) are now available to U.S. customers on the Google Store and will be coming soon to additional regions.

To set them up with your Google Account, sign in and navigate to the 2-Step Verification page (see detailed instructions on our help center). Titan Security Keys are also compatible with the Advanced Protection Program, Google's strongest security for users at high risk. And Google Cloud admins can enable security key enforcement in G Suite, Cloud Identity, and Google Cloud Platform to ensure that users use security keys for their accounts.

For more information, visit our website or read our detailed post on Google Cloud.

Unlock your team’s creativity: running great hackathons

Creative, talented employees have awesome ideas, but chances are they rarely have enough time to actually try them out and find out which ones are worth pursuing. To let that imagination run free and spur innovation, companies need to create space and opportunities for employees to try out crazy new proposals. That’s why we regularly set aside some time to build a small, ad-hoc team around an idea: brainstorm, design, hack, and share what we discovered.


A hackathon shifts the routine, gets people out of their comfort zone, and allows decisions to be made quickly. It creates new leadership opportunities, a chance to experiment, and an invitation to innovate. For our teams it’s also resulted in new products, new applications of emerging technologies, and important new cross-team collaborations. While not every hackathon will result in new products or features, we always find value in the learning and exploring that occurs.


Here are our tips for setting up a successful hackathon at your workplace:


Get support from your management and executive leadership.

A hackathon requires asking people to set aside their normal work for a few days (or a whole week), which will impact short-term progress toward quarterly or annual goals. Make sure your leadership actively supports the hackathon and its goals, so the team isn’t getting mixed messages about the trade-offs involved.


Your leaders also need to set the scene for the hackathon itself: what’s our goal for this hackathon, and what is expected from participants? This is a perfect time to emphasize the opportunity for risk-taking, crazy ideas, new technology experiments and creativity. A hackathon gives leaders the opportunity to empower the team to make decisions, tackle problems in new ways, and fail spectacularly.


Some of those failures can teach you more about your own process, infrastructure and tooling than successful efforts might—allowing the entire organization to become more efficient and productive. In other words, hackathons may only result in learning, not fantastic new product ideas; it’s a gamble, but a good one to take.


Get the right people in the room.

The magic of a hackathon is it encourages your teams to mix and work with new people, so they aren’t just coding with the folks they work with every day. Gather experts in a variety of relevant subject areas (machine learning, privacy, cloud storage, mobile development, etc.) to act as advisors and technology problem solvers, so teams don’t burn time trying to learn new technology from scratch.


Organize, organize, organize.

Organizing and running the hackathon takes its own big chunk of work. We set aside one or two large spaces for presentations and team formation. We set up an internal website to gather information and publicize, and get fun swag items that encourage participation and act as mementos or trophies. In the end we evaluate projects by voting, and award prizes to the top teams.


Real collaboration happens best face to face, and everyone being in the same room allows for free-flowing conversation. We usually coordinate simultaneous hackathons at multiple office sites to minimize travel time and open up participation to folks on the greater team, regardless of their location.


Prepare your hackers by giving prompts in advance.

We’ve found that a variety of prompts and brainstorming exercises in the lead-up to the hackathon help people hit the ground running when the week starts. For example, you can ask people to finish sentences like these:

  • I wish I could …

  • How might we …

  • If only I could take time to fix …

  • It’s such a pain that …

  • Wouldn’t it be better for everyone if …


These prompts can help push people to think outside their normal scope of work. They might experiment with changes to commonly used processes or tools, or try to solve an existing business problem in a totally novel way. We sometimes see teams organize around work that removes a cumbersome task they have to do but don’t want to, or something they can’t do but wish they could.


You may want to schedule tech talks in the week or two before the hackathon, to get people thinking or inspire new ideas. These can cover new technologies you want to explore (augmented reality, deep learning, new wireless protocols), unsolved problems that need attention, or basics of a platform or piece of infrastructure that’s likely to be used by many teams.


Next up

I’ll be back with part two next week, covering advice for forming groups, sharing ideas and showcasing the results of your time hacking.

Google Cloud’s continuing commitment to advance healthcare data interoperability

Patient needs are at the forefront of everything Google Cloud builds for healthcare. And as patient expectations for seamless experiences have increased, so has our commitment to eliminating the technological barriers that make it challenging for providers to deliver connected care.

Data interoperability is one important element to delivering connected care to patients. At HIMSS 2017, we announced our support for the HL7 FHIR (Fast Healthcare Interoperability Resources) Foundation to help the developer community advance data interoperability. Earlier this year we launched our Cloud Healthcare API to provide a scalable and security-focused infrastructure solution designed to ingest, process, and manage key healthcare data types. The Cloud Healthcare API empowers customers to use their healthcare data—including HL7v2, FHIR, and DICOM—for analytics and machine learning in the cloud.

To deliver true healthcare data interoperability, many stakeholders across the healthcare ecosystem need to collaboratively develop and support open standards, open specifications, and open source tools that facilitate frictionless healthcare data exchange with appropriate permissions and controls.

To that end, today at the Blue Button 2.0 Developer Conference at the White House, Google, along with Amazon, IBM, Microsoft, Oracle, and Salesforce, is announcing a joint commitment to removing barriers to the adoption of technologies for healthcare interoperability, particularly those enabled through the cloud and AI. The common goal of this program is to deliver better patient care, higher user satisfaction, and lower costs across the entire health ecosystem.

The statement is available here on ITIC.org.

After NEXT 2018: Trends in higher education and research

From classrooms to campus infrastructure, higher education is rapidly adapting to cloud technology. So it’s no surprise that academic faculty and staff were well represented among panelists and attendees at this year’s Google Cloud Next. Several of our more than 500 breakout sessions at Next spoke to the needs of higher education, as did critical announcements like our partnership with the National Institutes of Health to make public biomedical datasets available to researchers. Here are ten major themes that came out of our higher education sessions at Next:

  1. Collaborating across campuses. Learning technologists from St. Norbert College, Lehigh University, University of Notre Dame, and Indiana University explained how G Suite and CourseKit, Google’s new integrated learning management tool, are helping teachers and students exchange ideas.
  2. Navigating change. Academic IT managers told stories of how they’ve overcome the organizational challenges of cloud migration and offered some tips for others: start small, engage key stakeholders, and take advantage of Google’s teams of engineers and representatives, who are enthusiastic and knowledgeable allies. According to Joshua Humphrey, Team Lead, Enterprise Computing, Georgia State University, "We've been using GCP for almost three years now and we've seen an average yearly savings of 44%. Whenever people ask why we moved to the cloud this is what we point to. Usability and savings."
  3. Fostering student creativity. In our higher education booth at Next, students demonstrated projects that extended their learning beyond the classroom. For example, students at California State University at San Bernardino built a mobile rover that checks internet connectivity on campus, and students at High Tech High used G Suite and Chromebooks to help them create their own handmade soap company.
  4. Reproducing scientific research. Science is built on consistent, reliable, repeatable findings. Academic research panelists at the University of Michigan are using Docker on Compute Engine to containerize pipeline tools so any researcher can run the same pipeline without having to worry about affecting the final outcome.
  5. Powering bioinformatics. Today’s biomedical research often requires storing and processing hundreds of terabytes of data. Teams at SUNY Downstate, Northeastern, and the University of South Carolina demonstrated how they used BigQuery and Compute Engine to build complex simulations and manage huge datasets for neuroscience, epidemiology, and environmental research.
  6. Accelerating genomics research. Moving data to the cloud enables faster processing to test more hypotheses and uncover insights. Researchers from Stanford, Duke, and Michigan showed how they streamlined their genomics workloads and cut months off their processing time using GCP.
  7. Democratizing access to deep learning. AutoML Vision, Natural Language, and Translation, all in beta, were announced at Next and can help researchers build custom ML models without specialized knowledge in machine learning or coding. As Google’s Chief Scientist of AI and Machine Learning Fei-Fei Li noted in her blog post, Google’s aim “is to make AI not just more powerful, but more accessible.”
  8. Transforming LMS analytics. Scalable tools can turn the data collected by learning management systems and student information services into insights about student behavior. Google’s strategic partnership with Unizin allows a consortium of universities to integrate data and learning sciences, while Ivy Tech used ML Engine to build a predictive algorithm to improve student performance in courses.
  9. Personalizing machine learning and AI for student services. We’re seeing a growing trend of universities investigating AI to create virtual assistants. Recently Strayer University shared with us how they used Dialogflow to do just that, and at Next, Carnegie Mellon walked us through their process of building SARA, a socially-aware robot assistant.
  10. Strengthening security for academic IT. Natural disasters threaten on-premises data centers, with earthquakes, flooding, and hurricanes demanding robust disaster-recovery planning. Georgia State, the University of Minnesota, and Stanford’s Graduate School of Business shared how they improved the reliability and cost-efficiency of their data backup by migrating to GCP.



To learn more about our solutions for higher education, visit our website, explore our credits programs for teaching and research, or speak with a member of our team.