Monthly Archives: February 2020

Exploring personal histories for Black History Month on Google Cloud

During Black History Month, we honor those who have come before us, the legends who inspire us, and especially the people in our midst every day.

We sat down with a few Cloud Googlers who help bring our cloud technology to more people and businesses to tackle issues ranging from promoting sustainable fishing globally to quantifying the impact of projects that make clean water more available around the world. We discussed their personal histories, the people and moments that inspire them, and how identity shapes their work, and we heard anecdotes about working in Congress, traveling the world, and more.

Here, they share the path they took to Cloud and some of the things they’ve learned along the way.


Michee Smith, Product Manager
Being yourself always pays off.

As a product manager within Google Cloud, Michee Smith is responsible for making sure products work as expected for people. Michee’s area of focus is customer privacy. She’s passionate about making customers comfortable with keeping their data in our cloud. For her, it’s important to make privacy products easy to use, and that customers know what to expect around data access.

Michee’s path to technology started at Rochester Institute of Technology, where she knew she’d be around people who were different from her. That helped build an understanding and empathy for different cultures and groups of people that still inform her work today. “I’ve always had a belief in myself, which I credit in part to being raised in the Black church, a supportive and encouraging environment,” she says.

Her advice for those entering tech fields? Don’t counsel yourself out of doing anything. Other people might tell you “no,” but don’t let yourself be the one to say it.

“I want people to know I’m not a unicorn—I’m not here because I’m necessarily special, but because I haven’t let rejection stop me,” Michee says. “The superpower I rely on is that I won’t let other people tell me I’m not good enough.”


Hamidou Dia, VP, Solutions Engineering
Education is a lifelong pursuit.

From his Senegalese childhood to his European education to his work running the global solutions engineering organization at Google Cloud, Hamidou Dia has always had a passion for education. At Google, he leads a team focused on helping customers around the world and across industries solve their most complex business problems using Google Cloud technology. 

Hamidou’s passion for education was instilled by his mother who knew that education would open doors for him. After being selected to attend one of only five high schools in Senegal, he then attended college in France on academic scholarship. It was there he first interacted with a PC, wrote his first program and got hooked on technology, deciding to study engineering and then earning a master’s degree in computer science. Says Hamidou, “I love technology and how it can be so helpful in everyday life, and I knew right away it was the field for me.” 

Having lived in the U.S. for over 20 years and raising his family here, Hamidou has always advised his kids to embrace their heritage and stay true to themselves. “Don’t let others tell you what you can and can’t do,” he says. “Carve your own path.” 

What advice does he give to those new to the workforce? Be passionate about continuous learning and growth, no matter where you are in your career. “I always refer back to the principles I was raised with in West Africa. Number one is character. It’s having integrity in everything you do,” he says. “Second is that it’s all about hard work. In the technology industry, finding your area of expertise, and always continuing to learn more, is how you can stay on top of your game. And finally, don’t be afraid. The greatest challenges are often where the greatest opportunities lie.”

Albert Sanders, Senior Counsel, Government Affairs & Public Policy 
People, policy and technology make a big impact.

Albert Sanders has worked in the White House, negotiated bipartisan deals in Congress, and recently addressed the United Nations General Assembly. His personal and professional travels have taken him to five continents—and he’s visited 11 (and counting!) countries in Africa. At Google Cloud, his team works with governments across the globe to pave the way for new Google Cloud data centers that help expand access to technology and enable more people to benefit from cloud computing.

Choosing a career in public policy stemmed from an early interest in government, and his experience in an overcrowded high school, where there were often not enough seats or textbooks to go around. “I learned early on that the decisions made in city halls, capitol buildings, and government agencies have a direct impact—sometimes positive, sometimes negative—on real people,” he says. That path started with law school, and led to work on Capitol Hill and then in the Obama White House.

Connecting to his history started with Albert’s first trip to South Africa several years ago. “Traveling through Africa is intensely personal,” he says. “Many Americans may take for granted that they can trace their family origins to places outside the United States. One of the many enduring legacies of slavery is that most African Americans don’t have that direct connection to their family history. I may not know the names of my ancestors or the place of their birth, but I’m reminded regularly that they passed on to us a resilience, faith, and determination that could not be shackled.”

Along the way, Albert has gained some advice that he passes on to mentees and others: “Embrace the uncomfortable and unprecedented. And don’t be afraid to advocate for yourself,” he says. And finally: “Representation matters. One of the reasons I do my best every day is because I’m aware that I must excel for myself—and for other people of color who are still terribly underrepresented in our industry. I appreciate Google’s various initiatives to address this issue. I’m committed to doing my part to support those efforts, ensure accountability, and show what’s possible when diverse perspectives and people have a seat at the table.”

We’re grateful to these Cloud Googlers for sharing their stories, and we look forward to lots more history being made in and through technology.

3 Ways DevFest is Solving for the Environment


In 2019, powerful conversations on how to solve climate change took place all over the world. In that spirit, the DevFest community recently joined the discussion by looking at how tech can change how we approach the environment. For our new readers, DevFests are community-led developer events, hosted by Google Developer Groups, that take place all around the world and are focused on building community around Google’s technologies.

From DevFests in the rainforests of Brazil to the Mediterranean Sea, our community has come together to take part in a high-powered exchange of ideas, focused on how to solve for the environment. Out of these DevFests have come 3 new solutions aimed at using tech to build eco-friendly economies, make seas safer, and care for crops. Check them out below!

1. Blockchain for Biodiversity - DevFest Brazil


Blockchain for Biodiversity comes out of the “Jungle’s DevFest”, an event hosted by GDG Manaus that took place in the Brazilian Rainforest with over 1,000 participants. Imagined by Barbara Schorchit’s startup GeneCoin, the idea focuses on using blockchain-based solutions to track how “biodiversity friendly” different products and companies are. Her goal is to help create a more eco-friendly economy by providing consumers with information to hold companies accountable for their environmental footprint. This tool for tracking environmentally conscious products will allow consumers to better understand what materials goods are made from, what energy sources were used to produce them, and how their creation will impact climate change. Welcome to a whole new take on purchasing power.

2. ECHO Marine Station - DevFest Mediterranean


The ECHO Marine Station comes from a partnership between the Italian Coast Guard and DevFest Mediterranean, hosted by GDG Nebrodi, Palermo, and Gela. The marine vehicle was created with the hope of becoming the “space station of the seas”: it is rechargeable via solar panels, equipped with low-consumption electric motors, and produces no waves or noise, so as not to disturb marine life. The DevFest team took the lead on developing the software and hardware for the station with the intention of creating a tool that can analyze and monitor any marine environment.

At the DevFest, two innovative ideas were proposed for how best to put the marine station to use. The first was to program it to collect radioactive ions released by nuclear power plants, with the goal of analyzing and eventually disposing of the waste. The second was to program the station to carry out expeditions to collect water samples, analyze pollution levels, and report dangerous conditions back to the coast guard.

With DevFest and the Italian Coast Guard working together, the ECHO Marine Station is ready to change how we save our seas.

3. Doctor's Eyes for Plants - DevFest MienTrung, Vietnam


Doctor’s Eyes for Plants is an idea that came out of DevFest Vietnam’s “Green Up Your Code” event, hosted by GDG MienTrung, Vietnam. You can watch highlights from their event here. Created by local students, the program uses a public dataset with TensorFlow to train a region-based convolutional neural network to recognize 6 diseases across 3 different plant species, all built in 48 hours. The team was recently able to apply the technology to rice plants, which could revolutionize the world’s capacity and efficiency for growing the crop. The rewards of this technology could be unprecedented. The creation of a healthier ecosystem paired with the chance to feed more people? Now that’s high impact.

Inspired by these stories? Find a Google Developer Groups community near you, at developers.google.com/community/gdg/

Ultra-High Resolution Image Analysis with Mesh-TensorFlow



Deep neural network models form the backbone of most state-of-the-art image analysis and natural language processing algorithms. With the recent development of large-scale deep learning techniques such as data and model parallelism, large convolutional neural network (CNN) models can be trained on datasets of millions of images in minutes. However, applying a CNN model on ultra-high resolution images, such as 3D computed tomography (CT) images that can have up to 10^8 pixels, remains challenging. With existing techniques, a processor still needs to host a minimum of 32GB of partial, intermediate data, whereas individual GPUs or TPUs typically have only 12-32GB memory. A typical solution is to process image patches separately from one another, which leads to complicated implementation and sub-optimal performance due to information loss.

In “High Resolution Medical Image Analysis with Spatial Partitioning”, a collaboration with the Mayo Clinic, we push the boundary of massive data and model parallelism through use of the Mesh-TensorFlow framework, and demonstrate how this technique can be used for ultra-high resolution image analysis without compromising input resolution for practical feasibility. We implement a halo exchange algorithm to handle convolutional operations across spatial partitions in order to preserve relationships between neighboring partitions. As a result, we are able to train a 3D U-Net on ultra-high resolution images (3D images with 512 pixels in each dimension), with 256-way model parallelism. We have additionally open-sourced our Mesh-TensorFlow-based framework for both GPUs and TPUs for use by the broader research community.

Data and Model Parallelism with Mesh-TensorFlow
Our implementation is based on the Mesh-TensorFlow framework for easy and efficient data and model parallelism, which enables users to split tensors across a mesh of devices according to a user-defined image layout. For example, users may provide the mesh of computational devices as 16 rows by 16 columns for a total of 256 processors, with two cores per processor. They then define the layout to map the spatial dimension x of their image to processor rows, map spatial dimension y to processor columns, and map the batch dimension (i.e., the number of image segments to be processed simultaneously) to cores. The partitioning and distributing of a training batch is implemented by Mesh-TensorFlow at the tensor level, without users needing to worry about implementation details. The figure below shows the concept with a simplified example:
Spatial partitioning of ultra-high resolution images, in this case, a 3D CT scan.
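To make this layout concrete, here is a minimal sketch (not the code released with the paper) of how such a mesh and layout could be declared with the open-source mesh_tensorflow package; the dimension names and sizes are illustrative assumptions rather than the exact configuration used in the work.

```python
# Illustrative sketch only: declare a 16x16 processor mesh (two cores each) and a
# layout that shards the image's x dimension across rows, y across columns, and
# the batch dimension across cores, as described in the text above.
import mesh_tensorflow as mtf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "ct_mesh")

# 16 rows x 16 columns of processors, two cores per processor.
mesh_shape = mtf.convert_to_shape("rows:16;columns:16;cores:2")

# Tensor-dimension -> mesh-dimension mapping; unmapped dimensions stay unsplit.
layout_rules = mtf.convert_to_layout_rules("x:rows;y:columns;batch:cores")

# Logical dimensions of one small batch of 512^3 single-channel CT volumes.
batch = mtf.Dimension("batch", 2)
x = mtf.Dimension("x", 512)
y = mtf.Dimension("y", 512)
z = mtf.Dimension("z", 512)
channels = mtf.Dimension("channels", 1)
volume_shape = mtf.Shape([batch, x, y, z, channels])

# mesh_shape and layout_rules would then be handed to a mesh implementation when
# lowering the Mesh-TensorFlow graph to TensorFlow for execution on GPUs or TPUs.
```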
Spatial Partitioning with Halo Exchange
A convolution operation executed on an image often applies a filter that extends beyond the edge of the frame. While there are ways to address this when dealing with a single image, standard approaches do not take into account that for segmented images information beyond the frame edge may still be relevant. In order to yield accurate results, convolution operations on an image that has been spatially partitioned and redistributed across processors must take into account each image segment’s neighbors.

One potential solution might be to include overlapping regions in each spatial partition. However, since there are very likely many subsequent convolutional layers and each of them introduces overlap, the overlap will be relatively large — in fact, in most cases, the overlap could cover the entire image. Moreover, all overlapping regions must be included from the start, at the very first layer, which may run into the memory constraints that we are trying to resolve.

Our solution is totally different: we implemented a data communication step called halo exchange. Before every convolution operation, each spatial partition exchanges (receives and sends) margins with its neighbors, effectively expanding the image segment at its margins. The convolution operations are then applied locally on each device. This ensures that the result of the convolutions for the whole of the image remain identical with or without spatial partitioning.
Halo exchange ensures that cross-partition convolutions handle image segment edges correctly.
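As a rough illustration of the idea (a 1-D toy with NumPy, not the released 3D implementation), the sketch below exchanges one-pixel margins between neighboring partitions before applying a local convolution, and checks that the partitioned result matches the convolution over the full, unpartitioned signal:

```python
import numpy as np

def halo_exchange_1d(parts, halo):
    """Pad each partition with `halo` values received from its left/right neighbor
    (zeros at the global boundary), so a local convolution sees the same
    neighborhood it would see on the unpartitioned signal."""
    padded = []
    for i, p in enumerate(parts):
        left = parts[i - 1][-halo:] if i > 0 else np.zeros(halo)
        right = parts[i + 1][:halo] if i < len(parts) - 1 else np.zeros(halo)
        padded.append(np.concatenate([left, p, right]))
    return padded

# Toy check: a 3-tap "same" convolution applied per partition after halo exchange
# matches the convolution applied to the full signal.
kernel = np.array([1.0, 2.0, 1.0])
signal = np.arange(12, dtype=float)
full = np.convolve(signal, kernel, mode="same")

parts = np.split(signal, 4)                       # 4 spatial partitions
padded = halo_exchange_1d(parts, halo=1)          # exchange 1-pixel margins
local = [np.convolve(p, kernel, mode="valid") for p in padded]
assert np.allclose(np.concatenate(local), full)   # identical with or without partitioning
```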
Proof of Concept - Segmentation of Liver Tumor CT Scans
We then applied this framework to the task of segmenting 3D CT scans of liver tumors (LiTS benchmark). For the evaluation metric, we use the Sørensen–Dice coefficient, which ranges from 0.0 to 1.0 with a score of 0 indicating no overlap between segmented and ground truth tumor regions and 1 indicating a perfect match. The results shown below demonstrate that higher data resolution yields better results. Although the return tends to diminish when using the full 512^3 resolution (512 pixels in each of x, y, z directions), this work does open the possibility for ultra-high resolution image analysis.
Higher resolution data yields better segmentation accuracy.
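For reference, the Sørensen–Dice coefficient used above can be computed directly from two binary segmentation masks; this small sketch illustrates the formula 2|A ∩ B| / (|A| + |B|) on a toy volume:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Sørensen–Dice coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|). Returns 1.0 when both masks are empty."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example on a tiny 3D volume.
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True      # 8 predicted tumor voxels
truth[1:3, 1:3, 2:4] = True     # 8 ground-truth voxels, half of them overlapping
print(dice_coefficient(pred, truth))  # 2*4 / (8+8) = 0.5
```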
Conclusion
Existing data and model parallelism techniques enabled the training of neural networks with billions of parameters, but cannot handle input images above ~10^8 pixels. In this work, we explore the applicability of CNNs on these ultra-high resolution images, and demonstrate promising results. Our Mesh-TensorFlow-based implementation works on both GPUs and TPUs, and with the released code, we hope to provide a possible solution for some previously impossible tasks.

Acknowledgments
We thank our collaborators Panagiotis Korfiatis, Ph.D., and Daniel Blezek, Ph.D., from Mayo Clinic for providing the initial 3D U-Net model and training data. Thank you to Greg Mikels for the POC work with Mayo Clinic. Special thanks to all the co-authors of the paper, especially Noam Shazeer.

Source: Google AI Blog


Best Practices for News coverage with Search

Having up-to-date information during large public events is critical, as the landscape changes by the minute. This guide highlights some tools that news publishers can use to create a data-rich and engaging experience for their users.

Add Article structured data to AMP pages

Adding Article structured data to your news, blog, and sports article AMP pages can make the content eligible for an enhanced appearance in Google Search results. Enhanced features may include placement in the Top stories carousel, host carousel, and Visual stories. Learn how to mark up your article.
You can now test and validate your AMP article markup in the Rich Results Test tool. Enter your page’s URL or a code snippet, and the Rich Results Test shows the AMP Articles that were found on the page (as well as other rich result types), along with any errors or suggestions for your AMP Articles. You can also save the test history and share the test results.
We also recommend that you provide a publication date so that Google can display this information in Search results when it is useful to users.

Mark up your live-streaming video content

If you are live-streaming a video during an event, you can be eligible for a LIVE badge by marking your video with BroadcastEvent. We strongly recommend that you use the Indexing API to ensure that your live-streaming video content gets crawled and indexed in a timely way. The Indexing API allows any site owner to directly notify Google when certain types of pages are added or removed. This allows Google to schedule pages for a fresh crawl, which can lead to more relevant user traffic as your content is updated. For websites with many short-lived pages like livestream videos, the Indexing API keeps content fresh in search results. Learn how to get started with the Indexing API.
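As an example of the kind of call involved, here is a minimal sketch of publishing a URL_UPDATED notification with the Indexing API via the google-api-python-client; the service account file and URL are placeholders, and the service account typically needs to be verified as an owner of the site in Search Console.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Minimal sketch (placeholder file path and URL): notify Google that a
# livestream page was added or updated so it can be recrawled quickly.
SCOPES = ["https://www.googleapis.com/auth/indexing"]
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)

service = build("indexing", "v3", credentials=credentials)
response = service.urlNotifications().publish(body={
    "url": "https://example.com/live/breaking-news-stream",
    "type": "URL_UPDATED",        # or "URL_DELETED" when the page is removed
}).execute()
print(response)
```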

For AMP pages: Update the cache and use components

Use the following to ensure your AMP content is published and up-to-date the moment news breaks.

Update the cache


When people click an AMP page, the Google AMP Cache automatically requests updates to serve fresh content for the next person once the content has been cached. However, if you want to force an update to the cache in response to a change in the content on the origin domain, you can send an update request to the Google AMP Cache. This is useful if your pages are changing in response to a live news event.

Use news-related AMP components

  • <amp-live-list>: Add live content to your article and have it updated based on a source document. This is a great choice if you just want content to reload easily, without having to set up or configure additional services on your backend. Learn how to implement <amp-live-list>.
  • <amp-script>: Run your own JavaScript inside of AMP pages. This flexibility means that anything you are publishing on your desktop or non-AMP mobile pages, you can also bring over to AMP. <amp-script> supports WebSockets, interactive SVGs, and more, which allows you to create engaging news pages such as election coverage maps, live graphs, and polls. Because this is a newer feature, the AMP team is actively soliciting feedback on it. If for some reason it doesn't work for your use case, let us know.
If you have any questions, let us know through the forum or on Twitter.

Welcome to Google AdSense

Creating content your audience loves takes time, but making it profitable shouldn’t. That’s where Google AdSense comes in. 

With AdSense, trusted advertisers show their ads on your site, generating revenue that lets you keep creating great content and take your business to the next level. More than 2 million publishers, just like you, are using it.

AdSense gets you the best of Google’s automation and is fully customizable. Plus, signing up is free and easy. So get started today!



Announcing v3_0 of the Google Ads API beta

Today we’re announcing the v3_0 release of the Google Ads API beta. To use the v3_0 features via the new endpoint, please update your client libraries. If you are upgrading from v1 or v2, some of your code may require changes when you switch to the new v3 endpoint. Please see the migration guide for more information on breaking changes.

Here are the highlights:

Where can I learn more?

The following resources can help you get going with the Google Ads API:

The updated client libraries and code examples will be published next week. If you have any questions or need additional help, please contact us via the forum.

Data centers are more energy efficient than ever

While Google is the world’s largest corporate purchaser of renewable energy, we’re also taking action on climate change by minimizing the amount of energy we need to use in the first place. For more than a decade, we’ve worked to make our data centers as energy efficient as possible. Today, a new paper in Science validated our efforts and those of other leaders in our industry. It found that efficiency improvements have kept energy usage almost flat across the globe’s data centers—even as demand for cloud computing has skyrocketed.

The new study shows that while the amount of computing done in data centers increased by about 550 percent between 2010 and 2018, the amount of energy consumed by data centers only grew by six percent during the same time period. The study’s authors note that these energy efficiency gains outpaced anything seen in other major sectors of the economy. As a result, while data centers now power more applications for more people than ever before, they still account for about 1 percent of global electricity consumption—the same proportion as in 2010. 

What's more, research has consistently shown that hyperscale (meaning very large) data centers are far more energy efficient than smaller, local servers. That means that a person or company can immediately reduce the energy consumption associated with their computing simply by switching to cloud-based software. As the data center industry continues to evolve its operations, this efficiency gap between local computing and cloud computing will continue to grow.

Searching for efficiency

How are data centers squeezing more work out of every electron, year after year? For Google, the answer comes down to a relentless quest to eliminate waste at every level of our operations. We designed highly efficient Tensor Processing Units (the AI chips behind our advances in machine learning) and outfitted all of our data centers with high-performance servers. Starting in 2014, we even began using machine learning to automatically optimize cooling in our data centers. At the same time, we’ve deployed smart temperature, lighting, and cooling controls to further reduce the energy used at our data centers.

Our efforts have yielded promising results: Today, on average, a Google data center is twice as energy efficient as a typical enterprise data center. And compared with five years ago, we now deliver around seven times as much computing power with the same amount of electrical power. 

By directly controlling data center cooling, our AI-powered recommendation system is already delivering consistent energy savings of around 30 percent on average. And the average annual power usage effectiveness (PUE) for our global fleet of data centers hit a new record low of 1.10 in 2019, compared with the industry average of 1.67. Because PUE is the ratio of total facility energy to IT equipment energy, a PUE of 1.10 means only 0.10 units of overhead energy for every unit of IT equipment energy, versus 0.67 at a PUE of 1.67, so Google data centers use about six times less overhead energy.

Leading by example

So where do we go from here? We’ll continue to deploy new technologies and share the lessons we learn in the process, design the most efficient data centers possible, and disclose data on our progress. To learn about our efforts to power the internet using as little power as possible—and how we’re ensuring that the energy we use is carbon-free, around the clock—check out our latest Environment Report or visit our data center efficiency site.

Helping Developers with Permission Requests


User trust is critical to the success of developers of every size. On the Google Play Store, we aim to help developers boost the trust of their users by surfacing signals in the Play Console about how to improve their privacy posture. Toward this aim, we surface a message to developers when we think their app is asking for a permission that is likely unnecessary.
This is important because numerous studies have shown that user trust can be affected when the purpose of a permission is not clear [1]. In addition, research has shown that when users are given a choice between similar apps, and one of them requests fewer permissions than the other, they choose the app with fewer permissions [2].
Determining whether or not a permission request is necessary can be challenging. Android developers request permissions in their apps for many reasons: some related to core functionality, and others related to personalization, testing, advertising, and other factors. To help make this determination, we identify a peer set of apps with similar functionality and compare a developer’s permission requests to those of their peers. If a very large percentage of these similar apps are not asking for a permission, and the developer is, we then let the developer know that their permission request is unusual compared to their peers.

Our determination of the peer set is more involved than simply using Play Store categories. Our algorithm combines multiple signals that feed Natural Language Processing (NLP) and deep learning technology to determine this set. A full explanation of our method is outlined in our recent publication, entitled “Reducing Permission Requests in Mobile Apps,” which appeared at the Internet Measurement Conference (IMC) in October 2019 [3]. (Note that the threshold for surfacing the warning signal, as stated in this paper, is subject to change.)
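To illustrate the comparison step (a simplified sketch, not Google's production pipeline; the peer-set construction is far more involved and the real warning threshold is not public), the core logic amounts to flagging permissions that an app requests but that only a small fraction of its peers request:

```python
# Simplified illustration of the peer-comparison idea described above. The 5%
# threshold and the example permissions/peer set are placeholder assumptions.
from typing import Dict, List, Set

def unusual_permissions(app_permissions: Set[str],
                        peer_permissions: List[Set[str]],
                        max_peer_fraction: float = 0.05) -> Dict[str, float]:
    """Return {permission: fraction of peers requesting it} for permissions the
    app requests but that almost none of its functionally similar peers request."""
    flagged = {}
    for perm in app_permissions:
        fraction = sum(perm in peers for peers in peer_permissions) / len(peer_permissions)
        if fraction <= max_peer_fraction:
            flagged[perm] = fraction
    return flagged

# Hypothetical example: an app asking for fine location when its peers do not.
app = {"android.permission.CAMERA", "android.permission.ACCESS_FINE_LOCATION"}
peers = [{"android.permission.CAMERA"} for _ in range(99)] + \
        [{"android.permission.CAMERA", "android.permission.ACCESS_FINE_LOCATION"}]
print(unusual_permissions(app, peers))
# {'android.permission.ACCESS_FINE_LOCATION': 0.01}
```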
We surface this information to developers in the Play Console and let the developer make the final call as to whether or not the permission is truly necessary. It is possible that the developer has a feature unlike all of its peers. Once a developer removes a permission, they won’t see the warning any longer. Note that the warning is based on our computation of the set of peer apps similar to the developer’s app. This is an evolving set, frequently recomputed, so the message may go away if there is an underlying change to the set of peer apps and their behavior. Similarly, even if a developer is not currently seeing a warning about a permission, they might in the future if the underlying peer set and its behavior changes. An example warning is depicted below.

This warning also helps to remind developers that they are not obligated to include all of the permission requests occurring within the libraries they include inside their apps. We are pleased to say that in the first year after deployment of this advice signal, nearly 60% of warned apps removed permissions. Moreover, this occurred across all Play Store categories and all app popularity levels. The breadth of this developer response impacted over 55 billion app installs [3]. This warning is one component of Google’s larger strategy to help protect users and help developers achieve good security and privacy practices, such as Project Strobe, our guidelines on permissions best practices, and our requirements around safe traffic handling.
Acknowledgements
Giles Hogben, Android Play Dashboard and Pre-Launch Report teams

References

[1] Modeling Users’ Mobile App Privacy Preferences: Restoring Usability in a Sea of Permission Settings, by J. Lin, B. Liu, N. Sadeh, and J. Hong. In Proceedings of the Symposium on Usable Privacy and Security (SOUPS), 2014.
[2] Using Personal Examples to Improve Risk Communication for Security & Privacy Decisions, by M. Harbach, M. Hettig, S. Weber, and M. Smith. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.
[3] Reducing Permission Requests in Mobile Apps, by S. T. Peddinti, I. Bilogrevic, N. Taft, M. Pelikan, U. Erlingsson, P. Anthonysamy, and G. Hogben. In Proceedings of the ACM Internet Measurement Conference (IMC), 2019.

An innovation challenge to sustain diverse media

Most communities in North America are diverse. They are made up of people of various ethnicities, income levels, and countries of origin. In many cases, these diverse audiences are not effectively represented in the pages of their local news publications, or they remain untapped as an opportunity to increase engagement and grow the business of a news organization. That’s why it's increasingly important for publishers to understand the diverse communities they serve.


We hope this is where the Google Innovation Challenge can help. Last year, the GNI North American Innovation Challenge focused on generating revenue and increasing audience engagement for local journalism. Thirty-four projects in 17 states and provinces received $5.8 million, supporting efforts ranging from a way for local news providers to access and monetize audio clips to testing a new approach to local news discovery, engagement, and membership.


This year the spotlight will shift toward helping publishers understand their audiences so that they can build a sustainable business. Through our work with the Borealis Racial Equity in Journalism Fund, we know that communities with the least access to relevant news are also most likely to be left out of policy creation and civic processes. Diverse and ethnic media are a critical news source for underrepresented groups, filling gaps for stories that don’t reach mainstream outlets and providing a positive and authentic representation of their cultures. It's important that publishers who cover underrepresented audiences continue to thrive as the world becomes increasingly digital.


How this challenge works:


The North American GNI Innovation Challenge will provide funding for projects that have a clear focus on diversity, equity and inclusion in journalism, promote the creation of sustainable models for local media that address diverse audiences, and recognize that as an opportunity for driving engagement and revenue.


We’re looking for a breadth of projects, and examples might include using technology to understand the business impact of overlooking certain audiences, designing strategies to improve discovery of local and diverse content, or diversifying revenue streams. Please join us on March 18 at 9 a.m. PST for a town hall where we will answer your questions. You can tune in using this link.


How to apply: 

Applications open today, and the deadline to submit is May 12th, 2020 at 11:59 p.m. PT.  Over the next 10 weeks we’ll hold workshops and bootcamps to get the word out and answer questions about the Innovation Challenge. You can also get in contact with us at [email protected]


We’re looking forward to seeing what creative ideas you come up with.

Stadia Savepoint: February updates


With February coming to a close, we’re back with another issue of our Stadia Savepoint series, giving you a summary of recent news on Stadia.

This month we announced nine new games coming to Stadia, featuring three games launching First on Stadia. That included “Spitlings,” the chaotic multiplayer platformer that launched earlier this week and is the focus of our first developer Q&A with Massive Miniteam.

Stadia on new phones


Expanded Android support

We’ve added Stadia compatibility to 19 new phones from Samsung, ASUS, and Razer, bringing the ability to play our entire library across tens of millions of devices. See here for more info. 

New games coming to Stadia

  • SteamWorld Dig

  • SteamWorld Dig 2

  • SteamWorld Heist

  • SteamWorld Quest

  • Lost Words: Beyond the Page

  • Panzer Dragoon: Remake

  • Serious Sam Collection

  • Stacks on Stacks (on Stacks)

  • The Division 2

  • Doom Eternal

Recent content launches on Stadia

  • Spitlings

  • Monster Energy Supercross - The Official Videogame 3

  • Borderlands 3 - Moxxi's Heist of the Handsome Jackpot

  • Metro Exodus - Sam’s Story

  • Mortal Kombat 11 - The Joker

  • Mortal Kombat 11 - DC Elseworlds Skin Pack

Stadia Pro updates

  • New games are free to active Stadia Pro subscribers in March: GRID, SteamWorld Dig 2, and SteamWorld Quest.

  • Existing games still available to add to your collection: Destiny 2, Farming Simulator 19 Platinum Edition, Gylt, Metro Exodus and Thumper.

  • Act quickly: Farming Simulator 19 Platinum Edition leaves Stadia Pro on February 29.

  • Ongoing discounts for Stadia Pro subscribers: Check out the web or mobile Stadia store for the latest.

That’s it for February; we’ll be back soon to share more updates. As always, stay tuned to the Stadia Community Blog, Facebook, and Twitter for the latest news.