Stadia: a new way to play

For 20 years, Google has worked to put the world’s information at your fingertips. Instant delivery of that information is made possible through our data centre and network capabilities, and now we're using that technology to change how you access and enjoy video games.

Stadia is a new video game platform, delivering instant access to your favourite games on any type of screen—whether it’s a TV, laptop, desktop, tablet or mobile phone. Our goal is to make those games available in resolutions up to 4K and 60 frames per second with HDR and surround sound. We’ll be launching later this year in select countries including the U.S., Canada, U.K. and much of Europe.

To build Stadia, we’ve thought deeply about what it means to be a gamer and worked to converge two distinct worlds: people who play video games and people who love watching them. Stadia will lift restrictions on the games we create and play—and the communities who enjoy them.

Advanced game streaming 

Using our globally connected network of Google data centres, Stadia will free players from the limitations of traditional consoles and PCs.

When players use Stadia, they'll be able to access their games at all times, and on virtually any screen. And developers will have access to nearly unlimited resources to create the games they’ve always dreamed of. It’s a powerful hardware stack combining server-class GPU, CPU, memory and storage, and with the power of Google’s data centre infrastructure, Stadia can evolve as quickly as the imagination of game creators.

Data centres make Stadia possible, but what sets the system apart is how it works with other Google services. In a world where more than 200 million people watch game-related content daily on YouTube, Stadia makes many of those games playable with the press of a button. If you watch one of your favourite creators playing Assassin's Creed Odyssey, simply click the “play now” button. Seconds later, you’ll be running around ancient Greece in your own game, on your own adventure—no downloads, no updates, no patches and no installs.

But what’s a gaming platform without its own dedicated controller? Enter the Stadia controller*.

The Stadia controller

When we designed the Stadia controller, we listened to gamers about what they wanted in a controller. First, we developed a direct connection from the Stadia controller to our data centre through Wi-Fi for the best possible gaming performance. The controller also includes a button for instant capture, saving and sharing gameplay in up to 4K resolution. And it comes equipped with a Google Assistant button and built-in microphone.

Using Google’s vast experience, reach and decades of investment, we’re making Stadia a powerful gaming platform for players, developers and YouTube content creators of all sizes. We’re building a playground for every imagination.

*This device is a prototype unit and cannot be marketed, sold, leased, or distributed until it complies with applicable essential requirements and obtains required legal authorizations.

Dev Channel Update for Desktop

The dev channel has been updated to 74.0.3729.22 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Krishna Govind
Google Chrome


It’s now easier to insert images in cells in Google Sheets

What’s changing  

We’ve made it simpler to add images inside of cells in Google Sheets. Previously, it was only possible to insert publicly hosted images into a cell using the IMAGE function.

Now, you can insert any image, like those saved on your desktop or mobile device, into a cell by using the IMAGE function or the new option found inside the Insert menu.


Who’s impacted 

End users

Why you’d use it 

You’ve told us this feature would be helpful for many tasks, like: 
  • Adding receipts to expense-tracking spreadsheets 
  • Adding icons to icon libraries 
  • Adding logos to better brand your resources 
  • Adding product images to inventory lists, and more 

How to get started 

  • Admins: No action needed. 
  • End users: You can add images directly to cells in two ways on Desktop: 
    • Use the IMAGE() function 
    • Via the menu bar at the top of a Sheet: Insert > Image > Image in cell 
      • Select image from Drive or upload one. 

  • On Mobile: 
    • Tap once on a cell to select 
    • Tap again to bring up menu: Insert > Tap the “+” at the top of the screen > Image > Image in cell 
    • Select an image from the options presented to you. 
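
For the formula route, the IMAGE function takes an image URL (which, as noted above, must be publicly hosted when used this way) plus an optional display mode. A sketch of the syntax, with a placeholder URL:

```
=IMAGE("https://www.example.com/logo.png")              resizes the image to fit inside the cell (default)
=IMAGE("https://www.example.com/logo.png", 2)           stretches the image to fill the cell
=IMAGE("https://www.example.com/logo.png", 3)           shows the image at its original size
=IMAGE("https://www.example.com/logo.png", 4, 50, 200)  custom size: height 50px, width 200px
```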

Additional details 

You can have multiple cells containing an image in a Sheet, but note that only one image per cell is possible at the moment. 

Images inside cells will be associated with a row and move along with the data—so, if you move rows, filter or sort them, the images will move with the content in the row, unlike previously when images would sit on top of the grid. 

Using the formatting and alignment tools, you can pin the image to a specific corner of the cell or set the alignment how you’d like. By default, images will align to the bottom left corner of the cell.

Availability 

G Suite editions 
  • Available to all G Suite Editions. 

On/off by default? 
  • This feature will be ON by default. 

Measuring the Limits of Data Parallel Training for Neural Networks



Over the past decade, neural networks have achieved state-of-the-art results in a wide variety of prediction tasks, including image classification, machine translation, and speech recognition. These successes have been driven, at least in part, by hardware and software improvements that have significantly accelerated neural network training. Faster training has directly resulted in dramatic improvements to model quality, both by allowing more training data to be processed and by allowing researchers to try new ideas and configurations more rapidly. Today, hardware developments like Cloud TPU Pods are rapidly increasing the amount of computation available for neural network training, which raises the possibility of harnessing additional computation to make neural networks train even faster and facilitate even greater improvements to model quality. But how exactly should we harness this unprecedented amount of computation, and should we always expect more computation to facilitate faster training?

The most common way to utilize massive compute power is to distribute computations between different processors and perform those computations simultaneously. When training neural networks, the primary ways to achieve this are model parallelism, which involves distributing the neural network across different processors, and data parallelism, which involves distributing training examples across different processors and computing updates to the neural network in parallel. While model parallelism makes it possible to train neural networks that are larger than a single processor can support, it usually requires tailoring the model architecture to the available hardware. In contrast, data parallelism is model agnostic and applicable to any neural network architecture – it is the simplest and most widely used technique for parallelizing neural network training. For the most common neural network training algorithms (synchronous stochastic gradient descent and its variants), the scale of data parallelism corresponds to the batch size, the number of training examples used to compute each update to the neural network. But what are the limits of this type of parallelization, and when should we expect to see large speedups?
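
To make the correspondence between batch size and the scale of data parallelism concrete, here is a toy numpy sketch of synchronous data-parallel SGD (an illustrative example, not code from the study): each simulated worker computes the gradient on its shard of the batch, and the size-weighted average of the shard gradients equals the gradient of the whole batch, so adding workers simply amounts to using a larger batch.

```python
import numpy as np

def grad_mse(w, X, y):
    """Gradient of mean-squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sync_sgd_step(w, X_batch, y_batch, lr, num_workers):
    """One synchronous data-parallel update: shard the batch across
    workers, compute each shard's gradient 'in parallel', then average.
    The size-weighted average of shard gradients reproduces the
    full-batch gradient exactly."""
    X_shards = np.array_split(X_batch, num_workers)
    y_shards = np.array_split(y_batch, num_workers)
    grads = [grad_mse(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    sizes = np.array([len(ys) for ys in y_shards])
    g = np.average(grads, axis=0, weights=sizes)
    return w - lr * g

# Synthetic linear-regression workload.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

w = np.zeros(4)
for _ in range(200):
    idx = rng.choice(len(X), size=64, replace=False)  # batch size = 64
    w = sync_sgd_step(w, X[idx], y[idx], lr=0.1, num_workers=8)
```

After 200 steps the recovered weights match `w_true`; changing `num_workers` leaves each update unchanged, which is why the batch size, not the worker count, is the meaningful knob.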

In "Measuring the Effects of Data Parallelism in Neural Network Training", we investigate the relationship between batch size and training time by running experiments on six different types of neural networks across seven different datasets using three different optimization algorithms ("optimizers"). In total, we trained over 100K individual models across ~450 workloads, and observed a seemingly universal relationship between batch size and training time across all workloads we tested. We also studied how this relationship varies with the dataset, neural network architecture, and optimizer, and found extremely large variation between workloads. Additionally, we are excited to share our raw data for further analysis by the research community. The data includes over 71M model evaluations making up the training curves of all 100K+ individual models we trained, and can be used to reproduce all 24 plots in our paper.

Universal Relationship Between Batch Size and Training Time
In an idealized data parallel system that spends negligible time synchronizing between processors, training time can be measured in the number of training steps (updates to the neural network's parameters). Under this assumption, we observed three distinct scaling regimes in the relationship between batch size and training time: a "perfect scaling" regime where doubling the batch size halves the number of training steps required to reach a target out-of-sample error, followed by a regime of "diminishing returns", and finally a "maximal data parallelism" regime where further increasing the batch size does not reduce training time, even assuming idealized hardware.

For all workloads we tested, we observed a universal relationship between batch size and training speed with three distinct regimes: perfect scaling (following the dashed line), diminishing returns (diverging from the dashed line), and maximal data parallelism (where the trend plateaus). The transition points between the regimes vary dramatically between different workloads.
Although the basic relationship between batch size and training time appears to be universal, we found that the transition points between the different scaling regimes vary dramatically across neural network architectures and datasets. This means that while simple data parallelism can provide large speedups for some workloads at the limits of today's hardware (e.g. Cloud TPU Pods), and perhaps beyond, some workloads require moving beyond simple data parallelism in order to benefit from the largest scale hardware that exists today, let alone hardware that has yet to be built. For example, in the plot above, ResNet-8 on CIFAR-10 cannot benefit from batch sizes larger than 1,024, whereas ResNet-50 on ImageNet continues to benefit from increasing the batch size up to at least 65,536.
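
As a back-of-the-envelope model of this relationship (hypothetical numbers, and ignoring the gradual diminishing-returns transition between the two extremes), the step count behaves roughly like:

```python
def steps_to_target(batch_size, work=2**20, min_steps=1_000):
    """Toy model of training steps needed to reach a target error:
    steps fall inversely with batch size (perfect scaling) until a
    workload-dependent floor (maximal data parallelism) takes over."""
    return max(min_steps, work // batch_size)
```

In this toy model, going from a batch of 256 to 512 halves the steps (4,096 to 2,048), while any batch past roughly `work / min_steps` examples no longer reduces training time at all; real workloads differ in where those transition points sit.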

Optimizing Workloads
If one could predict which workloads benefit most from data parallel training, then one could tailor workloads to make maximal use of the available hardware. However, our results suggest that this will often not be straightforward, because the maximum useful batch size depends, at least somewhat, on every aspect of the workload: the neural network architecture, the dataset, and the optimizer. For example, some neural network architectures can benefit from much larger batch sizes than others, even when trained on the same dataset with the same optimizer. Although this effect sometimes depends on the width and depth of the network, it is inconsistent between different types of network and some networks do not even have obvious notions of "width" and "depth". And while we found that some datasets can benefit from much larger batch sizes than others, these differences are not always explained by the size of the dataset—sometimes smaller datasets benefit more from larger batch sizes than larger datasets.

Left: A transformer neural network scales to much larger batch sizes than an LSTM neural network on the LM1B dataset. Right: The Common Crawl dataset does not benefit from larger batch sizes than the LM1B dataset, even though it is 1,000 times the size.
Perhaps our most promising finding is that even small changes to the optimization algorithm, such as allowing momentum in stochastic gradient descent, can dramatically improve how well training scales with increasing batch size. This raises the possibility of designing new optimizers, or testing the scaling properties of optimizers that we did not consider, to find optimizers that can make maximal use of increased data parallelism.

Future Work
Utilizing additional data parallelism by increasing the batch size is a simple way to produce valuable speedups across a range of workloads, but, for all the workloads we tried, the benefits diminished within the limits of state-of-the-art hardware. However, our results suggest that some optimization algorithms may be able to consistently extend the perfect scaling regime across many models and data sets. Future work could perform the same measurements with other optimizers, beyond the few closely-related ones we tried, to see if any existing optimizer extends perfect scaling across many problems.

Acknowledgements
The authors of this study were Chris Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig and George Dahl (Chris and Jaehoon contributed equally). Many researchers have done work in this area that we have built on, so please see our paper for a full discussion of related work.

Source: Google AI Blog


New guide: Rethink your eCommerce experience with Google Ad Manager

When shopping online, today’s consumers want seamless, convenient, and genuinely enjoyable experiences. They expect retailers to understand what they want, with advertising that puts the right products in front of them at just the right moments. So, how exactly are retailers making it all happen—especially with the modern path to purchase spanning so many different devices, formats, and channels?

Leading retailers are accelerating their businesses by using integrated technology and data to help them connect with shoppers at each step of their journey. With platforms that provide advanced insights, retailers are collaborating with brands on innovative ad formats and placements that drive action by delivering a frictionless experience across their sites and apps.

In our new guide, Transforming shopping experiences on your eCommerce platform, we show you how Ad Manager can help you deliver unique experiences at every stage of the shopping journey, and increase profits while serving your customers, partners, and employees.

Ready for a new approach to eCommerce? Download the full guide here and see how you can create experiences that work for all of your customers.

Helping Latino students learn to code

Growing up in Costa Rica, I was always passionate about creating things and solving puzzles. That’s what drove me to computer science. I saw it as an opportunity to explore my interests and open doors to new possibilities. It's that love and passion that eventually helped me get to Google, and to the United States, where I now live.

Computer science requires students to learn how to think in a totally new way. Getting into that mindset can be really hard for anyone, but it can be even tougher if you’re learning key phrases, concepts, and acronyms in an environment that feels different from your everyday life.

That’s why I’m proud to share that Google.org is making a $5 million grant to UnidosUS, the YWCA and the Hispanic Heritage Foundation. The grant will bring computer science (CS) education to over one million Latino students and their families by 2022 through CS curricula, including CS First, Google’s coding curriculum for elementary and middle school students. Additionally, it will support how students learn about computer science, helping them explore CS and offering culturally relevant resources to engage parents.

This $5 million grant is part of a new $25 million Google.org commitment in 2019 to increase Black and Latino students’ access to computer science (CS) and AI education across the US. This initiative will help these students develop the technical skills and confidence they need for the future, and help prepare them to succeed in the careers they pursue.

Even as a fluent English speaker, I can’t count the number of times people misunderstand me because I pronounce things differently, or the times it takes me a little longer to understand because my day-to-day work language is not my primary language. This language barrier is not the only barrier—students from underrepresented communities, especially those who are Black and Latino, often don’t feel represented or connected to their first introduction to the field.

While Black and Latino students have equal interest in CS education, they often face social barriers to learning CS, such as a lack of role models, and a lack of learning materials that reflect their lived experiences, like those that are in a language they understand. On top of these social barriers, these students often face structural barriers, such as not having equal access to learn CS in or outside of the classroom. 

Along with the grant, CS First is launching its first set of lessons in Spanish. In the first activity, "Animar un nombre," students choose the name of something or someone they care about and bring the letters to life using code. The second activity, "Un descubrimiento inusual,” encourages students to code a story about when two characters discover a surprising object.

Today’s announcement is an exciting part of Google.org’s work to support students who have historically been underrepresented in computer science. These grants to partner organizations will help Black and Latino students access materials and engage with role models who feel connected to their culture. We will also help create more opportunities for students to access the courses they need to continue their studies.

To me, the new Spanish coding lessons are more than just a fun way to learn coding. They are opportunities for entire communities of students to see themselves reflected in computer science education materials, perhaps for the first time. It’s our hope that students like the ones I met will use CS to create more inventions and opportunities for us all.

Introducing train ticket booking on Google Pay, powered by IRCTC

Users can now buy train tickets directly on Google Pay, at no additional cost

Search, book and cancel train tickets, with ease and convenience, right within the app

Starting today, train ticket booking for Indian Railways, powered by Indian Railway Catering and Tourism Corporation (IRCTC), is now available on Google Pay!


We’re constantly striving to make money transactions simpler for users in India, and this is another way Google Pay makes our users’ lives easier.

Now, all users in India with the latest Google Pay app using Android and iOS devices will be able to:


Search, book, cancel - All from Google Pay
You will be able to search and browse train options, and book or cancel tickets with your IRCTC account, at no additional cost, all within a fast, intuitive and reliable experience.


See availability and duration
You can easily see details like seat availability, journey duration, and travel times between your stations of choice, so it is easy to compare and choose between options.


Buy tickets at no additional cost
Book your train tickets on Google Pay without any extra charges at launch. So go ahead, update your app to check it out now!


We have already seen hugely encouraging user response for cab and bus ticket booking options on Google Pay with abhiBus, Goibibo, RedBus, Uber* and Yatra. Now with IRCTC on Google Pay, your travel is made simpler even on trains.  


In the year and a half since we launched Google Pay (then known as Tez), we’ve worked to develop new features that make payments simple for our Indian users. We’re thrilled that millions of Indians across the country, in towns and villages alike, are adopting Google Pay every month to recharge their mobile plans, pay bills, pay for movies, book cabs, buy bus tickets and more.


We will continue innovating on Google Pay to help you get to where you need to go by bus, by cab, or by train, and make everyday payment transactions a simple process for you.
Enjoy your ride!


Posted by Ambarish Kenghe, Director, Product Management, Google Pay

*only available on Android

Coming to India: Express, a faster way to back up with Google Photos

Since introducing Google Photos, we’ve aspired to be the home for all of your photos, helping you bring together a lifetime of memories in one place. To safely store your memories, we’ve offered two backup options: Original Quality and High Quality. In India specifically, however, we heard from people using the app that backups at times took longer or stalled because they don't always have frequent access to WiFi. In fact, we learned that over a third of people using Google Photos in India have some photos that hadn’t been backed up in over a month.


We want to make sure we’re building experiences in our app that meet the unique needs for people no matter where they are, so last December, we began offering a new backup option in Google Photos called Express backup to a small percentage of people using Google Photos on Android in India. Express provides faster backup at a reduced resolution, making it easier to ensure memories are saved even when you might have poor or infrequent WiFi connectivity.



Over the past week, we’ve started rolling out Express backup to more users in India and by the end of the week, Android users on the latest version of Google Photos should start seeing it as an option for backup. In addition to Express, you will still have the option to choose from the existing backup options: Original Quality and High Quality. And, in addition to rolling out Express as an additional backup option in India, we’re also introducing a new Data Cap option for backup. This gives users more granular daily controls for using cellular data to back up. People can select from a range of daily caps, starting at 5MB.


We’re starting to bring Express backup to dozens of other countries, rolling out slowly so we can listen to feedback and continue to improve the backup experience around the world.

Posted by Raja Ayyagari, Product Manager, Google Photos

Set start times and import reminders in Tasks

What’s changing

We’re adding three highly-requested features to Tasks. You can now:
  • Set a date and time for your tasks and receive notifications
  • Create repeating tasks
  • Import reminders into Tasks

Who’s impacted

End users

Why you’d use it

We’ve heard from you that you’d like Tasks to be the one destination to track what you need to do in G Suite. These features will help make sure all of your to-dos are in Tasks, and ensure that you can keep track of the deadlines associated with them. Additionally, importing reminders to Tasks can help your users if your organization is currently transitioning from Inbox to Gmail.

How to get started

  • Admins: No action needed
  • End users - Date/time and repeating tasks:
    • When you create or edit a task, you’ll now see a new “Add date/time” field.
    • After clicking on Add date/time, you can enter the date, time, and recurrence of this task.

  • End users - Import to tasks:
    • When you open Tasks on the web or your mobile app, you’ll see a prompt to copy your existing reminders over to Tasks. You can also trigger this manually by opening the overflow menu in the top right.
    • You’ll be able to select which list in Tasks you’d like to add them to, or create a new list.
    • You can also indicate whether or not you’d like these reminders to be deleted once they are copied.

Additional details

New time features
Every task now has two time-based properties, date and start time, that are available in the edit screen of each task.

These tasks will then show up in Google Calendar on the web at their specific time, as long as you have the “Tasks” calendar enabled on the left-hand side. If you’ve enabled mobile notifications, you'll also get notified for tasks at their scheduled dates and times in the Tasks mobile apps (Android/iOS). For tasks that have a date, but don’t have a time, you’ll get notifications at 9am local time.


If a task wasn’t marked as completed, you’ll get a second notification at 9am the day after a task was due.

Importing reminders into Tasks
This import tool will pull your reminders (from Inbox/Gmail, Calendar, or the Assistant) into Tasks.

When importing reminders into Tasks, we’ll copy over the title, date, time and recurrence of the reminder. Please note, reminders with locations associated will not be imported. Additionally, this is a one-time import and not a constant sync.

Availability

G Suite editions
  • Available to all G Suite editions

On/off by default?
  • Both features will be ON by default.
Stay up to date with G Suite launches