An updated plan & resources for upcoming changes to Groups settings

What’s changing 

Based on your feedback following our previous announcement, Changes to Google Groups settings starting May 6, 2019, we’re making the following changes:


  • Additional improvements to the Groups Settings API to help you plan for and manage the changes (see more details below). 
  • “Post as the group” will remain a separate setting - it will not be merged as we previously stated. 
  • “New member posts are moderated” will remain an option for moderation - it will not be deprecated as we previously stated. 
  • “Take topics” will be merged into the content metadata settings.


To help you plan for these changes, we’re also sharing a Google Sheet which can help identify what the new settings will be for a group. In addition, we’re changing the rollout schedule so the new settings will start to take effect in Scheduled Release domains on June 3, four weeks after Rapid Release domains.

Use our Help Center to see details of these changes and see how you can prepare for the update.

Who’s impacted 

Admins and end users

Why you’d use it 

We hope these resources will help you better understand and prepare for the changes to Groups settings.

How to get started 




Additional details 

Groups API improvements 
On March 25th, 2019, we’ll be updating the Groups Settings API. These updates align the API with the product changes we’re making (outlined in our previous announcement and this post) and make it easier to use the API to prepare for them. API updates include:


  • All settings that are to be merged will be exposed via the API. This means you can audit your current groups via the API and make changes to ensure the new settings are inferred as you want them to be. 
  • New merged settings will be exposed via the API. This means you can query the new merged settings and confirm they will be inferred as expected. Note that they will be read-only (i.e., inferred values) until launch, at which point they will also support write. 
  • New bit for custom roles exposed. If you use custom roles, API queries may return incorrect values for the merged settings. The new bit will indicate whether a group uses custom roles for one of the merged settings, helping you identify groups that require manual review. 
  • New bit for collaborative inbox exposed. We will expose a new bit that represents whether collaborative inbox will be enabled for a group. If you expect your group to have collaborative inbox functionality (e.g. topic assignment), ensure that this bit is true. You can do this by enabling any of the collaborative inbox features. Note that it will be read-only (i.e., an inferred value) until launch, at which point it will also support write. 
  • New bit for who can discover group exposed. We will expose a new bit that represents who the group will be visible to. This setting will replace show in group directory. Note that it will be read-only (i.e., an inferred value) until launch, at which point it will also support write. 


See our Cloud blog post for more details on these API changes and how to use them.
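
To make the audit concrete, here is a minimal sketch of how you might query a group’s settings with the Groups Settings API using the Python client library. It assumes a service account with domain-wide delegation; the admin and group addresses are placeholders, and the exact names of the new fields are our assumptions based on this announcement, so confirm them in the API reference before relying on them.

  from google.oauth2 import service_account
  from googleapiclient.discovery import build

  SCOPES = ["https://www.googleapis.com/auth/apps.groups.settings"]

  # Service account credentials, delegated to an admin (hypothetical addresses).
  creds = service_account.Credentials.from_service_account_file(
      "credentials.json", scopes=SCOPES
  ).with_subject("admin@example.com")

  service = build("groupssettings", "v1", credentials=creds)

  # Fetch the current settings for one group to audit before the merge.
  settings = service.groups().get(groupUniqueId="team@example.com").execute()

  # Inspect the new read-only bits described above. Field names are
  # assumptions; check the API documentation for the final names.
  for field in ("whoCanDiscoverGroup",          # replaces show in group directory
                "enableCollaborativeInbox",     # collaborative inbox bit
                "customRolesEnabledForSettingsToBeMerged"):  # custom roles bit
      print(field, "=", settings.get(field))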

“Post as the group” will not be merged into the content moderator setting 
Previously we stated that this setting would be merged. However, you told us that it was valuable and we should keep it separate, so we’re updating our plans and will not merge it.

“New member posts are moderated” will continue to be supported 
The “New member posts are moderated” setting, exposed in the API as MODERATE_NEW_MEMBERS, will continue to be supported as a value for moderation.

“Take topics” will be merged with content metadata 
We previously suggested that “Take topics” would remain a standalone setting. However, this will now be merged as part of the content metadata settings.

New worksheet to help visualize changes
We’ve created this Google Sheet which will show you what the new settings will be for any group once you input its current settings. This can help you check that the settings will be inferred as you want them to be.

Helpful links 




Availability 

Rollout details 


G Suite editions 
Available to all G Suite editions.

On/off by default? 
This feature will be ON by default.

Minor updates related to the Activity Dashboard in Editors and the Admin console

Quick launch summary 

We’re making two minor updates to the wording in the Activity Dashboard in Editors and related settings within the Admin console. As we launch more features for the Activity Dashboard, these updates will help keep things clear for both admins and end users.

In the Admin console: 
In the Activity dashboard settings, where it previously read “Access to activity dashboard” in the left-hand navigation, it will now read “Access to view history.”



From here, admins can turn users’ access to Viewers and Viewer trend activity in the dashboard ON or OFF. To learn more about file activity visibility, see this article in the Help Center.

In Editors: 
Within the Activity dashboard, the “View time” tab has been renamed to “Viewers.” From this tab, document owners can see the last time users with edit access viewed the file, and follow up. To learn more about view history in Docs, Sheets, and Slides, see this Help Center article. 

We’re also changing the icon for the Viewers tab: previously it was a clock, now it will be a person. This better reflects the purpose of the tab, which is viewer history, not the time viewers spent in the document.



Availability 

Rollout details
G Suite editions 
  • Available to all G Suite editions. 


Stadia: a new way to play

For 20 years, Google has worked to put the world’s information at your fingertips. Instant delivery of that information is made possible through our data centre and network capabilities, and now we're using that technology to change how you access and enjoy video games.

Stadia is a new video game platform, delivering instant access to your favourite games on any type of screen—whether it’s a TV, laptop, desktop, tablet or mobile phone. Our goal is to make those games available in resolutions up to 4K and 60 frames per second with HDR and surround sound. We’ll be launching later this year in select countries including the U.S., Canada, U.K. and much of Europe.

To build Stadia, we’ve thought deeply about what it means to be a gamer and worked to converge two distinct worlds: people who play video games and people who love watching them. Stadia will lift restrictions on the games we create and play—and the communities who enjoy them.

Advanced game streaming 

Using our globally connected network of Google data centres, Stadia will free players from the limitations of traditional consoles and PCs.

When players use Stadia, they'll be able to access their games at all times, and on virtually any screen. And developers will have access to nearly unlimited resources to create the games they’ve always dreamed of. It’s a powerful hardware stack combining server-class GPU, CPU, memory and storage, and with the power of Google’s data centre infrastructure, Stadia can evolve as quickly as the imagination of game creators.

Data centres make Stadia possible, but what sets the system apart is how it works with other Google services. In a world where there are more than 200 million people watching game-related content daily on YouTube, Stadia makes many of those games playable with the press of a button. If you watch one of your favourite creators playing Assassin's Creed Odyssey, simply click the “play now” button. Seconds later, you’ll be running around ancient Greece in your own game, on your own adventure—no downloads, no updates, no patches and no installs.

But what’s a gaming platform without its own dedicated controller? Enter the Stadia controller*.


When we designed the Stadia controller, we listened to gamers about what they wanted in a controller. First, we made sure to develop a direct connection from the Stadia controller to our data centre through Wi-Fi for the best possible gaming performance. The controller also includes a button for instant capture, saving and sharing gameplay in stunning 4K resolution. And it comes equipped with a Google Assistant button and built-in microphone.

Using Google’s vast experience, reach and decades of investment, we’re making Stadia a powerful gaming platform for players, developers and YouTube content creators of all sizes. We’re building a playground for every imagination.

*This device is a prototype unit and cannot be marketed, sold, leased, or distributed until it complies with applicable essential requirements and obtains required legal authorizations.

Dev Channel Update for Desktop

The dev channel has been updated to 74.0.3729.22 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Krishna Govind
Google Chrome


It’s now easier to insert images in cells in Google Sheets

What’s changing  

We’ve made it simpler to add images inside cells in Google Sheets. Previously, it was only possible to insert publicly hosted images into a cell using the IMAGE function.

Now, you can insert any image, like those saved on your desktop or mobile device, into a cell by using the IMAGE function or the new option found inside the Insert menu.


Who’s impacted 

End users

Why you’d use it 

You’ve told us this feature would be helpful for many tasks, like: 
  • Adding receipts to expense-tracking spreadsheets 
  • Adding icons to icon libraries 
  • Adding logos to better brand your resources 
  • Adding product images to inventory lists, and more. 

How to get started 

  • Admins: No action needed. 
  • End users: You can add images directly to cells in two ways on Desktop: 
    • Use the IMAGE() function (see the formula examples after these steps) 
    • Via the menu bar at the top of a Sheet: Insert > Image > Image in cell 
      • Select image from Drive or upload one. 

  • On Mobile: 
    • Tap once on a cell to select 
    • Tap again to bring up the menu, then tap the “+” at the top of the screen > Image > Image in cell 
    • Select an image from the options presented to you. 
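
As a quick illustration of the formula route: IMAGE takes an image URL and an optional sizing mode (1 resizes the image to fit the cell, 2 stretches it to fill the cell, 3 keeps the original size, and 4 lets you set a custom height and width). The URL below is a placeholder:

  =IMAGE("https://www.example.com/logo.png")
  =IMAGE("https://www.example.com/logo.png", 4, 50, 50)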

Additional details 

You can have multiple cells containing an image in a Sheet, but note that only one image per cell is possible at the moment. 

Images inside cells will be associated with a row and move along with the data—so, if you move rows, filter or sort them, the images will move with the content in the row, unlike previously when images would sit on top of the grid. 

Using the formatting and alignment tools, you can pin the image to a specific corner of the cell or set the alignment how you’d like. By default, images will align to the bottom left corner of the cell.

Helpful links 


Availability 

Rollout details 
G Suite editions 
  • Available to all G Suite editions. 

On/off by default? 
  • This feature will be ON by default. 

Measuring the Limits of Data Parallel Training for Neural Networks



Over the past decade, neural networks have achieved state-of-the-art results in a wide variety of prediction tasks, including image classification, machine translation, and speech recognition. These successes have been driven, at least in part, by hardware and software improvements that have significantly accelerated neural network training. Faster training has directly resulted in dramatic improvements to model quality, both by allowing more training data to be processed and by allowing researchers to try new ideas and configurations more rapidly. Today, hardware developments like Cloud TPU Pods are rapidly increasing the amount of computation available for neural network training, which raises the possibility of harnessing additional computation to make neural networks train even faster and facilitate even greater improvements to model quality. But how exactly should we harness this unprecedented amount of computation, and should we always expect more computation to facilitate faster training?

The most common way to utilize massive compute power is to distribute computations between different processors and perform those computations simultaneously. When training neural networks, the primary ways to achieve this are model parallelism, which involves distributing the neural network across different processors, and data parallelism, which involves distributing training examples across different processors and computing updates to the neural network in parallel. While model parallelism makes it possible to train neural networks that are larger than a single processor can support, it usually requires tailoring the model architecture to the available hardware. In contrast, data parallelism is model agnostic and applicable to any neural network architecture – it is the simplest and most widely used technique for parallelizing neural network training. For the most common neural network training algorithms (synchronous stochastic gradient descent and its variants), the scale of data parallelism corresponds to the batch size, the number of training examples used to compute each update to the neural network. But what are the limits of this type of parallelization, and when should we expect to see large speedups?
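
As a rough illustration of what data parallelism with synchronous stochastic gradient descent means in practice, here is a toy NumPy sketch (our own, not from the paper): each simulated worker computes a gradient on its shard of the batch, and the averaged gradient produces a single shared update, so the effective batch size is the number of workers times the per-worker batch.

  import numpy as np

  rng = np.random.default_rng(0)
  w = np.zeros(3)                      # shared model parameters
  X = rng.normal(size=(1024, 3))       # toy dataset
  y = X @ np.array([1.0, -2.0, 0.5])   # toy targets

  def grad(w, xb, yb):
      """Gradient of mean squared error on one worker's shard."""
      return 2 * xb.T @ (xb @ w - yb) / len(yb)

  num_workers, per_worker_batch, lr = 4, 64, 0.05
  for step in range(100):
      idx = rng.choice(len(X), num_workers * per_worker_batch, replace=False)
      shards = np.split(idx, num_workers)
      # Each worker computes a gradient on its shard in parallel (simulated here).
      grads = [grad(w, X[s], y[s]) for s in shards]
      w -= lr * np.mean(grads, axis=0)  # one synchronized, averaged update

  print(w)  # approaches [1.0, -2.0, 0.5]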

In "Measuring the Effects of Data Parallelism in Neural Network Training", we investigate the relationship between batch size and training time by running experiments on six different types of neural networks across seven different datasets using three different optimization algorithms ("optimizers"). In total, we trained over 100K individual models across ~450 workloads, and observed a seemingly universal relationship between batch size and training time across all workloads we tested. We also study how this relationship varies with the dataset, neural network architecture, and optimizer, and found extremely large variation between workloads. Additionally, we are excited to share our raw data for further analysis by the research community. The data includes over 71M model evaluations to make up the training curves of all 100K+ individual models we trained, and can be used to reproduce all 24 plots in our paper.

Universal Relationship Between Batch Size and Training Time
In an idealized data parallel system that spends negligible time synchronizing between processors, training time can be measured in the number of training steps (updates to the neural network's parameters). Under this assumption, we observed three distinct scaling regimes in the relationship between batch size and training time: a "perfect scaling" regime where doubling the batch size halves the number of training steps required to reach a target out-of-sample error, followed by a regime of "diminishing returns", and finally a "maximal data parallelism" regime where further increasing the batch size does not reduce training time, even assuming idealized hardware.
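
One simple way to picture these regimes (an illustrative functional form with made-up numbers, not the paper's measurements) is steps(b) = s_min + E/b: for small batch sizes b the E/b term dominates and doubling b halves the steps (perfect scaling), at intermediate b the benefit tapers off (diminishing returns), and for large b the curve flattens at the floor s_min (maximal data parallelism).

  # Illustrative only: idealized steps-to-target curve with the three regimes.
  def steps_to_target(batch_size, s_min=3_000, examples_needed=50_000_000):
      return s_min + examples_needed / batch_size

  for b in [2**k for k in range(5, 18, 2)]:
      print(f"batch size {b:>6}: ~{steps_to_target(b):,.0f} steps to target error")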

[Figure: For all workloads we tested, the relationship between batch size and training speed shows three distinct regimes: perfect scaling (following the dashed line), diminishing returns (diverging from the dashed line), and maximal data parallelism (where the trend plateaus). The transition points between the regimes vary dramatically between workloads.]
Although the basic relationship between batch size and training time appears to be universal, we found that the transition points between the different scaling regimes vary dramatically across neural network architectures and datasets. This means that while simple data parallelism can provide large speedups for some workloads at the limits of today's hardware (e.g. Cloud TPU Pods), and perhaps beyond, some workloads require moving beyond simple data parallelism in order to benefit from the largest scale hardware that exists today, let alone hardware that has yet to be built. For example, in the plot above, ResNet-8 on CIFAR-10 cannot benefit from batch sizes larger than 1,024, whereas ResNet-50 on ImageNet continues to benefit from increasing the batch size up to at least 65,536.

Optimizing Workloads
If one could predict which workloads benefit most from data parallel training, then one could tailor their workloads to make maximal use of the available hardware. However, our results suggest that this will often not be straightforward, because the maximum useful batch size depends, at least somewhat, on every aspect of the workload: the neural network architecture, the dataset, and the optimizer. For example, some neural network architectures can benefit from much larger batch sizes than others, even when trained on the same dataset with the same optimizer. Although this effect sometimes depends on the width and depth of the network, it is inconsistent between different types of network and some networks do not even have obvious notions of "width" and "depth". And while we found that some datasets can benefit from much larger batch sizes than others, these differences are not always explained by the size of the dataset—sometimes smaller datasets benefit more from larger batch sizes than larger datasets.

[Figure: Left: a Transformer neural network scales to much larger batch sizes than an LSTM neural network on the LM1B dataset. Right: the Common Crawl dataset does not benefit from larger batch sizes than the LM1B dataset, even though it is 1,000 times the size.]
Perhaps our most promising finding is that even small changes to the optimization algorithm, such as allowing momentum in stochastic gradient descent, can dramatically improve how well training scales with increasing batch size. This raises the possibility of designing new optimizers, or testing the scaling properties of optimizers that we did not consider, to find optimizers that can make maximal use of increased data parallelism.

Future Work
Utilizing additional data parallelism by increasing the batch size is a simple way to produce valuable speedups across a range of workloads, but, for all the workloads we tried, the benefits diminished within the limits of state-of-the-art hardware. However, our results suggest that some optimization algorithms may be able to consistently extend the perfect scaling regime across many models and data sets. Future work could perform the same measurements with other optimizers, beyond the few closely-related ones we tried, to see if any existing optimizer extends perfect scaling across many problems.

Acknowledgements
The authors of this study were Chris Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig and George Dahl (Chris and Jaehoon contributed equally). Many researchers have done work in this area that we have built on, so please see our paper for a full discussion of related work.

Source: Google AI Blog


New guide: Rethink your eCommerce experience with Google Ad Manager

When shopping online, today’s consumers want seamless, convenient, and genuinely enjoyable experiences. They expect retailers to understand what they want, with advertising that puts the right products in front of them at just the right moments. So, how exactly are retailers making it all happen — especially with the modern path to purchase spanning so many different devices, formats, and channels?

Leading retailers are accelerating their businesses by using integrated technology and data to help them connect with shoppers at each step of their journey. With platforms that provide advanced insights, retailers are collaborating with brands on innovative ad formats and placements that drive action by delivering a frictionless experience across their sites and apps.

In our new guide, Transforming shopping experiences on your eCommerce platform, we show you how Ad Manager can help you deliver unique experiences at every stage of the shopping journey, and increase profits while serving your customers, partners, and employees.

Ready for a new approach to eCommerce? Download the full guide here and see how you can create experiences that work for all of your customers.

Helping Latino students learn to code

Growing up in Costa Rica, I was always passionate about creating things and solving puzzles. That’s what drove me to computer science. I saw it as an opportunity to explore my interests and open doors to new possibilities. It's that love and passion that eventually helped me get to Google, and to the United States, where I now live.

Computer science requires students to learn how to think in a totally new way. Getting into that mindset can be really hard for anyone, but it can be even tougher if you’re learning key phrases, concepts, and acronyms in an environment that feels different from your everyday life.

That’s why I’m proud to share that Google.org is making a $5 million grant to UnidosUS, the YWCA and the Hispanic Heritage Foundation. The grant will bring computer science (CS) education to over one million Latino students and their families by 2022 with computer science curricula, including CS First, Google’s coding curriculum for elementary and middle school students. Additionally, it will improve how students experience learning computer science, helping them explore CS and offering culturally relevant resources to engage parents.

This $5 million grant is part of a new $25 million Google.org commitment in 2019 to increase Black and Latino students’ access to computer science (CS) and AI education across the US. This initiative will help these students develop the technical skills and confidence they need for the future, and help prepare them to succeed in the careers they pursue.

Even as a fluent English speaker, I can’t count the number of times people misunderstand me because I pronounce things differently, or the times it takes me a little longer to understand because my day-to-day work language is not my primary language. This language barrier is not the only barrier—students from underrepresented communities, especially those who are Black and Latino, often don’t feel represented or connected to their first introduction to the field.

While Black and Latino students have equal interest in CS education, they often face social barriers to learning CS, such as a lack of role models and a lack of learning materials that reflect their lived experiences, including materials in a language they understand. On top of these social barriers, these students often face structural barriers, such as not having equal access to learn CS in or outside of the classroom. 

Along with the grant, CS First is launching its first set of lessons in Spanish. In the first activity, “Animar un nombre” (“Animate a name”), students choose the name of something or someone they care about and bring the letters to life using code. The second activity, “Un descubrimiento inusual” (“An unusual discovery”), encourages students to code a story about two characters who discover a surprising object.

Today’s announcement is an exciting part of Google.org’s work to support students who have historically been underrepresented in computer science. These grants to partner organizations will help Black and Latino students access materials and engage with role models who feel connected to their culture. We will also help create more opportunities for students to access the courses they need to continue their studies.

To me, the new Spanish coding lessons are more than just a fun way to learn coding. They are opportunities for entire communities of students to see themselves reflected in computer science education materials, perhaps for the first time. It’s our hope that students like the ones I met will use CS to create more inventions and opportunities for us all.

Introducing train ticket booking on Google Pay, powered by IRCTC

Users can now buy train tickets directly on Google Pay, at no additional cost

Search, book and cancel train tickets, with ease and convenience, right within the app

Starting today, train ticket booking for Indian Railways, powered by Indian Railway Catering and Tourism Corporation (IRCTC), is now available on Google Pay!


We’re constantly striving to bring Indian users helpful ways to make money transactions simpler, and this is another way Google Pay makes our users’ lives easier.

Now, all users in India with the latest Google Pay app on Android or iOS devices will be able to:


Search, book, cancel - all from Google Pay
You will be able to search and browse train options, and book or cancel tickets with your IRCTC account, at no additional cost, in a fast, intuitive and reliable experience.


See availability and duration
You can easily see details like seat availability, journey duration, and travel times between your stations of choice, so it is easy to compare and choose between options.


Buy tickets at no additional cost
Book your train tickets on Google Pay without any extra charges at launch. So go ahead, update your app to check it out now!
[Image: Search, select and book right within Google Pay]


We have already seen a hugely encouraging user response to the cab and bus ticket booking options on Google Pay with abhiBus, Goibibo, RedBus, Uber* and Yatra. Now, with IRCTC on Google Pay, train travel is simpler too.  


In the year and a half since we launched Google Pay (then known as Tez), we’ve worked to develop new features that make payments simple for our Indian users. We’re thrilled that millions of Indians across the country, in towns and villages alike, are adopting Google Pay every month to recharge their mobile plans, pay bills, pay for movies, book cabs, buy bus tickets and more.


We will continue innovating on Google Pay to help you get where you need to go, whether by bus, cab or train, and to make everyday payments simple for you.
Enjoy your ride!


Posted by Ambarish Kenghe, Director, Product Management, Google Pay

*only available on Android