Enhanced menus in Google Docs improve findability of key features on desktop

What’s changing 

We’re updating the menus in Google Docs to make it easier to locate the most commonly used features. In this update you’ll notice: 

  • Shortened menus for better navigation 
  • Reorganization for more intuitive feature location 
  • Prominent icons for faster recognition 


Enhanced menu




Who’s impacted 

End users 

Why it’s important 

The new design improves findability of key features, making it quicker and easier to use Docs. Note that existing functionality isn't changing with this launch. 

For features that have been reorganized, we hope that their new menu location will be more intuitive and make it easier and faster to navigate the product. In particular, Apps Script-related functionality is now grouped under the new “Extensions” menu. This includes access to the Apps Script IDE as well as management of add-ons. 

Getting started 

  • Admins: There is no admin control for this feature. 
  • End users: This feature will be available by default. Visit the Help Center to learn more about using Google Docs. 

Rollout pace 

Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers 
  • Available to users with personal Google Accounts 

 Resources 

How data drives a hyperlocal news strategy in Los Angeles

Editor’s Note from Ludovic Blecher, Head of Google News Initiative Innovation: The GNI Innovation Challenge program is designed to stimulate forward-thinking ideas for the news industry. The story below by Gabriel Kahn, professor at USC Annenberg School of Journalism, is part of an innovator series sharing inspiring stories and lessons from funded projects.

The Crosstown team of 10, represented by a cartoon lineup that includes their names.

One year ago, our team at the University of Southern California started the Crosstown Neighborhood Data Project. Rapidly expanding news deserts, areas that receive no regular news coverage, can be seen across the US. Small-town newspapers are drying up, and toxic “pink-slime” pseudo-journalism is seeping in. These news deserts are growing even in big cities. Los Angeles has lost four local papers recently, and many neighborhoods are overlooked by the news outlets that remain. That is why we started covering every corner of Los Angeles with a four-person editorial team.

It sounds impossible, but it’s not. Here’s how we did it, and what we learned. 

Each week, Crosstown sends out 110 unique email newsletters, one for each neighborhood in this city of four million. The newsletter features brief news stories that hit people where they live: charts and graphics on the number of new COVID infections and vaccination rates, plus pieces about housing, crime and traffic in each neighborhood. 

How do we do this? Through data. We’ve been collecting a trove of information on how Los Angeles lives, works and gets around. All this data is free, but much of it is hard to read and is stored on clunky local government websites. We scrape the data and organize it by neighborhood. That way we can quickly tell how many homes were burglarized in Hollywood last month, or figure out the neighborhood where the most new buildings are going up.

We then write one template for our newsletter, and our custom-built software creates 110 different versions, each with the proper data, visualizations and context for that neighborhood. This wasn’t easy. Our software engineering team spent a year building it, funded by the Google News Initiative Innovation Challenge. We’ve now sent out more than 60 sets of weekly newsletters and learned a great deal. 
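Conceptually, the system groups citywide records by neighborhood and then renders the same template once per neighborhood. Here is a minimal sketch of that idea in Python; the data file, field names and template are hypothetical stand-ins, not Crosstown’s actual custom-built software:

```python
import csv
from collections import defaultdict
from string import Template  # stdlib templating keeps the sketch dependency-free

# Hypothetical input: one scraped row per incident, with "neighborhood" and
# "category" columns, as might come from a city open-data portal.
burglaries_by_hood = defaultdict(int)
with open("la_crime_last_month.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["category"] == "burglary":
            burglaries_by_hood[row["neighborhood"]] += 1

# One template, rendered once per neighborhood: 110 newsletters from one source.
newsletter = Template("This week in $hood: $count homes were burglarized last month.")

for hood, count in sorted(burglaries_by_hood.items()):
    body = newsletter.substitute(hood=hood, count=count)
    print(body)  # in production, this would feed each neighborhood's email send
```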

Increased engagement

Our biggest takeaway is that people truly engage with news when it’s about their neighborhood. The open rates on our newsletter are over 70%. Most weeks they exceed 80%. The lowest we ever recorded was 55%. Compare that to the industry standard for news-related newsletter open rates: 22% according to Mailchimp, or just under 24% according to Campaign Monitor.

Why? First, people can’t get this news anywhere else: no other news organization delivers this level of hyper-localized data. Second, it’s news people want. Currently, there is a widespread impression that Los Angeles is in the midst of a crime wave. Giving people verified stats about their neighborhood and explaining the broader context, such as whether a particular type of crime is rising or falling and how their area compares to others in the city, is a vital public service. 

For example, one of our newsletters included the number of building demolitions that had taken place in each neighborhood. A reader then had hard data for her Hollywood neighborhood, which she took to city planners, testifying publicly on behalf of endangered historic sites.

Our newsletter also hits the inbox with an appealing subject line, such as “Omicron’s impact on Koreatown,” or, “How much illegal dumping is happening in Venice?” When you live in a big city, it can be difficult to get a read on your own neighborhood. A weekly email with some basic information can be invaluable. 

We’ve found it’s also a great way to engage the audience. Some neighborhoods are battling pressing issues such as traffic congestion or rapidly rising rents. When we cover those issues in a story, readers write back wanting to know more. This allows us to figure out who cares about what across an entire city. In the year since we launched, traffic to the website has increased by 30%.

More importantly, we’ve seen a tenfold increase in audience members reaching back out to us. We know which neighborhoods they live in, because they reply directly to their neighborhood’s newsletter. This helps us understand which issues are most important to people in different parts of the city.

We’re only at the beginning of understanding what kind of hyperlocal stories we can tell. But our goals for this year lie beyond Los Angeles. We’re now piloting our project with three other newsrooms and we’re hoping to find even more that want to try this technology and approach. We believe using data in this way can be a powerful tool to help newsrooms reach and engage new audiences without raising costs.

Chrome for iOS Update

Hi, everyone! We've just released Chrome 101 (101.0.4951.44) for iOS; it'll become available in the App Store in the next few hours.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Harry Souders

Google Chrome

Get more information about your apps in Google Play

We work hard to keep Google Play a safe, trusted space for people to enjoy the latest Android apps. Today, we’re launching a new feature, the Data safety section, where developers are required to give people more information about how apps collect, share and secure users’ data. Users will start seeing the Data safety section in Google Play today, and developers are required to complete this section for their apps by July 20. As developers update their apps’ functionality or change their data handling practices, they will update the apps’ Data safety section to reflect the latest information.

A unified view of app safety in Google Play

We heard from users and app developers that displaying the data an app collects, without additional context, is not enough. Users want to know for what purpose their data is being collected and whether the developer is sharing user data with third parties. In addition, users want to understand how app developers are securing user data after an app is downloaded. That’s why we designed the Data safety section to allow developers to clearly mark what data is being collected and for what purpose it's being used. Users can also see whether the app needs this data to function or if this data collection is optional.

Here’s the information developers can show in the Data safety section:

  • Whether the developer is collecting data and for what purpose.
  • Whether the developer is sharing data with third parties.
  • The app’s security practices, like encryption of data in transit and whether users can ask for data to be deleted.
  • Whether a qualifying app has committed to following Google Play’s Families Policy to better protect children in the Play store.
  • Whether the developer has validated their security practices against a global security standard (specifically, the Mobile Application Security Verification Standard, or MASVS).
Android phone showing the Data safety section of an app on Google Play

Putting users in control, before and after you download

Giving users more visibility into how apps collect, share and secure their data through the Data safety section is just one way we’re keeping Android users and the ecosystem safe.

We’ve also worked hard to give users control over installed apps through simple permissions features. For example, when an app asks to access your location, users can quickly and easily decide whether they want to grant that permission: for one-time use, only while using the app, or all the time. For sensitive permissions like camera, microphone, or location data, people can go to the Android Privacy dashboard to review data access by apps.

Apps should help users explore the world, connect with loved ones, do work, learn something new, and more without compromising user safety. The new Data safety section, in addition to Google Play’s existing safety features, gives people the visibility and control they need to enjoy their apps.

To learn more about Google Play’s Data safety section, check out this guide.

A productivity expert’s tips for returning to the office

Two years ago, as many of us were thrown into remote work, I wrote a blog post about tips for working from home. Now, as many of us find ourselves returning to the office or preparing to do so soon, I wanted to talk about a few ways we can transition productively to (yet another) new (er, maybe old?) working environment where some of us are in the office, some aren’t…or some combination of the above.

Here are my top 10 tips for being productive in a hybrid work environment:

  1. Make sure people know where you are. Nothing screams inefficiency more than hundreds of emails and calendar invites (and invite changes) where everyone is trying to figure out who is where, when and on what days. Take the guesswork out of it by setting your working location and your working hours in Calendar, and RSVP to meetings with your location.
  2. Add other responsibilities to Google Calendar. Do you have commute time? School drop-off? Moving to a different office campus mid-day? Add it to your Calendar now; consider making these OOO events so they auto-decline if they are scheduled over.
  3. Optimize your calendar for connection and focus. Chances are good that you either find it easier to focus at home or in the workplace. As you consider the hybrid work options available to you, think about where you want to get your best focused work done and build it into your calendar. Wherever it happens, minimize distractions (mute notifications, use noise-canceling headphones) and schedule Focus Time in your calendar so colleagues know that you’re heads down.
  4. Keep your “hot spots” and your “not spots.” Our brain makes associations with the sights, sounds and smells of places and when we do an activity in the same place regularly, it makes it easier to "get in the zone" each time we go back to that same spot. Keep “hot spots” in your house and at work where you do certain things. “I always code at my desk,” “I always answer customer emails from this cafe in my building,” “I always sit on my front porch to read industry news.” Your brain will associate those spots with those things and make switching between tasks easier. Similarly, safeguard your “not spots” — places you NEVER work. If you’ve never worked in a spot, like your bedroom, it’s easy to relax there because your brain only associates it with relaxation.
  5. Group meetings by type, content and location wherever possible. Many people think of their schedule like a puzzle: “Sure, wherever you find a 30-minute slot, throw a meeting in there!” But your energy and focus change (and are challenged) when you bounce from a one-on-one meeting to a brainstorm to a project check-in…the list goes on. Be intentional about when you place meetings as much as possible. Group meetings of similar type and topic, especially given the new variety in location. Theme your days and minimize switching topics and types of meetings. Call Tuesday your “Project A” day, and place work time and meetings for that project on that day. If Wednesday morning is your manager’s staff meeting, block time afterwards to digest updates and trickle down information to your team as needed.
Two side-by-side images: one shows a calendar with various color-coded, unorganized meetings, labeled “what most people do”; the other shows all calendar meetings organized by color in blocks, labeled “time grouping.”

6. Build in some things that happen every day. To give yourself some consistency, try finding 1-3 things that you do every day, no matter where you’re working. If you commute from 8:15 a.m.-9 a.m. into the office and listen to an audiobook, go on a walk and listen to your book during the same time period. If you always take a walk at home after lunch, do it at work, too. Always get an afternoon coffee at the office? Make yourself a latte at home. These signals help you keep your flow and make it a consistent “work day” no matter where you are.

7. Make a daily plan every night. At the beginning of the pandemic, I saw a surge in the use of planning resources. People had gotten used to “showing up” in an office every morning, then deciding what to do with their time. Working from home required people to figure out exactly what they were doing and when. This type of planning is still important as you bounce back and forth between different work environments with different types of schedules. Fill out your daily plan *the night before* to make the most of the following day. What you intend to do will marinate while you sleep and you’ll approach the day focused and intentional.

8. A new “season” of work calls for spring cleaning. A new schedule at the office, much like the New Year or a new job, is a great time for a “spring cleaning” of your work life. Do you need to keep that recurring meeting you set up two years ago to keep in touch with people you'll now see in the office? Should your team be meeting in person on a different day given everyone’s locations? Do you need to lighten up your schedule to make more time for travel?

9. Write down three things you learned from working from home and take them with you. Working from home was a time of discovery for many of us. Let’s not lose those insights as we head back to the office. Maybe you realized you work best after a mid-morning workout, or that you get burnt out if you start work before 9 a.m. Take a moment to write down three things you learned and build them into your new schedule.

10. Take time to adjust. Two years ago, no one had any idea we’d be at home for so long. And during that time, many of us became great at being productive while working remotely. Others realized they definitely wanted to go back to the office. Whatever your preference, we gave each other grace. Let’s do the same this time as many of us transition yet again, and continue extending it to those who will remain remote.

New features to grow your business with Performance Max

As people move quickly between channels and devices, today’s consumer journey is always-on and rarely straightforward. Automation is helping businesses meet their customers at the right moment along this complex journey. Whether you’re looking to stay ahead of shifting consumer behavior or unlock incremental conversions from new places, Performance Max finds the optimal mix of Google Ads inventory and formats to help you drive better results.

In the coming weeks, we’re introducing new features to help you acquire new customers, better understand performance and start upgrading your Smart Shopping campaigns to Performance Max in just one click.

Focus on new customers

Performance Max optimizes results based on your conversion goals and looks for the highest-ROI conversion opportunities — regardless of channel. The new customer acquisition goal in Performance Max is rolling out over the next few weeks for all advertisers looking to generate leads or increase online sales. This was previously available for retailers using Smart Shopping campaigns and is now expanding to more advertiser goals in Performance Max.

This goal will allow you to either bid more for new customers compared to existing customers, or focus your optimizations on new customers only while maintaining your cost efficiency. You’ll also have more flexible ways to identify new customers, like providing your own first-party data through Customer Match lists, setting up conversion tags and using Google’s autodetection method.

Guide your campaigns with helpful insights

The Insights page helps you understand decisions guided by automation and find levers to improve results in your campaigns. In the coming weeks, we’re rolling out consumer interest insights to all advertisers to help you uncover search themes that are delivering conversions. Two new types of insights are also arriving for Performance Max.

With asset audience insights, you’ll be able to better understand how your text, image and video assets resonate with specific customer segments. For instance, if you’re an outdoor retailer running a campaign for bikes, you may find that exercise enthusiasts engage more with images of people mountain biking rather than product images of the bike itself. Using these insights, you can tailor your creative and influence your broader marketing strategy.

After you create your Performance Max campaigns, diagnostic insights will provide a snapshot of outstanding setup issues preventing your ads from showing. Each issue will include suggestions for resolving it, so you can quickly and easily get your campaign up and running. For example, if your creative assets are disapproved, you’ll be prompted to fix them so you can start serving your ads and avoid missing out on conversion opportunities.

Start upgrading your Smart Shopping campaigns

In January, we shared a preview of how to upgrade your Smart Shopping and Local campaigns to Performance Max to access additional inventory and formats across YouTube, Search text ads and Discover. Over the coming weeks, you’ll see a notification in your Google Ads account when the “one-click” upgrade tool is ready for your Smart Shopping campaigns. You’ll also be able to access the tool from the Recommendations page and the Campaigns page. You can start upgrading your Local campaigns in June.

When you upgrade your Smart Shopping or Local campaign, it will become a new, separate Performance Max campaign that keeps the learnings from your previous campaign to maintain consistent performance. The campaign budget and settings from your previous campaign will also be carried over. Visit our Help Center for more details on the upgrade experience.

Retailers across the globe are seeing continued success with Performance Max. In fact, advertisers who upgrade Smart Shopping campaigns to Performance Max see an average increase of 12% in conversion value at the same or better ROAS.

Upgrading your existing Smart Shopping and Local campaigns helps ensure you can take advantage of expanded inventory and get your campaigns ready for the holiday season. You’ll be able to choose when to upgrade your campaigns until the automatic upgrade process begins. Smart Shopping campaigns will be automatically upgraded from July through September, and Local campaigns will be automatically upgraded from August through September. You’ll also be able to create new Performance Max campaigns through Google Ads, the Google Ads API, or starting in early summer, through e-commerce partners like WooCommerce and BigCommerce.
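For advertisers who create campaigns programmatically, the defining detail is the campaign’s advertising channel type. Below is a minimal, hedged sketch using the Google Ads API Python client library: the customer ID and budget resource name are placeholders, and a complete Performance Max campaign additionally requires a budget and an asset group with creative assets, which are omitted here.

```python
from google.ads.googleads.client import GoogleAdsClient

# Assumes OAuth credentials in google-ads.yaml; all IDs below are placeholders.
client = GoogleAdsClient.load_from_storage("google-ads.yaml")
customer_id = "1234567890"

operation = client.get_type("CampaignOperation")
campaign = operation.create
campaign.name = "Performance Max example campaign"
campaign.status = client.enums.CampaignStatusEnum.PAUSED
campaign.advertising_channel_type = (
    client.enums.AdvertisingChannelTypeEnum.PERFORMANCE_MAX
)
# Performance Max optimizes toward your conversion goals; here, conversion
# value with a target return on ad spend (ROAS).
campaign.maximize_conversion_value.target_roas = 3.5
campaign.campaign_budget = f"customers/{customer_id}/campaignBudgets/9876543210"

response = client.get_service("CampaignService").mutate_campaigns(
    customer_id=customer_id, operations=[operation]
)
print(f"Created: {response.results[0].resource_name}")
```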

Check out our upgrade video tutorial and best practices to set up your Performance Max campaigns, and follow @AdsLiaison to stay informed throughout the upgrade process. On May 24, join us at Google Marketing Live where we’ll share what’s ahead for Performance Max.

Southeast Asian travelers are back

Before COVID-19, the countries of Southeast Asia were some of the world’s most popular travel destinations. The pandemic changed that in a matter of months — with devastating repercussions for the region’s $380 billion tourism industry. In early 2022, though, the tide started to turn again. Southeast Asian nations have eased travel restrictions, and the region’s travelers are eager to make up for lost time. They’re committed to traveling more frequently, open to new destinations, and determined to make the most of the opportunities that are now opening up.

To understand these travelers’ preferences and expectations — and the opportunity that resurgent demand creates for the region’s tourism operators — we took a closer look at some recent Google Search trends.

Resurgent demand

In Southeast Asia, inbound travel demand (visits by non-residents to a country) has experienced the fastest upturn in the Philippines and Indonesia, based on search volumes. In March, inbound demand for the Philippines had already surpassed pre-pandemic figures (hitting 104% of pre-pandemic search volumes), while Indonesia is close to a full rebound too (94%). These two countries have also seen the fastest resurgence in outbound travel (visits by their residents to other countries), with search volumes bouncing back to 70% of pre-pandemic levels. Singapore is in third place for both inbound and outbound travel demand.

Chart showing inbound and outbound travel demand for each Southeast Asian country in March 2022, with Indonesia and the Philippines showing the fastest rebound, followed by Malaysia and Vietnam.

Travelers crave luxury and care about sustainability

While the surge in demand is welcome, it’s important that the industry understands and caters to travelers’ changing needs. Search trends make it clear that the travel environment today is more complex than it was before the pandemic.

  • People are spending more time researching, planning and finding options, seeking peace of mind, and making sure they’re covered for unexpected changes. We saw year-on-year growth of more than 165% in travel insurance-related searches in Singapore, Malaysia and the Philippines.
  • Tourists are keen to stay longer when they do travel: interest in vacation rentals among Southeast Asian travelers rose by more than 1,010% year-on-year.
  • “Revenge travelers” — those most eager to make up for lost time — are ready to pay for premium travel options. Among travelers from the Philippines, searches for “luxury resorts” and “beach resorts” are up 60% year on year.
  • There's growing consciousness of sustainability across the region — and particularly in Singapore and the Philippines. Searches related to sustainability have grown by 45% since 2019, while searches related to greenhouse gas emissions have increased by more than 163% in Singapore and by more than 156% in the Philippines.

How we’re adapting Google tools to help

We’re committed to helping travelers find the long-awaited travel experience they’re looking for, while navigating the complex environment. On Google Travel, the Flights, Hotels and Things to Do sections now provide more information on COVID — and give travelers the option to search for flexible booking options. The Google Travel Help website makes it easier for people to understand travel policies, restrictions, and special requirements. And for travelers seeking out new experiences, we’ve added more destinations to the Explore tab — including smaller cities and national parks — and options to filter by interests like outdoors, beaches or skiing.

We’re also helping travelers make more sustainable choices when they research and book, including giving hotels the ability to show an eco-certified badge next to their name and share details about their sustainability practices, plus providing carbon emission estimates for flights.

Supporting the industry recovery

In addition to evolving our tools for travelers, we’re doing a lot of work to help our industry partners tap into travel insights and plan for the future. Using Travel Insights with Google, businesses, governments and tourism boards can make decisions based on up-to-date information and move quickly when an opportunity arises.

To help smaller businesses in the travel industry reach potential customers on a large scale, we’ve made it possible for all hotels and travel companies to show free booking links in their profiles — and see how many people clicked on those links by generating reports on Hotel Center.

This is a pivotal time for the industry. People are finally booking trips, having dreamed about it (and saved up for it) for so long. They have higher expectations, including for seamless digital experiences throughout their journey. But they’re ready to spend more money and time on travel than they would have in the past. And the resurgent demand we see in Southeast Asia is just the beginning, with major destinations like China and Japan yet to re-open.

Looking ahead, there’s an enormous opportunity for travel businesses that can understand their customers and give them relevant, personalized experiences. We’ll keep doing everything we can to help, and to contribute to a strong, sustainable travel recovery across the region.

Sculpt, sketch and see the world in new cultural games

Creating new and engaging ways for you to learn about the world's art, culture and history has always been the focus of the creative coders and artists in residence at the Google Arts & Culture Lab. Play can be an incredible vehicle for learning, which is why in 2021 the team launched “Play with Arts & Culture”, a series of puzzle and trivia games that made it fun to discover and learn about cultural treasures from our partners’ collections. Today, you are invited to try four new games that will challenge you to learn through play. Simply visit g.co/artgames or press the Play tab within the Google Arts & Culture app for Android and iOS.

Set your personal best score

All four of these games will let you earn and save High Scores. If you’re logged in to Google Arts & Culture, your best score for each game will be automatically saved and synced across your devices and displayed on the Play page so you never lose track of your personal best. When you beat your record, a congratulatory notification will let you share your high score with friends and challenge them to do better.

We hope you’ll have a lot of fun discovering Arts & Culture through our latest collection of games and learn something interesting along the way. Get playing and start setting your high scores today at g.co/artgames or in the Play tab on the Google Arts & Culture app for Android and iOS.

Google at ICLR 2022

The 10th International Conference on Learning Representations (ICLR 2022) kicks off this week, bringing together researchers, entrepreneurs, engineers and students alike to discuss and explore the rapidly advancing field of deep learning. Entirely virtual this year, ICLR 2022 offers conference and workshop tracks that present some of the latest research in deep learning and its applications to areas ranging from computer vision, speech recognition and text understanding to robotics, computational biology, and more.

As a Platinum Sponsor of ICLR 2022 and Champion DEI Action Fund contributor, Google will have a robust presence with nearly 100 accepted publications and extensive participation on organizing committees and in workshops. If you have registered for ICLR 2022, we hope you’ll watch our talks and learn about the work done at Google to address complex problems that affect billions of people. Here you can learn more about the research we will be presenting as well as our general involvement at ICLR 2022 (those with Google affiliations in bold).

Senior Area Chairs:
Includes: Been Kim, Dale Schuurmans, Sergey Levine

Area Chairs:
Includes: Adam White, Aditya Menon, Aleksandra Faust, Amin Karbasi, Amir Globerson, Andrew Dai, Balaji Lakshminarayanan, Behnam Neyshabur, Ben Poole, Bhuwan Dhingra, Bo Dai, Boqing Gong, Cristian Sminchisescu, David Ha, David Woodruff, Denny Zhou, Dipanjan Das, Dumitru Erhan, Dustin Tran, Emma Strubell, Eunsol Choi, George Dahl, George Tucker, Hanie Sedghi, Heinrich Jiang, Hossein Mobahi, Hugo Larochelle, Izhak Shafran, Jasper Snoek, Jean-Philippe Vert, Jeffrey Pennington, Justin Gilmer, Karol Hausman, Kevin Swersky, Krzysztof Choromanski, Mathieu Blondel, Matt Kusner, Michael Ryoo, Ming-Hsuan Yang, Minmin Chen, Mirella Lapata, Mohammad Ghavamzadeh, Mohammad Norouzi, Naman Agarwal, Nicholas Carlini, Olivier Bachem, Piyush Rai, Prateek Jain, Quentin Berthet, Richard Nock, Rose Yu, Sewoong Oh, Silvio Lattanzi, Slav Petrov, Srinadh Bhojanapalli, Tim Salimans, Ting Chen, Tong Zhang, Vikas Sindhwani, Weiran Wang, William Cohen, Xiaoming Liu

Workflow Chairs:
Includes: Yaguang Li

Diversity, Equity & Inclusion Chairs:
Includes: Rosanne Liu

Invited Talks
Beyond Interpretability: Developing a Language to Shape Our Relationships with AI
Google Speaker: Been Kim

Do You See What I See? Large-Scale Learning from Multimodal Videos
Google Speaker: Cordelia Schmid

Publications
Hyperparameter Tuning with Renyi Differential Privacy – 2022 Outstanding Paper Award
Nicolas Papernot, Thomas Steinke

MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling
Yusong Wu, Ethan Manilow, Yi Deng, Rigel Swavely, Kyle Kastner, Tim Cooijmans, Aaron Courville, Cheng-Zhi Anna Huang, Jesse Engel

The Information Geometry of Unsupervised Reinforcement Learning
Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

Learning Strides in Convolutional Neural Networks – 2022 Outstanding Paper Award
Rachid Riad*, Olivier Teboul, David Grangier, Neil Zeghidour

Poisoning and Backdooring Contrastive Learning
Nicholas Carlini, Andreas Terzis

Coordination Among Neural Modules Through a Shared Global Workspace
Anirudh Goyal, Aniket Didolkar, Alex Lamb, Kartikeya Badola, Nan Rosemary Ke, Nasim Rahaman, Jonathan Binas, Charles Blundell, Michael Mozer, Yoshua Bengio

Fine-Tuned Language Models Are Zero-Shot Learners (see the blog post)
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le

Large Language Models Can Be Strong Differentially Private Learners
Xuechen Li, Florian Tramèr, Percy Liang, Tatsunori Hashimoto

Progressive Distillation for Fast Sampling of Diffusion Models
Tim Salimans, Jonathan Ho

Exploring the Limits of Large Scale Pre-training
Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi

Scarf: Self-Supervised Contrastive Learning Using Random Feature Corruption
Dara Bahri, Heinrich Jiang, Yi Tay, Donald Metzler

Scalable Sampling for Nonsymmetric Determinantal Point Processes
Insu Han, Mike Gartrell, Jennifer Gillenwater, Elvis Dohmatob, Amin Karbasi

When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations
Xiangning Chen, Cho-Jui Hsieh, Boqing Gong

ViTGAN: Training GANs with Vision Transformers
Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu

Generalized Decision Transformer for Offline Hindsight Information Matching
Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu

The MultiBERTs: BERT Reproductions for Robustness Analysis
Thibault Sellam, Steve Yadlowsky, Ian Tenney, Jason Wei, Naomi Saphra, Alexander D’Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ellie Pavlick

Scaling Laws for Neural Machine Translation
Behrooz Ghorbani, Orhan Firat, Markus Freitag, Ankur Bapna, Maxim Krikun, Xavier Garcia, Ciprian Chelba, Colin Cherry

Interpretable Unsupervised Diversity Denoising and Artefact Removal
Mangal Prakash, Mauricio Delbracio, Peyman Milanfar, Florian Jug

Understanding Latent Correlation-Based Multiview Learning and Self-Supervision: An Identifiability Perspective
Qi Lyu, Xiao Fu, Weiran Wang, Songtao Lu

Memorizing Transformers
Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy

Churn Reduction via Distillation
Heinrich Jiang, Harikrishna Narasimhan, Dara Bahri, Andrew Cotter, Afshin Rostamizadeh

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization
Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

Path Auxiliary Proposal for MCMC in Discrete Space
Haoran Sun, Hanjun Dai, Wei Xia, Arun Ramamurthy

On the Relation Between Statistical Learning and Perceptual Distances
Alexander Hepburn, Valero Laparra, Raul Santos-Rodriguez, Johannes Ballé, Jesús Malo

Possibility Before Utility: Learning And Using Hierarchical Affordances
Robby Costales, Shariq Iqbal, Fei Sha

MT3: Multi-Task Multitrack Music Transcription
Josh Gardner*, Ian Simon, Ethan Manilow*, Curtis Hawthorne, Jesse Engel

Bayesian Neural Network Priors Revisited
Vincent Fortuin, Adrià Garriga-Alonso, Sebastian W. Ober, Florian Wenzel, Gunnar Rätsch, Richard E. Turner, Mark van der Wilk, Laurence Aitchison

GradMax: Growing Neural Networks using Gradient Information
Utku Evci, Bart van Merrienboer, Thomas Unterthiner, Fabian Pedregosa, Max Vladymyrov

Scene Transformer: A Unified Architecture for Predicting Future Trajectories of Multiple Agents
Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Ben Sapp, Zhifeng Chen, Jonathon Shlens

The Role of Pretrained Representations for the OOD Generalization of RL Agents
Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Autoregressive Diffusion Models
Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, Tim Salimans

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
Rahim Entezari, Hanie Sedghi, Olga Saukh, Behnam Neyshabur

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard

Anisotropic Random Feature Regression in High Dimensions
Gabriel C. Mel, Jeffrey Pennington

Open-Vocabulary Object Detection via Vision and Language Knowledge Distillation
Xiuye Gu, Tsung-Yi Lin*, Weicheng Kuo, Yin Cui

MCMC Should Mix: Learning Energy-Based Model with Flow-Based Backbone
Erik Nijkamp*, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

Effect of Scale on Catastrophic Forgetting in Neural Networks
Vinay Ramasesh, Aitor Lewkowycz, Ethan Dyer

Incremental False Negative Detection for Contrastive Learning
Tsai-Shien Chen, Wei-Chih Hung, Hung-Yu Tseng, Shao-Yi Chien, Ming-Hsuan Yang

Towards Evaluating the Robustness of Neural Networks Learned by Transduction
Jiefeng Chen, Xi Wu, Yang Guo, Yingyu Liang, Somesh Jha

What Do We Mean by Generalization in Federated Learning?
Honglin Yuan*, Warren Morningstar, Lin Ning, Karan Singhal

ViDT: An Efficient and Effective Fully Transformer-Based Object Detector
Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang

Measuring CLEVRness: Black-Box Testing of Visual Reasoning Models
Spyridon Mouselinos, Henryk Michalewski, Mateusz Malinowski

Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models (see the blog post)
Xiaofang Wang, Dan Kondratyuk, Eric Christiansen, Kris M. Kitani, Yair Alon (prev. Movshovitz-Attias), Elad Eban

Leveraging Unlabeled Data to Predict Out-of-Distribution Performance
Saurabh Garg*, Sivaraman Balakrishnan, Zachary C. Lipton, Behnam Neyshabur, Hanie Sedghi

Data-Driven Offline Optimization for Architecting Hardware Accelerators (see the blog post)
Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine

Diurnal or Nocturnal? Federated Learning of Multi-branch Networks from Periodically Shifting Distributions
Chen Zhu*, Zheng Xu, Mingqing Chen, Jakub Konecny, Andrew Hard, Tom Goldstein

Policy Gradients Incorporating the Future
David Venuto, Elaine Lau, Doina Precup, Ofir Nachum

Discrete Representations Strengthen Vision Transformer Robustness
Chengzhi Mao*, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa

SimVLM: Simple Visual Language Model Pretraining with Weak Supervision (see the blog post)
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao

Neural Stochastic Dual Dynamic Programming
Hanjun Dai, Yuan Xue, Zia Syed, Dale Schuurmans, Bo Dai

PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions
Zhaoqi Leng, Mingxing Tan, Chenxi Liu, Ekin Dogus Cubuk, Xiaojie Shi, Shuyang Cheng, Dragomir Anguelov

Information Prioritization Through Empowerment in Visual Model-Based RL
Homanga Bharadhwaj*, Mohammad Babaeizadeh, Dumitru Erhan, Sergey Levine

Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning
Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander Toshev, Sergey Levine, Brian Ichter

Understanding and Leveraging Overparameterization in Recursive Value Estimation
Chenjun Xiao, Bo Dai, Jincheng Mei, Oscar Ramirez, Ramki Gummadi, Chris Harris, Dale Schuurmans

The Efficiency Misnomer
Mostafa Dehghani, Anurag Arnab, Lucas Beyer, Ashish Vaswani, Yi Tay

On the Role of Population Heterogeneity in Emergent Communication
Mathieu Rita, Florian Strub, Jean-Bastien Grill, Olivier Pietquin, Emmanuel Dupoux

No One Representation to Rule Them All: Overlapping Features of Training Methods
Raphael Gontijo-Lopes, Yann Dauphin, Ekin D. Cubuk

Data Poisoning Won’t Save You From Facial Recognition
Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin

Maximum Entropy RL (Provably) Solves Some Robust RL Problems
Benjamin Eysenbach, Sergey Levine

Auto-scaling Vision Transformers Without Training
Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wang, Denny Zhou

Optimizing Few-Step Diffusion Samplers by Gradient Descent
Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning
Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, Donald Metzler

Fortuitous Forgetting in Connectionist Networks
Hattie Zhou, Ankit Vani, Hugo Larochelle, Aaron Courville

Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent
Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini

Benchmarking the Spectrum of Agent Capabilities
Danijar Hafner

Charformer: Fast Character Transformers via Gradient-Based Subword Tokenization
Yi Tay, Vinh Q. Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler

Mention Memory: Incorporating Textual Knowledge into Transformers Through Entity Mention Attention
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William Cohen

Eigencurve: Optimal Learning Rate Schedule for SGD on Quadratic Objectives with Skewed Hessian Spectrums
Rui Pan, Haishan Ye, Tong Zhang

Scale Efficiently: Insights from Pre-training and Fine-Tuning Transformers
Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler

Omni-Scale CNNs: A Simple and Effective Kernel Size Configuration for Time Series Classification
Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, Jing Jiang

Embedded-Model Flows: Combining the Inductive Biases of Model-Free Deep Learning and Explicit Probabilistic Modeling
Gianluigi Silvestri, Emily Fertig, Dave Moore, Luca Ambrogioni

Post Hoc Explanations May be Ineffective for Detecting Unknown Spurious Correlation
Julius Adebayo, Michael Muelly, Hal Abelson, Been Kim

Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning
Mark Hamilton, Scott Lundberg, Stephanie Fu, Lei Zhang, William T. Freeman

Pix2seq: A Language Modeling Framework for Object Detection (see the blog post)
Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton

Mirror Descent Policy Optimization
Manan Tomar, Lior Shani, Yonathan Efroni, Mohammad Ghavamzadeh

CodeTrek: Flexible Modeling of Code Using an Extensible Relational Representation
Pardis Pashakhanloo, Aaditya Naik, Yuepeng Wang, Hanjun Dai, Petros Maniatis, Mayur Naik

Conditional Object-Centric Learning From Video
Thomas Kipf, Gamaleldin F. Elsayed, Aravindh Mahendran, Austin Stone, Sara Sabour, Georg Heigold, Rico Jonschkowski, Alexey Dosovitskiy, Klaus Greff

A Loss Curvature Perspective on Training Instabilities of Deep Learning Models
Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Sneha Kudugunta, Behnam Neyshabur, David Cardoze, George E. Dahl, Zack Nado, Orhan Firat

Autonomous Reinforcement Learning: Formalism and Benchmarking
Archit Sharma, Kelvin Xu, Nikhil Sardana, Abhishek Gupta, Karol Hausman, Sergey Levine, Chelsea Finn

TRAIL: Near-Optimal Imitation Learning with Suboptimal Data
Mengjiao Yang, Sergey Levine, Ofir Nachum

Minimax Optimization With Smooth Algorithmic Adversaries
Tanner Fiez, Lillian J. Ratliff, Chi Jin, Praneeth Netrapalli

Unsupervised Semantic Segmentation by Distilling Feature Correspondences
Mark Hamilton, Zhoutong Zhang, Bharath Hariharan, Noah Snavely, William T. Freeman

InfinityGAN: Towards Infinite-Pixel Image Synthesis
Chieh Hubert Lin, Hsin-Ying Lee, Yen-Chi Cheng, Sergey Tulyakov, Ming-Hsuan Yang

Shuffle Private Stochastic Convex Optimization
Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng

Hybrid Random Features
Krzysztof Choromanski, Haoxian Chen, Han Lin, Yuanzhe Ma, Arijit Sehanobish, Deepali Jain, Michael S Ryoo, Jake Varley, Andy Zeng, Valerii Likhosherstov, Dmitry Kalashnikov, Vikas Sindhwani, Adrian Weller

Vector-Quantized Image Modeling With Improved VQGAN
Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, Yonghui Wu

On the Benefits of Maximum Likelihood Estimation for Regression and Forecasting
Pranjal Awasthi, Abhimanyu Das, Rajat Sen, Ananda Theertha Suresh

Surrogate Gap Minimization Improves Sharpness-Aware Training
Juntang Zhuang*, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha C. Dvornek, Sekhar Tatikonda, James S. Duncan, Ting Liu

Online Target Q-learning With Reverse Experience Replay: Efficiently Finding the Optimal Policy for Linear MDPs
Naman Agarwal, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Syomantak Chaudhuri

CrossBeam: Learning to Search in Bottom-Up Program Synthesis
Kensen Shi, Hanjun Dai, Kevin Ellis, Charles Sutton

Workshops
Workshop on the Elements of Reasoning: Objects, Structure, and Causality (OSC)
Organizers include: Klaus Greff, Thomas Kipf

Workshop on Agent Learning in Open-Endedness
Organizers include: Krishna Srinivasan
Speakers include: Natasha Jaques, Danijar Hafner

Wiki-M3L: Wikipedia and Multi-modal & Multi-lingual Research
Organizers include: Klaus Greff, Thomas Kipf
Speakers include: Jason Baldridge, Tom Duerig

Setting Up ML Evaluation Standards to Accelerate Progress
Organizers include: Rishabh Agarwal
Speakers and Panelists include: Katherine Heller, Sara Hooker, Corinna Cortes

From Cells to Societies: Collective Learning Across Scales
Organizers include: Mark Sandler, Max Vladymyrov
Speakers include: Blaise Aguera y Arcas, Alexander Mordvintsev, Michael Mozer

Emergent Communication: New Frontiers
Speakers include: Natasha Jaques

Deep Learning for Code
Organizers include: Jonathan Herzig

GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory
Speakers include: Gintare Karolina Dziugaite

Generalizable Policy Learning in the Physical World
Speakers and Panelists include: Mrinal Kalakrishnan

CoSubmitting Summer (CSS) Workshop
Organizers include: Rosanne Liu



*Work done while at Google.  

Source: Google AI Blog


Quick access to additional actions when composing a message in Google Chat on iOS

Quick launch summary 

When using Google Chat on iOS, you can now easily take additional actions by tapping the plus (“+”) icon next to the compose bar. You’ll see a variety of options such as: 
  • Sharing a Google Meet link 
  • Creating a meeting in Calendar 
  • Accessing Google Drive 
  • Text formatting options, and more. 




We hope this makes it easier to do your best work and collaborate when using Google Chat on your mobile device. 

Getting started 

  • Admins: There is no admin action required. 
  • End users: Visit the Help Center to learn more about how to use Google Chat. 

Rollout pace 


Availability 

  • Available to all Google Workspace customers and users with personal Google Accounts 

Resources