Sunset of the Ad Manager API v202102

On Tuesday, March 1, 2022, in accordance with the deprecation schedule, v202102 of the Ad Manager API will sunset. At that time, any requests made to this version will return errors.

If you’re still using v202102, now is the time to upgrade to a newer release and take advantage of additional functionality. For example, in v202105 and newer versions we added support for pushing creative previews to linked devices.

When you’re ready to upgrade, check the full release notes to identify any breaking changes. Here are a few examples of changes that may impact your applications:

As always, don't hesitate to reach out to us on the developer forum with any questions.

Jetpack Compose 1.1 is now stable!

Posted by Florina Muntenescu, Android Developer Relations Engineer

Today, we’re releasing version 1.1 of Jetpack Compose, Android's modern, native UI toolkit, continuing to build out our roadmap. This release contains new features like improved focus handling, touch target sizing, ImageVector caching, and support for Android 12 stretch overscroll. Compose 1.1 also graduates a number of previously experimental APIs to stable and supports newer versions of Kotlin. We've already updated our samples, codelabs, and Accompanist library to work with Compose 1.1.

New stable features and APIs

Image vector caching

Compose 1.1 introduces image vector caching, bringing big performance improvements. We’ve added a caching mechanism to the painterResource API that caches all ImageVector instances parsed from a given resource id and theme. The cache is invalidated on configuration changes.
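Because the cache is keyed by resource id and theme, composables that load the same drawable simply reuse the parsed result. A minimal sketch of what this means in practice (assuming an Android project on Compose 1.1; `R.drawable.ic_star` is a hypothetical resource):

```kotlin
import androidx.compose.material.Icon
import androidx.compose.runtime.Composable
import androidx.compose.ui.res.painterResource

// Both calls parse the vector XML only once: the second painterResource
// call with the same resource id and theme is served from the ImageVector cache.
@Composable
fun FavoriteIcons() {
    Icon(painter = painterResource(R.drawable.ic_star), contentDescription = "Favorite")
    Icon(painter = painterResource(R.drawable.ic_star), contentDescription = "Top rated")
}
```

No code changes are needed to benefit; existing painterResource call sites pick up the caching automatically.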

Touch target sizing

Compared to Compose 1.0, Material components will expand their layout space to meet the Material accessibility guidelines for touch target size. For instance, a RadioButton's touch target will expand to a minimum size of 48x48dp, even if you set the RadioButton's size to be smaller. This aligns Compose Material with the behavior of Material Design Components, providing consistent behavior if you mix Views and Compose. This change also ensures that when you create your UI using Compose Material components, the minimum requirements for touch target accessibility are met.

If you find this change breaks existing layout logic, set LocalMinimumTouchTargetEnforcement to false to disable the behavior. But please be mindful that this might reduce the usability of your app, so use it with caution.
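If you do need a smaller touch area, for example in a dense layout, the opt-out looks roughly like this (a sketch assuming a Compose 1.1 Material dependency; the API was still experimental at this release, hence the @OptIn):

```kotlin
import androidx.compose.material.ExperimentalMaterialApi
import androidx.compose.material.LocalMinimumTouchTargetEnforcement
import androidx.compose.material.RadioButton
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider

// Disables the 48x48dp minimum touch target for the RadioButton below.
// Use sparingly: smaller targets are harder for users to hit.
@OptIn(ExperimentalMaterialApi::class)
@Composable
fun CompactRadioButton(selected: Boolean, onClick: () -> Unit) {
    CompositionLocalProvider(LocalMinimumTouchTargetEnforcement provides false) {
        RadioButton(selected = selected, onClick = onClick)
    }
}
```

Scoping the CompositionLocalProvider tightly, as above, limits the opt-out to the components that genuinely need it rather than the whole screen.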

RadioButton touch target update 
Left: Compose 1.0, right: Compose 1.1 
 


Experimental to stable APIs

Several APIs graduated from experimental to stable. Highlights include:

New experimental APIs

We’re continuing to bring new features to Compose. Here are a few highlights:

  • AnimatedContent can now be saved and restored when using rememberSaveable.
  • LazyColumn/LazyRow item positions can be animated using Modifier.animateItemPlacement().
  • You can use the new BringIntoView API to send a request to parents so that they scroll to bring an item into view.

Try out the new APIs using @OptIn and give us feedback!
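For instance, animating item placement in a lazy list might look like this (a sketch assuming a Compose 1.1 project; the @OptIn acknowledges the experimental foundation API):

```kotlin
import androidx.compose.foundation.ExperimentalFoundationApi
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

// When the order of `names` changes, items animate to their new positions.
// A stable key per item is required for the placement animation to work.
@OptIn(ExperimentalFoundationApi::class)
@Composable
fun NameList(names: List<String>) {
    LazyColumn {
        items(names, key = { it }) { name ->
            Text(name, modifier = Modifier.animateItemPlacement())
        }
    }
}
```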

Note: Using Compose 1.1 requires using Kotlin 1.6.10. Check out the Compose to Kotlin Compatibility Map for more information.

Wondering what’s next? Check out our updated roadmap to see the features we’re currently thinking about and working on, such as lazy item animations, downloadable fonts, movable content, and more!

Jetpack Compose is stable, ready for production, and continues to add the features you’ve been asking us for. We’ve been thrilled to see tens of thousands of apps start using Jetpack Compose in production already and we can’t wait to see what you’ll build!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker over the Alphas and Betas - they help us to improve Compose and build the APIs you need. Do continue providing your feedback and help us make Compose better!

Happy composing!

The jobs people want, according to Search trends

Lots of people quit in 2021. Like, a lot. Month after month, a record number of people put their tools away, shut their laptops, took off their badges, handed in their two weeks’ notice or simply walked out the door and didn’t go back.

We were curious — what did people leave their jobs to do next? What careers piqued their interest, what training programs did they pursue? We looked at Google Search trends to get an idea.

The first thing we noticed was how global this experience has been. The “Great Resignation” of 2021 is usually talked about as an American phenomenon, but Search trends suggest that people everywhere were looking to leave their jobs. The top countries searching for “how to leave your job” come from five different continents: the Philippines is at the top, followed by South Africa, then the U.S., Australia and the U.K.

Search trends also reveal what career paths people are interested in. To find this out, we looked at the jobs people searched for alongside the phrase “how to become,” such as “how to become an astronaut.” Trends showed us that over the past year, people were most interested in jobs that involve helping others, travel and working in real estate — ideally in a role that doesn’t require a traditional boss.

Most-searched “how to become” jobs — January 2021-January 2022

  1. Real estate agent
  2. Flight attendant
  3. Notary
  4. Therapist
  5. Pilot
  6. Firefighter
  7. Personal trainer
  8. Psychiatrist
  9. Physical therapist
  10. Electrician

When drilling down to look at the most-searched “how to become…” jobs in each U.S. state, there were some distinct regional trends. People in the South and Midwest were interested in becoming a notary (with the exception of several Appalachian states). Large portions of the Northeast, Northern Midwest and Western U.S. were interested in real estate careers.

Notably, only two states’ most-searched jobs fell outside notary, real estate agent, electrician and pilot: New Mexico, where people were most interested in becoming a flight attendant, and Montana, where people sought information about personal training over any other profession.

Map of the U.S. with each state’s top-searched job shown in different colors; notary and real estate agent cover most of the map.

We also looked at Search trends related to professional certifications and training programs. Similar to the “how to become” trends, these showed that people are interested in real estate. We also learned that people are interested in jobs involving hair and beauty, medical assisting and data analytics.

Top trending professional certifications and training programs in the U.S. — January 2021-January 2022

  1. Google data analytics professional certificate
  2. NCMA certification
  3. Child development associate certification
  4. Eyelash technician training program
  5. Electrician training program
  6. Real estate training program
  7. Barber training program

Finding a new job on Search

If you’re one of the many people who left their job in recent months, here are some tips for finding your next gig on Search:

  • Search for any job and a dedicated section will appear with opportunities from employers and job boards across the web. Use filters to sort by title, location, date posted and more.
  • Customize your job search to your experience level, or to the benefits or job environment you want. Try searching “WFH jobs” or “no experience jobs near me.”
  • See jobs specific to your educational background by adding your field of study to your job search, such as “jobs for history majors.” And try “no degree jobs” to see opportunities that don’t require a bachelor’s.
  • Keep a lookout for the “Actively Hiring” filter that shows jobs from employers that are hiring a lot right now.
  • Save jobs you want to come back to by tapping the bookmark icon on any posting. Find jobs you’ve saved in the “Saved” tab of the job search tool on the web, or in the Collections tab of the Google app on iOS and Android.
Screenshots of the job search tool on Google Search showing open retail job postings.

For another way to get skills and training while looking for a job, check out Grow with Google for online resources related to resume creation, interview best practices, networking and more.

Unpacking 7 features on the latest Samsung Galaxy devices

Today at Galaxy Unpacked, Samsung unveiled the new Galaxy S22 series and Galaxy Tab S8 series and updates coming soon to the Galaxy Watch4 series. Together with Samsung, we’re introducing new features that help you communicate in new ways, get more done and stay entertained with your Galaxy devices.

More ways to connect with live sharing on Google Duo

Video calling with Duo can help you connect with friends and family, no matter how far away. With live sharing support across your favorite apps, you will be able to use Duo on your Galaxy S22 series and Galaxy Tab S8 series to brainstorm ideas with your friends and colleagues through Jamboard, share ideas and images in Samsung Notes and Gallery, watch videos together on YouTube or search for locations on Google Maps.

Preview YouTube videos on Messages by Google

People share YouTube videos on Messages all the time — in fact, they're one of the most-shared types of links on the app overall. In the coming weeks, you’ll be able to see a preview of the video your friends and family share with you right in the conversation, so you can quickly decide whether to watch it now or later. And you can tap again to play the video as well, without ever leaving the chat.

Optimized for accessibility with Voice Access

Voice Access on Android is designed to help people with disabilities navigate and control their device without needing to use their hands. While it’s optimized for people with motor disabilities like ALS, spinal cord injuries or arthritis, it can also be helpful for anyone with a temporary disability like a broken arm, or people whose hands are otherwise occupied. Voice Access is built into the Galaxy S22 series and Galaxy Tab S8 series, so you don’t need to download a separate app, and you can use its prompts to quickly and easily tap, scroll and navigate your device with voice commands. Either set Voice Access to start whenever you use your device, or say, “Hey Google, Voice Access” and the accessibility prompts will help you open apps and manage your device.

Color your world with Material You

Coming with Android 12 out of the box, the Galaxy S22 series and Galaxy Tab S8 series will let you personalize your device by taking advantage of the beautiful Material You design. Change your wallpaper, and the look and feel of your entire device, including your notifications, apps and more, will change to match the color palette.

Three phones showing different screens with background color adaptation

Easily set up Google Play apps on your Galaxy Watch4

Setting up a new Galaxy Watch4 has never been easier. Next month, we’ll be improving the setup process so your apps on your Android phone appear as recommended apps on your watch. With a simple tap on your phone, you can install all of your favorite apps from Google Play.

Phone screen showing options to select and sync apps onto your watch

Get help on your watch with Google Assistant

As you move through the day, Google is there to help you get things done across your devices. In the coming months, we’ll bring Google Assistant to Galaxy Watch4. Soon, you’ll be able to ask Google to set a timer while cooking, stay on top of your appointments by asking your calendar what’s next, or play your favorite music, right from your wrist. Google Assistant will be available for download on Google Play and feature a new design with faster-than-ever response times on your watch. Once activated, just say “Hey Google” to get started.

Listen on the go with YouTube Music Premium

Whether you’re working out or commuting to work, the YouTube Music app on Wear OS provides access to more than 80 million songs and thousands of playlists. Currently, YouTube Premium and YouTube Music Premium subscribers have the ability to download music for ad-free offline listening. Coming soon, we’re adding Wi-Fi and LTE streaming support so subscribers can discover new tunes without their phone nearby. This will be available on Galaxy Watch4 and other Wear OS devices.

We will continue to build on our longstanding partnership to bring helpful Google features to all of your favorite Samsung devices. With the Galaxy S22 series and Galaxy Tab S8 series, you'll receive a four-month trial of YouTube Premium (terms apply) on us. Learn more about the new Samsung Galaxy devices here.

Drive with Cupid this Valentine’s Day

Whether you’re into Valentine’s Day celebrations or over them, the greatest matchmaker of all time, Cupid, is here to guide you through the ups and downs of love on the road with the latest driving experience on Waze. With nearly 3,000 years(!) of experience bringing couples together, and personal lessons learned on his journey to find a special someone, Cupid shares his words of wisdom, and some hot takes, on the state of dating and love in 2022.

For some extra Valentine’s Day spirit, pair Cupid’s voice with the limited edition Lovewagon and a Cupid Mood to help (literally) spread the love on Waze.

Chocolate, candy hearts or a dozen long-stemmed roses are great Valentine’s Day standbys. But Cupid knows the real route to love: a keen eye for a match, a magic arrow, his sage words... and a fun, traffic-free journey to your destination. So, what are you waiting for?

See the full Cupid experience or tap “My Waze” in your Waze app and click the Cupid banner to activate. It’ll be available everywhere, in English, for a limited time.

Lifting our voices for Black History Month

Black History Month marks a special time each year when we celebrate the achievements and contributions of Black Americans, reflect on the trials and tribulations we’ve overcome and set our sights on making progress on the work that still lies ahead.

I’ve been fortunate to call the Washington, D.C. metro area home for most of my life. I grew up in a little town in Montgomery County, Maryland, went to state school for college, graduated from Howard University School of Law and taught fifth grade at a school in Anacostia. If it weren't for my connection to these culturally rich and complex communities, I wouldn’t be who I am today. It was these lived experiences that inspired me to pursue a career at the intersection of technology and social justice, and I’m honored to do that work everyday here at Google.

As I reflect on my time at Howard, I am both grateful and constantly reminded of the challenges and inequities Historically Black Colleges and Universities (HBCUs) continue to face. This especially rings true in light of the deeply concerning threats made against the safety and security of several HBCU campuses across the U.S. over the last few weeks. But Google understands the immense talent and creativity these institutions help foster and wants to ensure they continue to thrive for generations to come.

In that spirit, we're announcing a new $6 million investment in The Thurgood Marshall College Fund (TMCF) and United Negro College Fund (UNCF), building on the momentum of our $50 million grant to 10 HBCUs last year. The unrestricted funding will support scholarships, faculty programs, research grants and curriculum development for their HBCU networks. We’ll also be giving $250,000 in donated search ads to UNCF, which will provide additional support to raise funds for college scholarships and further promote HBCUs.

We know there’s still more work to be done to ensure tech’s workforce better represents the communities that use our products every day. Together with organizations like TMCF, UNCF, NAACP and the National Urban League, we’re eager to continue finding more ways to provide the tools, resources and opportunities necessary to make that goal a reality.

Supporting Black business owners and entrepreneurs

Black business owners and entrepreneurs are navigating uncharted territory with the compounding effects of the pandemic, supply chain disruptions and inflation. So whether it’s finding new ways to engage customers online or securing that much-needed round of funding, Google wants to be a resource for this community of business leaders.

Last week, Grow with Google began the statewide expansion of its Digital Coaches program, which provides digital skills training to help Black and Latino small businesses reach new customers and grow. Since 2017, Digital Coaches in 20 cities have helped train more than 100,000 business owners, and they will now offer training across their states to equip more Black- and Latino-owned businesses with digital skills.

In addition to this expanded training, Grow with Google and the U.S. Black Chambers, Inc. will host their second National Black-Owned Business Summit on Thursday, February 24, providing virtual trainings and guest speakers for 2,500+ Black businesses across the country. The trainings will focus on how to create a search-friendly website and how to reach more customers with Google Ads. Attendees will also have access to sign up for a limited number of one-on-one coaching sessions from Googlers. Business owners can register for the summit at g.co/grow/BlackOwnedSummit.

We’re also kicking off our third round of investments to the Google for Startups Black Founders Fund in the U.S., with another $5 million in funding. Over the last two years, we’ve welcomed 126 Black founders into our network and provided $10 million in non-dilutive funding, meaning founders do not give up any ownership in their company in exchange for funding. We’ve also expanded the Black Founders Fund globally to support founders from Brazil, Africa and Europe. In total, Black Founders Fund recipients have gone on to raise over $137 million in additional capital from outside investors as a result of the $16 million in non-dilutive funding.

Amplifying Black music, art and culture

Throughout the month of February, we’ll be spotlighting Black culture across our products and platforms, showcasing the trendsetters, history-makers and innovators that inspire us every day.

As part of our ongoing commitment to supporting underrepresented voices in entertainment, we recently partnered with Raedio to support its new Raedio Creators Program. Two emerging women artists and two composers will receive funding and other resources to create their own music, while retaining full ownership of their work. Submissions open on February 15 and recipients will be announced in March. And today, we announced a partnership with Motown Records where we’ll support its Motown Records Creator Program to provide an emerging woman content creator with an immersive, five-month fellowship assisting the label’s women executives and artists. Applications are open now through March 8.

We’ll also launch a series of music playlists in the YouTube Music app. This week’s playlist, titled “Lifting Voices…Strong,” celebrates voices in Black culture and history featuring Kendrick Lamar, Angela Davis and Beyoncé, among others.

Our Google Arts & Culture partners are releasing new content this month, bringing together local and global voices. Projects range from the Black legends of Detroit’s rock and roll scene shared by The Carr Center to international artists from the Haiti Film Institute, as well as works created by artist Sonya Clark that showcase the power of the African Diaspora with help from the National Museum of Women in the Arts. With these new exhibits, our dedicated Black History Month hub now has over 11,850 images, artifacts, videos and stories from more than 85 global partners.

If you’d like to grow your knowledge about historical figures and key events, simply ask your Google Assistant “Hey Google, what happened today in Black history?” to get a daily dose of facts adapted from the Black Heritage Day Calendar created by author, lecturer and civil rights activist Dr. Carl Mack.

And if you haven’t seen it already, today’s Google Doodle pays homage to Toni Stone, the first woman and woman of color in history to play professional baseball in a men’s major baseball league. Guest artist Monique Wray’s animated illustration brings Stone’s legacy to life for new generations. She draws inspiration from baseball action photography while incorporating Stone’s sense of humor and signature curly hair. Monique also worked with us to redesign the YouTube logo on the homepage, alongside illustrator Sabrena Khadija, to celebrate the #YouTubeBlack creators and artists who are shaping and shifting culture.

There’s a lot of other great Black History Month content to explore throughout the month. That includes Google TV programming with TV, movie and music recommendations featuring Black voices from around the world and a special spotlight on Black women who have made a mark on culture. We’ll also be spotlighting some of the latest apps created by Black developers on Google Play, as well as a month-long video series highlighting Black Women in Tech as part of our Women Techmakers initiative. And for all the Pixel users out there, be sure to download the curated wallpapers from visual artist Aurélia Durand to help give your phone some added style.

Head over to g.co/blackhistorymonth2022 to stay up to date on all this and more during the month of February. And while Black History Month is officially celebrated for one month, the unsung heroes and historical contributions of Black Americans deserve to be celebrated each and every day.

Dive deeper into local news with News Showcase

I’ve been a local news reader for a very long time, starting with my hometown paper, the St. Louis Post-Dispatch in Missouri. Admittedly, I started with the comics and word games like the junior jumble, but I came to appreciate the daily pulse of news about what was happening in my city. The Post-Dispatch was my connection to the city and to my neighbors.

In building Google News Showcase, our product and licensing program for news publishers, we want people who use Google’s news products to feel the same way about their local paper. Google News Showcase gives local publishers a way to show their editorial expertise and explain important issues to readers. In doing so, we hope readers are able to more deeply connect with their communities.

Today, we’re doing more to make it easier to find local publishers in Google News Showcase by bringing their panels into the local section of Google News. News Showcase publishers hand pick the content for these local panels, enabling them to highlight the most important stories of the day in their area and giving them another powerful way to deepen their relationship with readers. To get to the Local section on Google News, simply tap the Local section on the left of news.google.com or navigate to your local section of the For You feed in the Google News app.

An example of how News Showcase panels for local publishers in Canada can appear in the local section of Google News.

An example of how News Showcase panels for local publishers in Argentina can appear in the local section of Google News.

More than 90% of the publications that are part of News Showcase represent local or community news. They include Citynews in Italy, La Capital in Argentina, Frankfurter Rundschau in Germany, Jornal do Commercio in Brazil, El Colombiano in Colombia, Guelph Mercury Tribune in Canada, the Anandabazar Patrika in India, and Iliffe Media in the United Kingdom. We’ve been working closely with these publishers since before the launch of News Showcase to make sure the product works well for them.

Outside of today’s news, we’re always making additional changes behind the scenes to help publishers improve their experience with News Showcase. Notably, we recently launched the ability for publishers to see how readers are engaging with their News Showcase content in real time, so they can better understand what people want to read. This gives publishers the ability to respond quickly to what's trending, add more context to their stories or add related panels to stories that are getting traction. We’ve also introduced the ability to edit the images that appear in panels directly in our publication tool, giving News Showcase editors more control and saving them time.

Since we launched News Showcase in October 2020, we’ve signed deals with more than 1,200 news publications around the world, ensuring millions of people are able to find, engage and support the news organizations that cover issues that matter to them. We’ve also launched in more than a dozen countries including India, Japan, Portugal, Germany, Brazil, Austria, the U.K., Australia, Czechia, Italy, Colombia, Argentina, Canada and Ireland. Today, we’re rolling out the product in Poland.

News Showcase is just one way we’re helping readers find news that matters to them. We recently added a new news feature in Google Search where readers see a carousel of local news stories when we’re able to find local news coverage related to their search. This helps readers find important local news around their searches and helps local news publishers reach people looking for their news. This carousel is available globally in all languages.

We also improved our ranking systems so authoritative, relevant local news sources appear more often alongside national publications in our features such as Top Stories. This ensures people will be able to find coverage from authoritative local news sources, helping them see how national stories can impact them locally.

Supporting local publishers is also a key focus of our work and that of the Google News Initiative (GNI), our effort to help news organizations and journalists thrive in a digital age. For example, the GNI Digital Growth Program is a free program aimed at helping small and mid-sized news publishers around the world develop the capabilities required to accelerate the growth of their businesses online. And the Google News Lab offers partnerships and training in over 50 countries. We’ve also built our products to help journalists with different technical abilities and resources. One example is Pinpoint, a tool that uses the best of our Search, AI and machine learning technology to help reporters quickly search through thousands of documents, like forms, handwritten notes, images, e-mail archives and PDFs, and automatically transcribe audio files.

This GIF shows different ways that Pinpoint helps news organizations go through documents. For example, searching for “STDN Mission” will bring up results in handwritten notes, photos and text documents.

Pinpoint helps news organizations quickly and efficiently go through hundreds or thousands of documents.

We’re dedicated to playing our part to help local journalism thrive in a digital age, and to helping readers discover local news stories and understand the issues that affect them.

News Showcase is launching in Poland

We know how hard it can be to keep on top of what’s happening in your community, let alone news globally. To help address this, Google News Showcase, our product and licensing program for news publishers, will begin rolling out today in Poland as “Showcase w Wiadomościach Google.” Google has signed partnerships with 47 Polish publications, including national, regional and local publications from across the country such as Wprost, NaTemat, Spider’s Web, 300Gospodarka, Lublin24 and TuŁódź. News Showcase is part of our global investment in news and reinforces our commitment to journalism in Poland and around the world.

Logos of our News Showcase partners in Poland

News Showcase panels can appear on Google products, currently on News and Discover, and direct readers to the full articles on publishers’ websites, helping them deepen their relationships with readers. Panels will also include extended access to paywalled content from participating publishers to give readers even more from their favorite sources, hopefully leading to more subscribers for the news organization. In addition to the revenue that comes directly from these more-engaged readers, participating publishers will receive monthly licensing payments from Google.

“News Showcase is another Google project supporting media outlets worldwide and once again they are doing it on such a large scale,” says Michał Mańkowski, editor-in-chief, chief operating officer and board member of naTemat Group, a nationwide online media publisher. “I am glad that Google recognizes more and more the important role of reliable and trustworthy publishers, and supports them in this way. We are proud to be in this group.”

“We are thrilled to be joining Google News Showcase. We are now certain this new platform will help us even better control how our content is seen by our readers,” says Przemysław Pająk, editor-in-chief of Spider’s Web, an independent business and technology news company. “We are not only going to use News Showcase to promote important news that we have been regularly bringing onto the media market, but also present our best columns and premium articles in an attractive way.”

Since we launched News Showcase in October 2020, we’ve signed deals with more than 1,200 news publications around the world and have launched in 14 countries including India, Japan, Germany, Portugal, Brazil, Austria, the U.K., Australia, Czechia, Italy, Colombia, Argentina, Canada and Ireland, bringing more in-depth, essential news coverage to Google News and Discover users. More than 90% of the publications that are part of News Showcase represent local or community news. Local news is an essential way for readers to connect to their communities and ensure they get the news that impacts their day-to-day lives.

An example of how News Showcase panels will look with some of our partners in Poland.

"Projects like this one, supporting real journalism, are extremely important in a world overloaded with quick and short information, which often misses issues that matter,” says Michał M. Lisiecki, founder of PMPG Polskie Media SA, a traditional and new media publishing group. “Google News Showcase has a chance to gain recognition not only among editors and journalists but, most importantly, among readers who value quality content. I keep my fingers crossed for success, because it is now time for substantive and economic cooperation between global technology leaders and local media.”

“Quality local journalism plays a significant role in communities today. We are excited to be joining the project,” says Piotr Piotrowicz, CEO of Południowa Oficyna Wydawnicza, a local publishing group in central Poland, and CEO of the Local Media Association. “I believe Google News Showcase offers a chance for those important stories to reach new readers in our region and for local media to grow their digital future.”

An example of how News Showcase panels will look with some of our partners in Poland.

Google News Showcase is our latest effort to support publishers of all sizes and the news industry in Poland. Through the Google News Initiative we supported 163 local Polish newsrooms through our Journalism Emergency Relief Fund to help them continue their vital work throughout the COVID-19 pandemic. We also provided 6.6 million euros to support 33 Digital News Innovation Fund experimental news projects from leading publishers like Agora, Fratria, Gremi Media, Polityka, Polska Press and ZPR Media. Around the world, the Google News Initiative has supported more than 7,000 news partners in over 120 countries and territories.

Since 2015, the Google News Lab has trained nearly 12,000 Polish journalists, newsroom staff and journalism students on a range of digital tools to help them research, verify and visualize their stories. Every year, we run an open Google News Lab Summer School for reporters from media located across the country to help them use those tools in their vital daily work in their local communities.

Google also sends eight billion visits each month to European news websites from products like Search and News, which publishers can monetize with online advertising and subscriptions on their websites and apps. Our ad technologies enable news organizations to sell their ad space to millions of advertisers globally — including advertisers they wouldn’t have access to without these services.

We’re dedicated to continuing our contribution to and collaboration with the news ecosystem, supporting the open web and continuing to provide access to information in Poland and elsewhere.

Chrome’s multitasking usage increases 18x on large screens

Posted by The Android Team

Google Chrome is the most widely used browser globally, and the Chrome team wants to ensure their users have a great experience across all devices. Many Chrome users have been requesting more productivity features on their mobile, tablet, and foldable devices to better match the capabilities of Chrome on desktop. To meet these needs, the team decided to invest in building features that encourage multitasking capabilities. While the team built this for phones as well, they wanted to especially focus on implementing these features where people would use them the most: large screen devices such as tablets and foldables.


GIF showing multitasking on a tablet with multiple instances

What they did

The team first decided to focus on building a way for people to open multiple Chrome windows (instances) side by side. They took advantage of 12L features such as the taskbar, as well as the Samsung edge panel.

They utilized the singleInstancePerTask launch mode to build the side-by-side functionality. They wanted to balance allowing people to use many windows at once with making sure the feature was still usable. The team researched usability best practices, observed other multi-window experiences on large screen devices, and thought through limitations to ensure optimal device memory usage. They determined people could comfortably use up to five windows side by side on large screen devices, and the team updated their app to support this functionality.

The team wanted to make it easier for their users to take advantage of this feature, so they added a “New Window” shortcut in the menu. They used the new intent flag combination LAUNCH_ADJACENT|NEW_TASK to create this shortcut. Displaying the feature more prominently in the product greatly increased adoption: multi-window usage improved by 18x.


Results

This is a new feature, and the Chrome team has already seen that multi-instance for the Chrome app is used 42% more on tablets and foldables than on phones that support the feature. This usage demonstrates that the functionality resonated with Chrome users on large screen devices, and that it was worth investing in these features to enhance the experience for Chrome users on large screens.

They also received very positive feedback from their large screen users in the form of app reviews: “This app is fabulous! You can split screen, change tabs, and much more. You can also play a lot of games in it. I prefer to five star this app.”

The team has future plans to further improve the Chrome experience on large screens to help their users be more productive.


Photo of Theresa Sullican, Tech Lead/Manager at Google Chrome.

Get started

Learn more about how you can get started with optimizing your app for large screens.

Robot See, Robot Do

People learn to do things by watching others — from mimicking new dance moves, to watching YouTube cooking videos. We’d like robots to do the same, i.e., to learn new skills by watching people do things during training. Today, however, the predominant paradigm for teaching robots is to remote control them using specialized hardware for teleoperation and then train them to imitate pre-recorded demonstrations. This limits both who can provide the demonstrations (programmers and roboticists) and where they can be provided (lab settings). If robots could instead self-learn new tasks by watching humans, this capability could allow them to be deployed in more unstructured settings like the home, and make it dramatically easier for anyone to teach or communicate with them, expert or otherwise. Perhaps one day, they might even be able to use YouTube videos to grow their collection of skills over time.

Our motivation is to have robots watch people do tasks, naturally with their hands, and then use that data as demonstrations for learning. Video by Teh Aik Hui and Nathaniel Lim. License: CC-BY

However, an obvious but often overlooked problem is that a robot is physically different from a human, which means it often completes tasks differently than we do. For example, in the pen manipulation task below, the hand can grab all the pens together and quickly transfer them between containers, whereas the two-fingered gripper must transport one at a time. Prior research assumes that humans and robots can do the same task similarly, which makes manually specifying one-to-one correspondences between human and robot actions easy. But with stark differences in physique, defining such correspondences for seemingly easy tasks can be surprisingly difficult and sometimes impossible.

Physically different end-effectors (i.e., “grippers”, the part that interacts with the environment) induce different control strategies when solving the same task. Left: The hand grabs all pens and quickly transfers them between containers. Right: The two-fingered gripper transports one pen at a time.

In “XIRL: Cross-Embodiment Inverse RL”, presented as an oral paper at CoRL 2021, we explore these challenges further and introduce a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL). Rather than focusing on how individual human actions should correspond to robot actions, XIRL learns the high-level task objective from videos, and summarizes that knowledge in the form of a reward function that is invariant to embodiment differences, such as shape, actions and end-effector dynamics. The learned rewards can then be used together with reinforcement learning to teach the task to agents with new physical embodiments through trial and error. Our approach is general and scales autonomously with data — the more embodiment diversity presented in the videos, the more invariant and robust the reward functions become. Experiments show that our learned reward functions lead to significantly more sample efficient (roughly 2 to 4 times) reinforcement learning on new embodiments compared to alternative methods. To extend and build on our work, we are releasing an accompanying open-source implementation of our method along with X-MAGICAL, our new simulated benchmark for cross-embodiment imitation.

Cross-Embodiment Inverse Reinforcement Learning (XIRL)
The underlying observation in this work is that in spite of the many differences induced by different embodiments, there still exist visual cues that reflect progression towards a common task objective. For example, in the pen manipulation task above, the presence of pens in the cup but not the mug, or the absence of pens on the table, are key frames that are common to different embodiments and indirectly provide cues for how close to being complete a task is. The key idea behind XIRL is to automatically discover these key moments in videos of different length and cluster them meaningfully to encode task progression. This motivation shares many similarities with unsupervised video alignment research, from which we can leverage a method called Temporal Cycle Consistency (TCC), which aligns videos accurately while learning useful visual representations for fine-grained video understanding without requiring any ground-truth correspondences.
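To make the cycle-consistency idea concrete, here is a minimal sketch in plain Python (our illustration, not the paper's implementation: TCC uses a differentiable soft nearest-neighbor so the loss can be trained with gradient descent, and it operates on learned frame embeddings rather than the toy tuples used here):

```python
import math

def nearest(emb, seq):
    # Index of the embedding in `seq` closest to `emb` (Euclidean distance).
    return min(range(len(seq)), key=lambda k: math.dist(emb, seq[k]))

def cycle_consistent_frames(u, v):
    """Count frames of video u whose nearest neighbor in v maps back to them."""
    count = 0
    for i in range(len(u)):
        j = nearest(u[i], v)        # u_i -> nearest frame v_j
        i_back = nearest(v[j], u)   # v_j -> nearest frame back in u
        count += (i_back == i)      # cycle-consistent if the cycle returns to i
    return count

# Toy 1-D "embeddings" of the same task performed at different speeds.
u = [(0.0,), (0.5,), (1.0,)]
v = [(0.1,), (0.26,), (0.55,), (1.05,)]
print(cycle_consistent_frames(u, v))  # all 3 frames of u are cycle-consistent
```

Maximizing the (soft) count of such cycle-consistent frames encourages the encoder to map frames that depict the same stage of the task, across videos and embodiments, to nearby points in embedding space.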

We leverage TCC to train an encoder to temporally align video demonstrations of different experts performing the same task. The TCC loss tries to maximize the number of cycle-consistent frames (or mutual nearest-neighbors) between pairs of sequences using a differentiable formulation of soft nearest-neighbors. Once the encoder is trained, we define our reward function as simply the negative Euclidean distance between the current observation and the goal observation in the learned embedding space. We can subsequently insert the reward into a standard MDP and use an RL algorithm to learn the demonstrated behavior. Surprisingly, we find that this simple reward formulation is effective for cross-embodiment imitation.
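Once the encoder is trained, the reward described above is straightforward to compute. A minimal sketch (ours, not the released code; the embeddings here are stand-in tuples rather than encoder outputs):

```python
import math

def xirl_style_reward(obs_emb, goal_emb):
    """Reward = negative Euclidean distance to the goal in the learned embedding space."""
    return -math.dist(obs_emb, goal_emb)

# Toy check: the reward increases monotonically as the embedding nears the goal.
goal = (1.0, 1.0)
trajectory = [(0.0, 0.0), (0.5, 0.5), (0.9, 0.9), (1.0, 1.0)]
rewards = [xirl_style_reward(o, goal) for o in trajectory]
assert rewards == sorted(rewards)  # closer to the goal => higher reward
```

In the full method, `obs_emb` and `goal_emb` would be the TCC encoder's outputs for the current camera observation and a goal frame, and this scalar reward is handed to a standard RL algorithm such as SAC.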

XIRL self-supervises reward functions from expert demonstrations using temporal cycle consistency (TCC), then uses them for downstream reinforcement learning to learn new skills from third-person demonstrations.

X-MAGICAL Benchmark
To evaluate the performance of XIRL and baseline alternatives (e.g., TCN, LIFS, Goal Classifier) in a consistent environment, we created X-MAGICAL, which is a simulated benchmark for cross-embodiment imitation. X-MAGICAL features a diverse set of agent embodiments, with differences in their shapes and end-effectors, designed to solve tasks in different ways. This leads to differences in execution speeds and state-action trajectories, which poses challenges for current imitation learning techniques, e.g., ones that use time as a heuristic for weak correspondences between two trajectories. The ability to generalize across embodiments is precisely what X-MAGICAL evaluates.

The SweepToTop task we considered for our experiments is a simplified 2D equivalent of a common household robotic sweeping task, where an agent has to push three objects into a goal zone in the environment. We chose this task specifically because its long-horizon nature highlights how different agent embodiments can generate entirely different trajectories (shown below). X-MAGICAL features a Gym API and is designed to be easily extendable to new tasks and embodiments. You can try it out today with pip install x-magical.

Different agent shapes in the SweepToTop task in the X-MAGICAL benchmark need to use different strategies to reposition objects into the target area (pink), i.e., to “clear the debris”. For example, the long-stick can clear them all in one fell swoop, whereas the short-stick needs to do multiple consecutive back-and-forths.
Left: Heatmap of state visitation for each embodiment across all expert demonstrations. Right: Examples of expert trajectories for each embodiment.

Highlights
In our first set of experiments, we checked whether our learned embodiment-invariant reward function can enable successful reinforcement learning, when the expert demonstrations are provided through the agent itself. We find that XIRL significantly outperforms alternative methods especially on the tougher agents (e.g., short-stick and gripper).

Same-embodiment setting: Comparison of XIRL with baseline reward functions, using SAC for RL policy learning. XIRL is roughly 2 to 4 times more sample efficient than some of the baselines on the harder agents (short-stick and gripper).

We also find that our approach shows great potential for learning reward functions that generalize to novel embodiments. For instance, when reward learning is performed on embodiments that are different from the ones on which the policy is trained, we find that it results in significantly more sample efficient agents compared to the same baselines. Below, in the gripper subplot (bottom right) for example, the reward is first learned on demonstration videos from long-stick, medium-stick and short-stick, after which the reward function is used to train the gripper agent.

Cross-embodiment setting: XIRL performs favorably when compared with other baseline reward functions, trained on observation-only demonstrations from different embodiments. Each agent (long-stick, medium-stick, short-stick, gripper) had its reward trained using demonstrations from the other three embodiments.

We also find that we can train on real-world human demonstrations, and use the learned reward to train a Sawyer arm in simulation to push a puck to a designated target zone. In these experiments as well, our method outperforms baseline alternatives. For example, our XIRL variant trained only on the real-world demonstrations (purple in the plots below) reaches 80% of the total performance roughly 85% faster than the RLV baseline (orange).

What Do The Learned Reward Functions Look Like?
To further explore the qualitative nature of our learned rewards in more challenging real-world scenarios, we collect a dataset of the pen transfer task using various household tools.

Below, we show rewards extracted from a successful (top) and unsuccessful (bottom) demonstration. Both demonstrations follow a similar trajectory at the start of the task execution. The successful one nets a high reward for placing the pens consecutively into the mug then into the glass cup, while the unsuccessful one obtains a low reward because it drops the pens outside the glass cup towards the end of the execution (orange circle). These results are promising because they show that our learned encoder can represent fine-grained visual differences relevant to a task.

Conclusion
We highlighted XIRL, our approach to tackling the cross-embodiment imitation problem. XIRL learns an embodiment-invariant reward function that encodes task progress using a temporal cycle-consistency objective. Policies learned using our reward functions are significantly more sample-efficient than baseline alternatives. Furthermore, the reward functions do not require manually paired video frames between the demonstrator and the learner, giving them the ability to scale to an arbitrary number of embodiments or experts with varying skill levels. Overall, we are excited about this direction of work, and hope that our benchmark promotes further research in this area. For more details, please check out our paper and download the code from our GitHub repository.

Acknowledgments
Kevin and Andy summarized research performed together with Pete Florence, Jonathan Tompson, Jeannette Bohg (faculty at Stanford University) and Debidatta Dwibedi. All authors would additionally like to thank Alex Nichol, Nick Hynes, Sean Kirmani, Brent Yi, Jimmy Wu, Karl Schmeckpeper and Minttu Alakuijala for fruitful technical discussions, and Sam Toyer for invaluable help with setting up the simulated benchmark.

Source: Google AI Blog