Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 81 (81.0.4044.71) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

How we’re supporting research in Kiwi universities


Whether it is better tracking and trapping of introduced predators in our native bush, or improved breast cancer screening technology, more and more researchers, organisations and businesses are using artificial intelligence (AI) to help tackle big problems.

In fact, we believe that there are numerous other challenges that could be addressed with AI and have made it our mission to make the benefits of these technologies available to everyone. Helping foster Kiwi AI talent with programs like digital readiness courses for teachers is a key component of that.

Today, we’re excited to announce two new programmes arriving in New Zealand.

Launch of exploreCSR
In April we're launching exploreCSR in New Zealand, a programme that aims to enhance the undergraduate experience and motivate more women to pursue graduate study and research careers in Computer Science and related fields. Throughout the year, the awards programme promotes the design, development, and execution of regional research-focused workshops. This will be the first time the programme has been run outside the US.

Google Faculty Research Awards
In September 2019, we opened our annual call for the Google Faculty Research Awards, a program focused on supporting world-class technical research in Computer Science, Engineering and related fields at some of the world’s best computer science departments.

One outstanding Kiwi researcher will now be supported with funding for one year to help them advance their research in areas like algorithms and security:

  • Kelly Blincoe, from the University of Auckland. Kelly aims to investigate the impact of non-inclusive behaviour that happens during software code review. Her study will enable a better understanding of the impacts of a toxic code review culture, enabling better code review guidelines and tools and paving the way for future research on interventions.


For the 2019 awards, we received 917 proposals from about 50 countries and over 330 universities. All proposals underwent an extensive review process involving 1,100 expert reviewers across Google, who assessed them on merit, innovation, and alignment with our research philosophy.

Congratulations again to Kelly!



A Season of Docs story

Lack of clear and reliable documentation is one of the main shortcomings of many open source projects. Last year, Google set out to help change that by announcing the first ever Season of Docs.

Season of Docs is an initiative that brings together technical writers and open source projects to collaborate for a few months, benefitting both the communities and writers.

This is the story of Audrey Tavares, one of the writers who signed up for Season of Docs.

Turning incipient curiosity into an opportunity

In 2019, Audrey was completing the Technical and Professional Communication program at Glendon College, exploring technical writing out of curiosity. One of Google’s technical writers, Nicola Yap, completed the same program and visited Audrey’s class in March to talk about her career. It was an enlightening experience, showing technical writing as an attractive alternative with plenty of opportunities, and introducing Audrey to Season of Docs.

For Audrey, this experience meant stepping into unknown territory—she knew nothing about open source software. Naturally, the first step was to familiarize herself with the communities and understand the software development paradigm. After spending time learning, she submitted her Technical Writer application—which was accepted—and was assigned to Oppia, an online educational platform.

Main challenges

Audrey had two mentors to help her on her journey: one in India and the other in the United States. As you can imagine, this revealed the first challenge—time zones. The first few days were stressful, as navigating schedules across time zones was a daunting task, but with a little work they soon came up with an arrangement that worked for everyone.

The second challenge was learning the tools. For most of us, writing a document involves opening a word processor and typing some text. However, as Audrey was about to find out, things are a bit more intricate when it comes to documenting code.

When presented with the choice of a documentation tool set, Audrey decided on Write the Docs. It seemed like a very popular tool among open source communities. How hard can it be to use, right? Well, it’s not so much about how difficult it is, but how different it is for someone unfamiliar with a common software development workflow, since it entails learning a number of new tools and conventions along the way.

Audrey was not dismayed. She pushed forward and gradually learned these new tools. Both mentors were always available, willing to help, and answered all of her questions. Their mentorship was key to her success.

Every end is a new beginning

After Season of Docs was over, Audrey decided to remain part of the Oppia community, actively contributing to make the platform even better.

The experience allowed Audrey to walk away from Season of Docs with a new set of technical skills, experience communicating with software engineers, an extended professional network, and a new item on her résumé. She now works as a technical writer for a software company in Toronto.

Applications for Season of Docs 2020 start on April 13 for open source organizations and on May 11 for technical writers. Check the official announcement to learn how to participate.

By Geri Ochoa, Google Cloud

Alfred Camera: Smart camera features using MediaPipe

Guest post by the Engineering team at Alfred Camera

Please note that the information, uses, and applications expressed in the below post are solely those of our guest author, Alfred Camera.

In this article, we’d like to give you a short overview of Alfred Camera, share our experience of using MediaPipe to transform our moving object feature, and explain how MediaPipe has made it easier to achieve our goals.

What is Alfred Camera?

Fig.1 Alfred Camera Logo

Alfred Camera is a smart home app for both Android and iOS devices, with over 15 million downloads worldwide. By downloading the app, users can turn their spare phones into security cameras and monitors, allowing them to watch over their homes, shops, and pets anytime. The mission of Alfred Camera is to provide affordable home security so that everyone can find peace of mind in this busy world.

The Alfred Camera team is composed of professionals in various fields, including an engineering team with several machine learning and computer vision experts. Our aim is to integrate AI technology into devices that are accessible to everyone.

Machine Learning in Alfred Camera

Alfred Camera currently has a feature called Moving Object Detection, which continuously uses the device’s camera to monitor a target scene. Once it identifies a moving object in the area, the app begins recording video and sends notifications to the device owner. The machine learning models for detection are hand-crafted and trained by our team using TensorFlow, and run on TensorFlow Lite with good performance even on mid-tier devices. This is important because the app leverages old phones, and we’d like the feature to reach as many users as possible.
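
As a rough illustration of the mechanics (not our production code), this is the general shape of invoking a TensorFlow Lite detection model on a single frame in Python. The model file name, input handling, and output layout below are hypothetical placeholders:

```python
# A rough sketch (not our production code) of invoking a TensorFlow Lite
# detection model on one frame. The model file name, input handling, and
# output layout are hypothetical placeholders.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="moving_object_detector.tflite")  # hypothetical file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect(frame_rgb: np.ndarray) -> np.ndarray:
    """Run one inference pass on an HxWx3 uint8 frame."""
    h, w = input_details[0]["shape"][1:3]
    batch = tf.image.resize(frame_rgb[None, ...], (h, w))      # resize to the model's input size
    batch = tf.cast(batch, input_details[0]["dtype"]).numpy()
    interpreter.set_tensor(input_details[0]["index"], batch)
    interpreter.invoke()
    # Assumed output: detection scores/boxes; the exact layout is model-specific.
    return interpreter.get_tensor(output_details[0]["index"])
```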

The Challenges

We started building AI features at Alfred Camera in 2017. In order to have a solid foundation to support our AI feature requirements for the coming years, we decided to rebuild our real-time video analysis pipeline. At the beginning of the project, the goals were to create a new pipeline that would be 1) modular enough that we could swap core algorithms easily with minimal changes to other parts of the pipeline, 2) designed with GPU acceleration in place, and 3) as cross-platform as possible, so there would be no need to create and maintain separate implementations for different platforms. Based on these goals, we surveyed several open source projects that had potential, but we ended up using none of them, as they either fell short on features or did not provide the readiness and stability we were looking for.

We formed a small team to prototype against those goals, first for the Android platform. What came later were some tough challenges, well beyond what we originally anticipated. We ran into several major design changes because some key design basics had been overlooked. We needed to implement utilities for things that sounded trivial but required significant effort to get right and fast. Dealing with asynchronous processing also led us into a number of timing issues, which took the team considerable effort to address. Not to mention that debugging on real devices was extremely inefficient and painful.

The challenges didn’t stop there. Our product is also on iOS, so we had to tackle all of them once again. Moreover, discrepancies in behavior between the platform-specific implementations introduced additional issues that we needed to resolve.

Even though we finally managed to get the implementations to the confidence level we wanted, it was not a pleasant experience, and we never stopped wondering whether there was a better option.

MediaPipe - A Game Changer

Google open sourced the MediaPipe project in June 2019, and it immediately caught our attention. We were surprised by how perfectly it aligned with the goals we had previously set, and by how much functionality it offered that we could never have developed with the engineering resources we had as a small company.

We immediately decided to start an evaluation project by building a new product feature directly using MediaPipe to see if it could live up to all the promises.

Migrating to MediaPipe

To start the evaluation, we decided to migrate our existing moving object feature to see what exactly MediaPipe can do.

Our current Moving Object Detection pipeline consists of the following main components:

  • (Moving) Object Detection Model
    As explained earlier, a TensorFlow Lite model trained by our team, tailored to run on mid-tier devices.
  • Low-light Detection and Low-light Filter
    Calculate the average luminance of the scene and, based on the result, conditionally process the incoming frames to intensify the brightness of the pixels so our users can see things in the dark. We also control whether detection should run at all, as the moving object detection model does not work properly on frames that have been processed by the filter.
  • Motion Detection
    Sending frames through Moving Object Detection still consumes a significant amount of power, even with a small model like the one we created. Running inference continuously does not seem like a good idea, as most of the time there may not be any moving object in front of the camera. We decided to implement a gating mechanism in which frames are only sent to the Moving Object Detection model when movement is detected in the scene. The detection is done mainly by calculating the differences between two frames, with some additional tricks that take into account the movements detected in the preceding few frames (see the sketch after this list).
  • Area of Interest
    This is a mechanism to let users manually mask out the area where they do not want the camera to see. It can also be done automatically based on regional luminance that can be generated by the aforementioned low-light detection component.
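
To make the gating idea above concrete, here is a rough, self-contained sketch in Python. The thresholds are illustrative placeholders, not our tuned production parameters, and the real implementation runs in GPU shaders as described below:

```python
# A rough, self-contained sketch of the frame-differencing gate described
# above. The thresholds are illustrative placeholders, not our tuned
# production parameters; the real implementation runs in GPU shaders.
import numpy as np

DIFF_THRESHOLD = 25   # per-pixel luminance change counted as "movement"
PIXEL_RATIO = 0.01    # fraction of changed pixels that opens the gate

def luminance(frame_rgb: np.ndarray) -> np.ndarray:
    """Approximate per-pixel luminance of an HxWx3 uint8 frame."""
    return frame_rgb @ np.array([0.299, 0.587, 0.114])

def has_motion(prev_rgb: np.ndarray, curr_rgb: np.ndarray) -> bool:
    """True if the scene changed enough to justify running the model."""
    diff = np.abs(luminance(curr_rgb) - luminance(prev_rgb))
    return (diff > DIFF_THRESHOLD).mean() > PIXEL_RATIO

# Usage: only pay for model inference when the gate opens.
# if has_motion(previous_frame, frame):
#     detections = run_moving_object_detection(frame)  # hypothetical model call
```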

Our current implementation takes the GPU into consideration as much as possible. A series of shaders performs the tasks above, and the pipeline is designed to avoid moving pixels between the CPU and GPU frequently, eliminating potential performance hits.

The pipeline involves multiple ML models that are conditionally executed, mixed CPU/GPU processing, etc. All the challenges here make it a perfect showcase for how MediaPipe could help develop a complicated pipeline.

Playing with MediaPipe

MediaPipe provides plenty of code samples for developers to bootstrap with. We started from the Object Detection on Android sample that comes with the project, because of its similarity to the back-end part of our pipeline. It did take us some time to fully understand MediaPipe’s design concepts and associated tools, but with the complete documentation and the great responsiveness of the MediaPipe team, we quickly got up to speed and were able to do most of the things we wanted.

That said, there were a few challenges to overcome on the road to full migration. Our original Moving Object Detection pipeline consumed input frames asynchronously, but MediaPipe binds packets to timestamps, so we could not simply display results out of sync with the frames that produced them. Meanwhile, we needed to gather data through JNI in a specific format. We came up with a workaround that addressed all of these issues under the circumstances, which we describe below.

After wrapping our models and processing logic into calculators and wiring them up, we successfully transformed our existing implementation and created our first MediaPipe Moving Object Detection pipeline, shown in the figure below, running on Android devices:

Fig.2 Moving Object Detection Graph

We do not block the video frames in the main calculation loop; instead, we feed the detection result back as an input stream to draw annotations on the screen. The graph serves two functions: the left chunk is the debug annotation and video frame output module, while the rest of the graph performs the calculations, e.g., low-light detection, motion-triggered detection, cropping of the area of interest, and the detection process itself. In this way, the graph naturally separates into real-time display and asynchronous calculation.
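
As a rough illustration of this separation (plain Python threads rather than MediaPipe’s actual calculator API; `run_detection`, `annotate`, and `display` are hypothetical stand-ins):

```python
# Illustrative sketch only (plain Python, not MediaPipe's calculator API) of
# the split described above: the display loop never blocks on detection, and
# results arrive asynchronously to annotate later frames.
import queue
import threading
import time

def run_detection(frame):          # stand-in for the slow ML model
    time.sleep(0.1)
    return ["box"]

def annotate(frame, boxes):        # stand-in for drawing annotations
    return (frame, boxes)

def display(annotated):            # stand-in for rendering to screen
    pass

frames_for_detection = queue.Queue(maxsize=1)  # at most one pending frame
latest = {"boxes": []}                         # most recent detection result

def detector_loop():
    while True:
        frame = frames_for_detection.get()
        latest["boxes"] = run_detection(frame)  # slow path, off the display loop

threading.Thread(target=detector_loop, daemon=True).start()

def on_camera_frame(frame):
    # Fast path: draw the last known result and show the frame immediately.
    display(annotate(frame, latest["boxes"]))
    try:
        frames_for_detection.put_nowait(frame)  # never block the display loop
    except queue.Full:
        pass                                    # detector busy; skip this frame

for i in range(30):                             # simulate a short camera feed
    on_camera_frame(f"frame-{i}")
```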

As a result, we are able to complete the full detection processing in under 40 ms on a device with a Snapdragon 660 chipset. MediaPipe’s tight integration with TensorFlow Lite gives us the flexibility to gain even more performance by leveraging whatever acceleration techniques (GPU or DSP) are available on the device.

The following figure shows the current implementation working in action:

Fig.3 Moving Object Detection running in Alfred Camera

After getting things running on Android, Desktop GPU (OpenGL ES) emulation was our next target to evaluate. We already use OpenGL ES shaders for some computer vision operations in our pipeline. Being able to develop an algorithm on the desktop and see it working before deploying it to mobile platforms is a huge benefit to us. The feature was not ready when the project was first released, but the MediaPipe team soon added Desktop GPU emulation support for Linux in follow-up releases to make this possible. We used this capability to detect and fix several issues in our graphs before we even put things on mobile devices. Although it currently only works on Linux, it is still a big leap forward for us.

Testing the algorithms and making sure they behave as expected is also a challenge for a camera application. MediaPipe helps us simplify this by using pre-recorded MP4 files as input so we could verify the behavior simply by replaying the files. There is also built-in profiling support that makes it easy for us to locate potential performance bottlenecks.

MediaPipe - Exactly What We Were Looking For

The result of the evaluation and the feedback from our engineering team were very positive and promising:

  1. We are able to design and verify algorithms and complete core implementations directly in the desktop emulation environment, then migrate to the target platforms with minimal effort. As a result, the complexity of debugging on real devices is greatly reduced.
  2. MediaPipe’s modular design of graphs and calculators enables us to split development across engineers and teams, try out new pipeline designs simply by rewiring the graph, and test building blocks independently to ensure quality before we put things together.
  3. MediaPipe’s cross-platform design maximizes reusability and minimizes fragmentation of the implementations we create. Not only is the effort required to support a new platform greatly reduced, but we also worry less about behavioral discrepancies across platforms caused by different interpretations of the spec by platform engineers.
  4. Built-in graphics utilities and profiling support saved us a lot of time that would otherwise have gone into creating those common facilities and getting them right, letting us stay focused on the key designs.
  5. Tight integration with TensorFlow Lite saves a lot of effort for a company like ours that depends heavily on TensorFlow, while still giving us the flexibility to easily interface with other solutions.

After just a few weeks of working with MediaPipe, it had shown a strong capability to fundamentally transform how we develop our products. Without MediaPipe, we could have spent months building the same features without reaching the same level of performance.

Summary

Alfred Camera is designed to bring home security with AI to everyone, and MediaPipe has made achieving that goal significantly easier for our team. From Moving Object Detection to future AI-powered features, we are focused on transforming a basic security camera use case into a smart housekeeper that can provide even more of the context our users care about. With the support of MediaPipe, we have been able to accelerate our development process and bring features to market at an unprecedented speed. Our team is really excited about how MediaPipe can help us progress and discover new possibilities, and we look forward to the enhancements yet to come to the project.

Porsche Taylor puts women in the driver’s seat

Porsche Taylor’s first time riding a motorcycle alone could have gone better. “That first ride, I had absolutely nothing on right: My helmet was too big, I didn’t own a jacket. I might have had on some baseball gloves; everything was just totally upside-down wrong,” she says. “But I wasn’t afraid, it was exhilarating. It was trying something new, being in control. It was that initial feeling of the freedom of the wind.”

Porsche was one of the participants in the Women Riders World Relay, a relay ride that spanned the globe, beginning in February 2019 in Scotland and ending February 2020 in London. WRWR organizers used Google products like Maps, Sheets and Translate to make sure riders not only had constant, up-to-date access to their routes, but also were able to explore and connect with one another along the way. 

Video showing women riding motorcycles across the world.

“The whole team did phenomenally with the amount of time they had to put together the route and figure out the baton passes,” says Porsche. Google Maps was particularly useful for creating Porsche’s route. She and her fellow riders rode from Sept. 25 to Oct. 14, starting in Maine and heading west across the Canadian border, then down through the Southwest to the Mexican border in Texas. They crossed the country, occasionally riding through snowstorms and dropping temperatures. “When you consider the seasons we were riding through, it was a definite challenge for organizers to find routes that weren’t closed down.” 

While Google Maps could help the riders along their journey, it couldn’t do anything about inclement weather. “I quit about four times,” laughs Porsche. “Riding in the cold is not my favorite thing to do. But it was a positive experience all the way around; I don’t know that I would ride in the freezing cold again, but I would do a ride with those women again for sure. I always say the bonds are built on the ground: You’re going to love the folks you ride with to death or you won’t be so cool, and I’m happy to say I love those ladies to death.”

Porsche is vocal about the need for more representation for women in the motor sports community, and she says that things like social media visibility and technical tools like Google Hangouts have helped women who may have felt alone in their shared passion find each other. This idea is in part what inspired her to found Black Girls Ride, a magazine and community originally launched as a place for women of color who ride, which has since grown to include all women. What inspired her to launch Black Girls Ride was the lack of representation she saw when she first started riding—especially in long-distance riding. Traditionally, women filled support roles during these cross-country expeditions, taking a literal backseat to men. In fact, Porsche’s first experience on a bike was sitting behind a man, on the back of her cousin’s bike. “I didn’t so much like the feeling of being a passenger...but I loved the feeling of riding.” 

Thanks to women like Porsche and the WRWR riders, the world of motor sports is changing. “Women have become fearless and bold enough to take long distance biking trips on their own. We’re witnessing the explosion of the all-female long distance ride, where women take it upon themselves to create rides that cater to them instead of being a subset of an all-male ride. It’s where we get to take our power back.” 

Talking about these rides and seeing women taking them via social media and internet communities are crucial, says Porsche, who also mentions using Google Hangouts to connect with riders across the country. “You’re able to see the growth of female riders; women taking these long distance trips and riding solo have always been there—there are women riding today who have been doing this since the 60s—but social media is now shining a light on them.” 

That increased visibility is part of Porsche’s work with Black Girls Ride. “I knew from riding in LA that there were more of us than the community would admit to. There was no representation in mainstream media, even for women who were riding professionally, there was very little to nothing,” Porsche says. Now “women all over the world are connecting to the Black Girls Ride brand. We have readers in London, Nigeria, France, just about every country you can name. I’m motivated by these women.” Black Girls Ride has become more than a publication, hosting trainings, workshops and events. And while both men and women are included, it’s Porsche’s focus to make sure women riders are invited to the table and that they are given the same representation, advertising and sponsorship opportunities. 

Most of all, she just wants women to feel welcome in this world. “It’s always been my goal to create safe spaces for women to ask questions and get the help they need without fear of ridicule,” she says. “And I’m glad I can be a part of creating that.” 

Learn more about the women behind WRWR and how they planned their relay at goo.gle/womenriders.


Source: Google LatLong




Stay updated on travel advisories and airline policies


As the spread of COVID-19 continues, we’re seeing more searches for travel-related information—like travel advisories and trip cancellation policies—so we’re making some changes this week to help you find this information faster.


Stay up to date on travel advisories

When you search on Google for information like flights, hotels, or things to do, you’ll start seeing COVID-19 related travel advisories or restrictions for your destination, with links to relevant information from your country’s travel authority when possible. This information will appear on the search results page, at the top of google.com/travel, and in Google Maps when you search for hotels. 




Understanding airline policies for flight changes and cancellations

In response to COVID-19, many airlines have adjusted change fee and cancellation policies. When you search on Google for flights with a specific airline or go to Google Flights, we’ll direct you to our Help Center article with more information on airline policies. These policies may change, so be sure to click through to the airline’s website for the latest information.



We hope these resources help you stay safe and make the best decisions about travel plans. We'll continue to work on ways to help as the COVID-19 situation evolves.

City of Antwerp and Google to digitize 100,000 books

The world of book publishing today is, in many ways, dignified and highbrow. But it was a different story in the 16th century, about a hundred years after the invention of the printing press. Publishing was a high-risk, high-reward proposition: With the right backing and enough capital investment, an entrepreneur could become wildly successful. But publishing the wrong thing in the wrong place could be disastrous—even fatal, with governments and religious authorities taking a very severe view of what content was fit to print.

No one knew this better than Christophe Plantin, who set up a publishing house in Antwerp, Belgium, in the mid-16th century. Facing religious intolerance and escaping persecution, he helped put the city on the map as a publishing powerhouse. His own printing operation continued in his family for generations. 

Today, Plantin’s home and business are preserved as the Plantin-Moretus Museum, a UNESCO World Heritage Site and home to 25,000 early printed books. Visitors to Antwerp can walk through the rooms where the family lived and worked, and researchers can delve into the collection’s manuscripts, books, archives and original prints.

And now, thanks to a partnership between the City of Antwerp and Google, we will digitize more than 32,000 books from the museum, along with an additional 60,000 books held by the city’s Hendrik Conscience Heritage Library.

In total, more than 100,000 international works published from the sixteenth to the nineteenth century will be made freely accessible in the coming years via Google Books and the library catalogues of both institutions. The scanned volumes, which are no longer subject to copyright, will be full-text searchable, meaning that researchers—as well as members of the public—will be able to search them easily and quickly.

The digitization will start in early 2021, allowing time for both Google and the city to set up the project and establish logistics processes. We expect it will take at least three years, partly because we don’t want to cause too much disruption to library visitors who come to view the materials. The books will be transported securely in batches from Antwerp to our European digitization center. Shortly after each work is scanned, the digital copy will appear on books.google.com, and the libraries will also receive digital copies of their respective works to incorporate into their own catalogues. 

Google Books was launched 15 years ago, with the goal of making all books from around the world digitally available and searchable for everyone. This collaboration with the City of Antwerp adds an incredibly rich collection from the Dutch-speaking world to our collection, and brings us a crucial step closer to achieving our mission.

Source: Search



Massively Scaling Reinforcement Learning with SEED RL



Reinforcement learning (RL) has seen impressive advances over the last few years, as demonstrated by the recent success in solving games such as Go and Dota 2. Models, or agents, learn by exploring an environment, such as a game, while optimizing for specified goals. However, current RL techniques require increasingly large amounts of training to successfully learn even simple games, which makes iterating on research and product ideas computationally expensive and time-consuming.

In “SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference”, we present an RL agent that scales to thousands of machines, which enables training at millions of frames per second, and significantly improves computational efficiency. This is achieved with a novel architecture that takes advantage of accelerators (GPUs or TPUs) at scale by centralizing model inference and introducing a fast communication layer. We demonstrate the performance of SEED RL on popular RL benchmarks, such as Google Research Football, the Arcade Learning Environment, and DeepMind Lab, and show that by using larger models, data efficiency can be increased. The code has been open sourced on GitHub together with examples for running on Google Cloud with GPUs.

Current Distributed Architectures
The previous generation of distributed reinforcement learning agents, such as IMPALA, made use of accelerators specialized for numerical calculations, taking advantage of the speed and efficiency from which (un)supervised learning has benefited for years. The architecture of an RL agent is usually separated into actors and learners. The actors typically run on CPUs and iterate between taking steps in the environment and running inference on the model to predict the next action. Frequently the actor will update the parameters of the inference model, and after collecting a sufficient amount of observations, will send a trajectory of observations and actions to the learner, which then optimizes the model. In this architecture, the learner trains the model on GPUs using input from distributed inference on hundreds of machines.

Example architecture for an earlier generation RL agent, IMPALA. Inference is done on the actors, usually using inefficient CPUs. Updated model parameters are frequently sent from the learner to the actors, increasing bandwidth requirements.

The architecture of RL agents such as IMPALA has a number of drawbacks:
  1. Using CPUs for neural network inference is much less efficient and slower than using accelerators and becomes problematic as models become larger and more computationally expensive.
  2. The bandwidth required for sending parameters and intermediate model states between the actors and learner can be a bottleneck.
  3. Handling two completely different tasks on one machine (i.e., environment rendering and inference) is unlikely to utilize machine resources optimally.
SEED RL Architecture
The SEED RL architecture is designed to solve these drawbacks. With this approach, neural network inference is done centrally by the learner on specialized hardware (GPUs or TPUs), enabling accelerated inference and avoiding the data transfer bottleneck by ensuring that the model parameters and state are kept local. While observations are sent to the learner at every environment step, latency is kept low due to a very efficient network library based on the gRPC framework with asynchronous streaming RPCs. This makes it possible to achieve up to a million queries per second on a single machine. The learner can be scaled to thousands of cores (e.g., up to 2048 on Cloud TPUs) and the number of actors can be scaled to thousands of machines to fully utilize the learner, making it possible to train at millions of frames per second. SEED RL is based on the TensorFlow 2 API and, in our experiments, was accelerated by TPUs.
Overview of the architecture of SEED RL. In contrast to the IMPALA architecture, the actors only take actions in environments. Inference is executed centrally by the learner on accelerators using batches of data from multiple actors.
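The contrast between actor-side inference (IMPALA) and central inference (SEED RL) can be pictured with a toy Python program: in-process queues stand in for the gRPC streams, and a random policy stands in for accelerator-side model inference. This is only an illustration of the data flow, not the open sourced implementation:

```python
# Toy sketch of SEED RL's central inference idea. Queues stand in for gRPC
# streams, and a random policy stands in for accelerator-side inference.
# Each actor only steps its environment; the learner batches observations
# from many actors and runs the model once per batch.
import queue
import threading
import numpy as np

NUM_ACTORS, BATCH = 4, 4
requests = queue.Queue()                                  # (actor_id, observation) pairs
replies = [queue.Queue() for _ in range(NUM_ACTORS)]      # per-actor action channels

def policy(batch_obs: np.ndarray) -> np.ndarray:
    """Stand-in for accelerator-side inference (e.g., a model on a TPU)."""
    return np.random.randint(0, 4, size=len(batch_obs))

def learner_loop():
    while True:
        batch = [requests.get() for _ in range(BATCH)]    # gather a full batch
        ids, obs = zip(*batch)
        for actor_id, action in zip(ids, policy(np.stack(obs))):
            replies[actor_id].put(action)                 # stream each action back

def actor_loop(actor_id: int):
    obs = np.zeros(8)                                     # stand-in for env.reset()
    for _ in range(10):
        requests.put((actor_id, obs))                     # observation goes to the learner...
        action = replies[actor_id].get()                  # ...action comes back; no local model
        obs = np.random.randn(8)                          # stand-in for env.step(action)

threading.Thread(target=learner_loop, daemon=True).start()
actors = [threading.Thread(target=actor_loop, args=(i,)) for i in range(NUM_ACTORS)]
for t in actors:
    t.start()
for t in actors:
    t.join()
```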
In order for this architecture to be successful, two state-of-the-art algorithms are integrated into SEED RL. The first is V-trace, a policy gradient-based method, first introduced with IMPALA. In general, policy gradient-based methods predict an action distribution from which an action can be sampled. However, because the actors and the learner execute asynchronously in SEED RL, the policy of actors is slightly behind the policy of the learner, i.e., they become off-policy. The usual policy gradient-based methods are on-policy, meaning that they have the same policy for actors and learner, and suffer from convergence and numerical issues in off-policy settings. V-trace is an off-policy method and thus works well in the asynchronous SEED RL architecture.
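
For reference, the V-trace target defined in the IMPALA paper corrects for this policy lag with truncated importance sampling. The $n$-step target for the value at state $x_s$ is

$$v_s = V(x_s) + \sum_{t=s}^{s+n-1} \gamma^{t-s} \left( \prod_{i=s}^{t-1} c_i \right) \delta_t V, \qquad \delta_t V = \rho_t \big( r_t + \gamma V(x_{t+1}) - V(x_t) \big),$$

where $\rho_t = \min\!\big(\bar\rho,\ \tfrac{\pi(a_t \mid x_t)}{\mu(a_t \mid x_t)}\big)$ and $c_i = \min\!\big(\bar c,\ \tfrac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)}\big)$ are importance weights truncated at $\bar\rho$ and $\bar c$, $\mu$ is the (slightly stale) actor policy, and $\pi$ is the learner policy. The truncation levels trade off the bias and variance of the off-policy correction.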

The second algorithm is R2D2, a Q-learning method that selects an action based on the predicted future value of that action using recurrent distributed replay. This approach allows the Q-learning algorithm to be run at scale, while still allowing the use of recurrent neural networks that can predict future values based on the information of all past frames in an episode.

Experiments
SEED RL is benchmarked on the commonly used Arcade Learning Environment, DeepMind Lab environments, and on the recently released Google Research Football environment.
Frames per second comparing IMPALA and various configurations of SEED RL on DeepMind Lab. SEED RL achieves 2.4M frames per second using 4,160 CPUs. Assuming the same speed, IMPALA would need 14,000 CPUs.
On DeepMind Lab, we achieve 2.4 million frames per second with 64 Cloud TPU cores, which represents an improvement of 80x over the previous state-of-the-art distributed agent, IMPALA. This results in a significant speed-up in wall-clock time and computational efficiency. IMPALA requires 3-4x as many CPUs as SEED RL for the same speed.
Episode return (i.e., the sum of rewards) over time on the DeepMind Lab game “explore_goal_locations_small” using IMPALA and SEED RL. With SEED RL, the time to train is significantly reduced.
With an architecture optimized for use on modern accelerators, it’s natural to increase the model size in an attempt to increase data efficiency. We show that by increasing the size of the model and the input resolution, we are able to solve a previously unsolved Google Research Football task, “Hard”.
The score of different architectures on the Google Research Football “Hard” task. We show that by using a larger input resolution and a larger model, the score is improved, and with more training, the model can significantly outperform the built-in AI.
Additional details are provided in the paper, including our results on the Arcade Learning Environment. We believe SEED RL and the results presented demonstrate that reinforcement learning has once again caught up with the rest of the deep learning field in terms of taking advantage of accelerators.

Acknowledgements
This project was done in collaboration with Raphaël Marinier, Piotr Stanczyk, Ke Wang, Marcin Andrychowicz and Marcin Michalski. We would also like to thank Tom Small for the visualizations.

Source: Google AI Blog