Tag Archives: AI

Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we're updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on Edge TPU and model design requirements to take full advantage of the Edge TPU at runtime.

We're also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.
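
As a concrete example, the sketch below loads two compiled models with the TensorFlow Lite Python runtime and the Edge TPU delegate and runs them in parallel from separate threads. The model file names and the delegate's device-selection option are placeholder assumptions for illustration, not part of the API announcement above.

# Sketch: run two compiled models in parallel, one per Edge TPU (illustrative).
# Model paths and the 'device' delegate option below are placeholder assumptions.
import threading
import numpy as np
import tflite_runtime.interpreter as tflite

def run_model(model_path, device):
    # Bind each interpreter to one Edge TPU via the delegate's device option.
    delegate = tflite.load_delegate('libedgetpu.so.1', {'device': device})
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=[delegate])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
    interpreter.invoke()
    print(model_path, interpreter.get_tensor(out['index']).shape)

threads = [
    threading.Thread(target=run_model, args=('model_a_edgetpu.tflite', ':0')),
    threading.Thread(target=run_model, args=('model_b_edgetpu.tflite', ':1')),
]
for t in threads:
    t.start()
for t in threads:
    t.join()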

In addition to these updates, we're adding new capabilities to Coral with the release of the Environmental Sensor Board. It's an accessory board for the Coral Dev Board (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The on-board secure element also allows for easy communication with Google Cloud IoT Core.

The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone - including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We're excited to keep evolving the Coral platform, please keep sending us feedback at coral-support@google.com.

You can see the full release notes on the Coral site.

An external advisory council to help advance the responsible development of AI

Last summer we announced Google’s AI Principles, an ethical charter to guide the responsible development and use of AI in our research and products. To complement the internal governance structure and processes that help us implement the principles, we’ve established an Advanced Technology External Advisory Council (ATEAC). This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work. We look forward to engaging with ATEAC members regarding these important issues and are honored to announce the members of the inaugural Council:

  • Alessandro Acquisti, a leading behavioral economist and privacy researcher. He’s a Professor of Information Technology and Public Policy at the Heinz College, Carnegie Mellon University and the PwC William W. Cooper Professor of Risk and Regulatory Innovation.
  • Bubacarr Bah, an expert in applied and computational mathematics. He’s a Senior Researcher, designated the German Research Chair of Mathematics with specialization in Data Science, at the African Institute for Mathematical Sciences (AIMS) South Africa and an Assistant Professor in the Department of Mathematical Sciences at Stellenbosch University. 
  • De Kai, a leading researcher in natural language processing, music technology and machine learning. He’s Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology, and Distinguished Research Scholar at Berkeley's International Computer Science Institute.
  • Dyan Gibbens, an expert in industrial engineering and unmanned systems. She’s CEO of Trumbull, a Forbes Top 25 veteran-founded startup focused on automation, data and environmental resilience in energy and defense.
  • Joanna Bryson, an expert in psychology and AI, and a longtime leader in AI ethics. She’s an Associate Professor in the Department of Computing at the University of Bath. She has consulted for a number of companies on AI, notably at LEGO researching child-oriented programming techniques for the product that became LEGO Mindstorms.
  • Kay Coles James, a public policy expert with extensive experience working at the local, state and federal levels of government. She’s currently President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.
  • Luciano Floridi, a leading philosopher and expert in digital ethics. He’s Professor of Philosophy and Ethics of Information at the University of Oxford, where he directs the Digital Ethics Lab of the Oxford Internet Institute, Professorial Fellow of Exeter College and Turing Fellow and Chair of the Data Ethics Group of the Alan Turing Institute.
  • William Joseph Burns, a foreign policy expert and diplomat. He previously served as U.S. deputy secretary of state, and retired from the U.S. Foreign Service in 2014 after a 33-year diplomatic career. He’s currently President of the Carnegie Endowment for International Peace, the oldest international affairs think tank in the United States.

This inaugural Council (full bios here) will serve over the course of 2019, holding four meetings starting in April. We hope this effort will inform both our own work and the broader technology sector. In addition to encouraging members to share generalizable learnings in their ongoing activities, we plan to publish a report summarizing the discussions. Council members represent their individual perspectives, and do not speak for their institutions.

We recognize that responsible development of AI is a broad area with many stakeholders. In addition to consulting with the experts on ATEAC, we’ll continue to exchange ideas and gather feedback from partners and organizations around the world.

How El País used AI to make their comments section less toxic

At El País, our vision for the perfect comments section was a place where readers would provide input, insight and tips on an investigative story, add knowledge about niche topics, double-check facts and elevate the conversation to a different level. While the internet has brought amazing benefits, it didn’t deliver the utopia we—and others—had hoped for within the comments section. Around 2015, trolls, toxic comments, spam, insults and even threats took over, causing publishers to re-evaluate investing in this section of the online paper. No one seemed able to fix this broken system, and several sites either limited the number of articles open to comments or shut them down completely.

We also thought about closing down comments at El País, but ultimately let them be. That was until last year, when the Google News Initiative contacted us to talk about Perspective API, a free tool developed by Jigsaw that uses a machine learning model trained on comments labeled as toxic by human moderators. At that point Perspective API was available in English, but the aim was to use the more than 300,000 comments our readers write every month to train the model in Spanish. Earlier this year, we partnered with Jigsaw to analyze our vast trove of public comments to understand how to spot toxicity in Spanish. We worked closely with the Jigsaw team to test the models and provided feedback to improve the overall accuracy of the tool.

Now, when someone tries to post a toxic comment on our site we’ll show them a message in real time suggesting they make changes or rewrite it so that it’ll pass our moderation system. Since we put this system in place, the average toxicity of the comments has gone down seven percent and the number of comments has gone up 19%—leading us to suspect that the comments section is a nicer place and one our readers want to engage in. We’ve also improved the moderation process by sending the more toxic comments to experienced moderators and the less toxic to the less experienced ones.


When someone posts a comment that may be perceived as toxic on El País’ site, we show a message in real time suggesting they make changes or re-write it.
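
For readers curious about the mechanics, the sketch below shows roughly how a comment can be scored against Perspective API's comments:analyze endpoint and compared with a moderation threshold before it is posted. The API key, the threshold value and the use of the requests library are illustrative assumptions, not El País' production code.

# Sketch: score a Spanish comment with Perspective API (illustrative only).
import requests

API_KEY = 'YOUR_API_KEY'  # placeholder
URL = ('https://commentanalyzer.googleapis.com/v1alpha1/'
       'comments:analyze?key=' + API_KEY)

def toxicity(comment_text):
    payload = {
        'comment': {'text': comment_text},
        'languages': ['es'],
        'requestedAttributes': {'TOXICITY': {}},
    }
    response = requests.post(URL, json=payload).json()
    return response['attributeScores']['TOXICITY']['summaryScore']['value']

score = toxicity('Este es un comentario de ejemplo.')
if score > 0.8:  # example threshold, chosen for illustration
    print('Ask the reader to rewrite the comment before posting.')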

Additionally, we’re including the toxicity of the articles in our data warehouse (a system we use for data analysis). Alongside the sentiment of the articles, we can now check if certain authors are associated with toxic comments, if the sentiment of the articles influences toxicity, or if some commenters always have high toxicity across all their comments. With this data, we aim to improve the conversation by not opening comments on certain articles we know won’t generate a positive discussion, developing a new badges system, highlighting top comments of the week in new products like a newsletter and even having journalists engage in certain conversations to help raise the level of debate.

Perspective API has rekindled our faith in the comments section and demonstrated a real value to our publication and our readers. This initial Spanish version of Perspective is available to anyone today and will continue to be developed so that any Spanish publisher can use Perspective in Spanish, for free.


Honoring J.S. Bach with our first AI-powered Doodle

Ever wondered what Johann Sebastian Bach would sound like if he rocked out? You can find out by exploring today’s AI-powered Google Doodle, which honors Bach’s birthday and legacy as one of the greatest composers of all time. A musician and composer during the Baroque period of the 18th century, Bach produced hundreds of compositions including cantatas, concertos, suites and chorales. In today’s Doodle, you can create your own melody, and through the magic of machine learning, the Doodle will harmonize your melody in Bach’s style. You can also explore inside the Doodle to see how the model Bach-ifies familiar tunes, or how your new collaboration might sound in a more modern rock style.

Today’s Doodle is the result of a collaboration between the Doodle, Magenta and PAIR teams at Google. The Magenta team aims to help people make music and art using machine learning, and PAIR produces tools or experiences to make machine learning enjoyable for everyone.

The first step in creating an AI-powered Doodle was building a machine learning model to power it. Machine learning is the process of teaching a computer to come up with its own answers by showing it a lot of examples, instead of giving it a set of rules to follow (as is done in traditional computer programming). Anna Huang, an AI Resident on Magenta, developed Coconet, a model that can be used in a wide range of musical tasks—such as harmonizing melodies, creating smooth transitions between disconnected fragments of music and composing from scratch (check out more of these technical details in today’s Magenta blog post).

Next, we personalized the model to match Bach’s musical style. To do this, we trained Coconet on 306 of Bach’s chorale harmonizations. His chorales always have four voices: each carries its own melodic line, creating a rich harmonic progression when played together. This concise structure makes the melodic lines good training data for a machine learning model. So when you create a melody of your own in the Doodle, the model harmonizes that melody in Bach's specific style.
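
As a rough illustration of how a model like Coconet can fill in the other three voices around a fixed melody, here is a conceptual sketch: the piece is a voice-by-time grid, the melody row stays fixed, and the model repeatedly proposes notes for the remaining cells until they fit together. The model object and its sampling method are hypothetical placeholders, not the actual Coconet code; see the Magenta blog post for the real details.

# Conceptual sketch of melody harmonization by iterative in-filling.
# model.sample_pitch(grid, voice, step) is a hypothetical placeholder.
import random

NUM_VOICES, NUM_STEPS = 4, 32          # soprano, alto, tenor, bass x time steps
MELODY_VOICE = 0

def harmonize(model, melody):
    # grid[v][t] holds a pitch, or None for "not yet decided".
    grid = [[None] * NUM_STEPS for _ in range(NUM_VOICES)]
    grid[MELODY_VOICE] = list(melody)   # the user's melody stays fixed

    # Repeatedly re-sample the other voices, conditioning on everything else,
    # so the harmony gradually becomes mutually consistent (Gibbs-style).
    for _ in range(100):
        voice = random.randrange(1, NUM_VOICES)
        step = random.randrange(NUM_STEPS)
        grid[voice][step] = model.sample_pitch(grid, voice, step)
    return grid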

Beyond the artistic and machine learning elements of the Doodle, we needed a lot of servers in order to make sure people around the world could use the Doodle. Historically, machine learning has been run on servers, which means that info is sent from a person’s computer to data centers, and then the results are sent back to the computer. Using this same approach for the Bach Doodle would have generated a lot of back-and-forth traffic.

To make this work, we used PAIR’s TensorFlow.js, which allows machine learning to happen entirely within an internet browser. However, for cases where someone’s computer or device might not be fast enough to run the Doodle using TensorFlow.js, the machine learning model is run on Google’s new Tensor Processing Units (TPUs), a way of quickly handling machine learning tasks in data centers. Today’s Doodle is the first one ever to use TPUs in this way.

Head over to today’s Doodle and find out what your collaboration with the famous composer sounds like!

This is the Future of Finance

Posted by Roy Glasberg, Head of Launchpad

Launchpad's mission is to accelerate innovation and to help startups build world-class technologies by leveraging the best of Google - its people, network, research, and technology.

In September 2018, the Launchpad team welcomed ten of the world's leading FinTech startups to join their accelerator program, helping them fast-track their application of advanced technology. Today, March 15th, we will see this cohort graduate from the program at the Launchpad team's inaugural event - The Future of Finance - a global discussion on the impact of applied ML/AI on the finance industry. These startups are ensuring that everyone has relevant insights at their fingertips and that all people, no matter where they are, have access to equitable money, banking, loans, and marketplaces.

Tune into the event from wherever you are via the livestream link

The Graduating Class of Launchpad FinTech Accelerator San Francisco'19

  • Alchemy (USA), bridging blockchain and the real world
  • Axinan (Singapore), providing smart insurance for the digital economy
  • Aye Finance (India), transforming financing in India
  • Celo (USA), increasing financial inclusion through a mobile-first cryptocurrency
  • Frontier Car Group (Germany), investing in the transformation of used-car marketplaces
  • GO-JEK (Indonesia), improving the welfare and livelihoods of informal sectors
  • GuiaBolso (Brazil), improving the financial lives of Brazilians
  • JUMO (South Africa), creating a transparent, fair money marketplace for mobile users to access loans
  • m.Paani (India), (em)powering local retailers and the next billion users in India
  • Starling Bank (UK), improving financial health with a 100% mobile-only bank

Since joining the accelerator, these startups have made great strides and are going from strength to strength. Some recent announcements from this cohort include:

  • JUMO has announced the launch of Opportunity Co, a 500M fund for credit where all the profits go back to the customers.
  • Aye Finance has just closed a $30M Series D equity round.
  • Starling Bank has created 150 new jobs in Southampton, received a £100M grant from a fund aimed at increasing competition and innovation in the British banking sector, and closed a £75M fundraise.
  • GuiaBolso ran a campaign to pay the bills of some of its users (the beginning of the year in Brazil is a time of high expenses and debt) and is having a significant impact on credit: in 80% of cases, loan interest rates were lower than those offered by traditional banks.

We look forward to following the success of all our participating founders as they continue to make a significant impact on the global economy.

Want to know more about the Launchpad Accelerator? Visit our site, stay updated on developments and future opportunities by subscribing to the Google Developers newsletter and visit The Launchpad Blog.

The creative coder adding color to machine learning

Machine learning is already revolutionizing the way we solve problems across almost every industry and walk of life, from photo organization to cancer detection and flood prediction. But outside the tech world, most people don’t know what an algorithm is or how it works, let alone how they might start training one of their own.

Parisian coder Emil Wallner wants to change that. Passionate about making machine learning easier to get into, he came up with an idea that fused his fascination with machine learning with a love of art. He built a simple, playful program that learns how to add color to black-and-white photos.


Emil used TensorFlow, Google’s open-source machine learning platform, to build the simplest algorithm he could, forcing himself to simplify it until it was less than 100 lines of code.

The algorithm is programmed to study millions of color photos and use them to learn what color the objects of the world should be. It then hunts for similar patterns in a black-and-white photo. Over time, it learns that a black-and-white object shaped like a goldfish should very likely be gold.
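
For readers who want a feel for what such a compact colorizer can look like, below is a minimal Keras sketch in the same spirit: the network takes the lightness (L) channel of a Lab-encoded image and predicts the two color (ab) channels. It is an illustrative toy under those assumptions, not Emil's exact code.

# Toy colorization network: lightness in, color channels out (illustrative).
import tensorflow as tf
from tensorflow.keras import layers

def build_colorizer():
    inp = layers.Input(shape=(None, None, 1))          # L channel, any image size
    x = layers.Conv2D(64, 3, activation='relu', padding='same')(inp)
    x = layers.Conv2D(64, 3, activation='relu', padding='same', strides=2)(x)
    x = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(2, 3, activation='tanh', padding='same')(x)  # ab in [-1, 1]
    return tf.keras.Model(inp, out)

model = build_colorizer()
model.compile(optimizer='adam', loss='mse')
# model.fit(l_channels, ab_channels, ...)  # trained on many color photos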

The more distinctive the object, the easier the task. For example, bananas are easy because they’re almost always yellow and have a unique shape. Moons and planets can be more confusing because of similarities they share with each other, such as their shape and dark surroundings. In these instances, just like a child learning about the world for the first time, the algorithm needs a little more information and training.


Emil’s algorithm brings the machine learning process to life in a way that makes it fun and visual. It helps us to understand what machines find easy, what they find tricky and how tweaks to the code or dataset affect results.

Thousands of budding coders and artists have now downloaded Emil’s code and are using it to understand the fundamentals of machine learning, without feeling like they’re in a classroom.

“Even the mistakes are beautiful, so it’s a satisfying algorithm to learn with,” Emil says.

Introducing Class II of Launchpad Accelerator India

In December 2018, we opened applications for Class II of Launchpad Accelerator India and are thrilled to announce the start of the new class.
Similar to Class I, these 10 incredible startups will get access to the best of Google -- including mentorship from Google teams and industry experts, free support, cloud credits, and more. These startups will undergo an intensive 1-week mentorship bootcamp in March, followed by more engagements in April and May.  


At the bootcamp, they will meet with mentors both from Google and subject matter experts from the industry to set their goals for the upcoming three months. During the course of the program, the startups will receive insights and support on advanced technologies such as ML, in-depth design sprints for specifically identified challenges, guidance on focused tech projects, networking opportunities at industry events, and much more.


The class kicks off today in Bangalore.


Meet the 10 startups of Class II:


(1) Opentalk Pvt Ltd: An app to talk to new people around the world, become a better speaker and make new friends
(2) THB: Helping healthcare providers organize and standardize healthcare information to drive clinical and commercial analytical applications and use cases
(3) Perceptiviti Data Solutions: An AI platform for insurance claim flagging, payment integrity, and fraud and abuse management
(4) DheeYantra: A cognitive conversational AI for Indian vernacular languages
(5) Kaleidofin: Customized financial solutions that combine multiple financial products such as savings, credit, and insurance in intuitive ways, to help customers achieve their real-life goals
(6) FinancePeer: A P2P lending company that connects lenders with borrowers online
(7) SmartCoin: An app for providing credit access to the vastly underserved lower- and middle-income segments through advanced AI/ML models
(8) HRBOT: Using AI and video analytics to find employable candidates in tier 2 and tier 3 cities, remotely.
(9) Savera.ai: A service that remotely maps your roof and helps you make an informed decision about having a solar panel, followed by chatbot-based support to help you learn about solar tech while enabling connections to local service providers
(10) Adiuvo Diagnostics: A rapid wound infection assessment and management device

By Paul Ravindranath, Program Manager, Launchpad Accelerator India

Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google's been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we're expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps, in a power-efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you're ready to scale to production, we offer the SoM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we'll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we're sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.
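
As an illustration of the quantization step, here is a minimal sketch using TensorFlow's post-training integer quantization; the saved-model path, input shape and representative dataset are placeholders, and the exact converter options have varied across TensorFlow releases. The fully quantized .tflite file it produces is what you then hand to the Edge TPU compiler.

# Sketch: produce a fully integer-quantized TF Lite model for the Edge TPU compiler.
# Paths, the input shape and the representative dataset below are placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    for _ in range(100):
        # Yield sample inputs with the model's real shape and preprocessing.
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('my_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())
# model_quant.tflite is then compiled with the Edge TPU compiler toolchain.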

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.

Doing our part to share open data responsibly

This past weekend marked Open Data Day, an annual celebration of making data freely available to everyone. Communities around the world organized events, and we’re taking a moment here at Google to share our own perspective on the importance of open data. More accessible data can meaningfully help people and organizations, and we’re doing our part by opening datasets, providing access to APIs and aggregated product data, and developing tools to make data more accessible and useful.

Responsibly opening datasets

Sharing datasets is increasingly important as more people adopt machine learning through open frameworks like TensorFlow. We’ve released over 50 open datasets for other developers and researchers to use. These include YouTube 8M, a corpus of annotated videos used externally for video understanding; the HDR+ Burst Photography dataset, which helps others experiment with the technology that powers Pixel features like Portrait Mode; and Open Images, along with the Open Images Extended dataset which increases photo diversity.

Just because data is open doesn’t mean it will be useful, however. First, a dataset needs to be cleaned so that any insights developed from it are based on well-structured and accurate examples. Cleaning a large dataset is no small feat; before opening up our own, we spend hundreds of hours standardizing data and validating quality. Second, a dataset should be shared in a machine-readable format that’s easy for others to use, such as JSON rather than PDF. Finally, consider whether the dataset is representative of the intended content. Even if data is usable and representative of some situations, it may not be appropriate for every application. For instance, if a dataset contains mostly North American animal images, it may help you classify a deer, but not a giraffe. Tools like Facets can help you analyze the makeup of a dataset and evaluate the best ways to put it to use. We’re also working to build more representative datasets through interfaces like the Crowdsource application. To guide others’ use of your own dataset, consider publishing a data card which denotes authorship, composition and suggested use cases (here’s an example from our Open Images Extended release).

Making data findable and useful

It’s not enough to just make good data open, though; it also needs to be findable. Researchers, developers, journalists and other curious data-seekers often struggle to locate data scattered across the web’s thousands of repositories. Our Dataset Search tool helps people find data sources wherever they’re hosted, as long as the data is described in a way that search engines can locate. Since the tool launched a few months ago, we’ve seen the number of unique datasets on the platform double to 10 million, including contributions from the U.S. National Oceanic and Atmospheric Administration (NOAA), the National Institutes of Health (NIH), the Federal Reserve, the European Data Portal, the World Bank and government portals from every continent.
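
In practice, describing data so that search engines can locate it usually means publishing schema.org/Dataset markup on the dataset's landing page. The sketch below generates such JSON-LD from Python; every field value is a placeholder.

# Sketch: emit schema.org/Dataset JSON-LD so crawlers can index a dataset page.
# All values below are placeholders.
import json

dataset_metadata = {
    '@context': 'https://schema.org',
    '@type': 'Dataset',
    'name': 'Example Weather Observations',
    'description': 'Hourly temperature and humidity readings for 2018.',
    'license': 'https://creativecommons.org/licenses/by/4.0/',
    'creator': {'@type': 'Organization', 'name': 'Example Org'},
    'distribution': [{
        '@type': 'DataDownload',
        'encodingFormat': 'CSV',
        'contentUrl': 'https://example.com/weather_2018.csv',
    }],
}

print('<script type="application/ld+json">')
print(json.dumps(dataset_metadata, indent=2))
print('</script>')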

What makes data useful is how easily it can be analyzed. Though there’s more open data today, data scientists spend significant time analyzing it across multiple sources. To help solve that problem, we’ve created Data Commons. It’s a knowledge graph of data sources that lets users  treat various datasets of interest—regardless of source and format—as if they are all in a single local database. Anyone can contribute datasets or build applications powered by the infrastructure. For people using the platform, that means less time engineering data and more time generating insights. We’re already seeing exciting use cases of Data Commons. In one UC Berkeley data science course taught by Josh Hug and Fernando Perez, students used Census, CDC and Bureau of Labor Statistics data to correlate obesity levels across U.S. cities with other health and economic factors. Typically, that analysis would take days or weeks; using Data Commons, students were able to build high-fidelity models in less than an hour. We hope to partner with other educators and researchers—if you’re interested, reach out to collaborate@datacommons.org.

Balancing trade-offs

There are trade-offs to opening up data, and we aim to balance various sensitivities with the potential benefits of sharing. One consideration is that broad data openness can facilitate uses that don’t align with our AI Principles. For instance, we recently made synthetic speech data available only to researchers participating in the 2019 ASVspoof Challenge, to ensure that the data can be used to develop tools to detect deepfakes, while limiting misuse.

Extreme data openness can also risk exposing user or proprietary information, causing privacy breaches or threatening the security of our platforms. We allow third party developers to build on services like Maps, Gmail and more via APIs, so they can build their own products while user data is kept safe. We also publish aggregated product data like Search Trends to share information of public interest in a privacy-preserving way.

While there can be benefits to using sensitive data in controlled and principled ways, like predicting medical conditions or events, it’s critical that safeguards are in place so that training machine learning models doesn’t compromise individual privacy. Emerging research provides promising new avenues to learn from sensitive data. One is Federated Learning, a technique for training global ML models without data ever leaving a person’s device, which we’ve recently made available open-source with TensorFlow Federated. Another is Differential Privacy, which can offer strong guarantees that training data details aren’t inappropriately exposed in ML models. Additionally, researchers are experimenting more and more with using small training datasets and zero-shot learning, as we demonstrated in our recent prostate cancer detection research and work on Google Translate.
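
To make the federated learning idea concrete, here is a small, library-agnostic sketch of federated averaging on a toy linear model: each client computes an update on its own private data, and the server only ever averages model weights, never sees the raw data. It is a conceptual illustration, not the TensorFlow Federated API.

# Conceptual sketch of federated averaging: raw data never leaves the client.
import numpy as np

def client_update(global_weights, local_x, local_y, lr=0.1):
    # One gradient step of least-squares regression on the client's private data.
    w = global_weights.copy()
    grad = local_x.T @ (local_x @ w - local_y) / len(local_y)
    return w - lr * grad

def federated_round(global_weights, clients):
    # The server sees only model weights, never local_x or local_y.
    updates = [client_update(global_weights, x, y) for x, y in clients]
    return np.mean(updates, axis=0)

# Toy usage: three clients, each holding private data from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    x = rng.normal(size=(50, 2))
    clients.append((x, x @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0]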

We hope that our efforts will help people access and learn from clean, useful, relevant and privacy-preserving open data from Google to solve the problems that matter to them. We also encourage other organizations to consider how they can contribute—whether by opening their own datasets, facilitating usability by cleaning them before release, using schema.org metadata standards to increase findability, enhancing transparency through data cards or considering trade-offs like user privacy and misuse. To everyone who has come together over the past week to celebrate open data: we look forward to seeing what you build.

Long-Range Robotic Navigation via Automated Reinforcement Learning



In the United States alone, there are 3 million people with a mobility impairment that prevents them from ever leaving their homes. Service robots that can autonomously navigate long distances can improve the independence of people with limited mobility, for example, by bringing them groceries, medicine, and packages. Research has demonstrated that deep reinforcement learning (RL) is good at mapping raw sensory input to actions, e.g. learning to grasp objects and for robot locomotion, but RL agents usually lack the understanding of large physical spaces needed to safely navigate long distances without human help and to easily adapt to new spaces.

In three recent papers, “Learning Navigation Behaviors End-to-End with AutoRL,” “PRM-RL: Long-Range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning”, and “Long-Range Indoor Navigation with PRM-RL”, we investigate easy-to-adapt robotic autonomy by combining deep RL with long-range planning. We train local planner agents to perform basic navigation behaviors, traversing short distances safely without collisions with moving obstacles. The local planners take noisy sensor observations, such as readings from a 1D lidar that measures distances to obstacles, and output linear and angular velocities for robot control. We train the local planner in simulation with AutoRL, a method that automates the search for RL rewards and neural network architectures. Despite their limited range of 10-15 meters, the local planners transfer well to both real robots and to new, previously unseen environments. This enables us to use them as building blocks for navigation in large spaces. We then build a roadmap, a graph where nodes are locations and edges connect two nodes only if the local planner, which mimics real robots well with its noisy sensors and control, can traverse between them reliably.

Automating Reinforcement Learning (AutoRL)
In our first paper, we train the local planners in small, static environments. However, training with standard deep RL algorithms, such as Deep Deterministic Policy Gradient (DDPG), poses several challenges. For example, the true objective of the local planners is to reach the goal, which represents a sparse reward. In practice, this requires researchers to spend significant time iterating and hand-tuning the rewards. Researchers must also make decisions about the neural network architecture, without clear accepted best practices. And finally, algorithms like DDPG are unstable learners and often exhibit catastrophic forgetting.

To overcome those challenges, we automate deep RL training. AutoRL is an evolutionary automation layer around deep RL that searches for a reward and neural network architecture using large-scale hyperparameter optimization. It works in two phases: reward search and neural network architecture search. During the reward search, AutoRL trains a population of DDPG agents concurrently over several generations, each with a slightly different reward function optimizing for the local planner’s true objective: reaching the destination. At the end of the reward search phase, we select the reward that leads the agent to its destination most often. In the neural network architecture search phase, we repeat the process, this time using the selected reward and tuning the network layers, optimizing for the cumulative reward.
Automating reinforcement learning with reward and neural network architecture search.
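
In pseudocode form, the two-phase search described above looks roughly like the sketch below; the callables it expects (sampling, DDPG training, evaluation and mutation) are hypothetical placeholders for the real components.

# Conceptual sketch of AutoRL's evolutionary, two-phase search (illustration only).
# sample_fn, train_fn, score_fn and mutate_fn are hypothetical placeholders for
# reward/architecture sampling, DDPG training, simulated evaluation and mutation.
import random

def evolve(sample_fn, train_fn, score_fn, mutate_fn,
           generations=10, population=100, elite_frac=0.1):
    candidates = [sample_fn() for _ in range(population)]
    for _ in range(generations):
        # Train an agent per candidate, then rank candidates by the agent's score.
        ranked = sorted(candidates, key=lambda c: score_fn(train_fn(c)),
                        reverse=True)
        elite = ranked[:max(1, int(elite_frac * population))]
        # Keep the best candidates and mutate them to form the next generation.
        candidates = elite + [mutate_fn(random.choice(elite))
                              for _ in range(population - len(elite))]
    return candidates[0]

# Phase 1 (reward search): score each candidate reward by how often the trained
# agent reaches its goal.  Phase 2 (architecture search): fix the chosen reward
# and score candidate networks by cumulative reward instead, e.g.:
#   best_reward = evolve(sample_reward, train_with_default_net, success_rate, mutate_reward)
#   best_net = evolve(sample_net, lambda n: train_ddpg(best_reward, n), episode_return, mutate_net)
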
However, this iterative process means AutoRL is not sample efficient. Training one agent takes 5 million samples; AutoRL training over 10 generations of 100 agents requires 5 billion samples, equivalent to 32 years of training! The benefit is that AutoRL automates the manual tuning process, and DDPG no longer experiences catastrophic forgetting. Most importantly, the resulting policies are higher quality: AutoRL policies are robust to sensor, actuator and localization noise, and generalize well to new environments. Our best policy is 26% more successful than other navigation methods across our test environments.
AutoRL (red) success over short distances (up to 10 meters) in several unseen buildings. Compared to hand-tuned DDPG (dark-red), artificial potential fields (light blue), dynamic window approach (blue), and behavior cloning (green).
AutoRL local planner policy transfer to robots in real, unstructured environments
While these policies only perform local navigation, they are robust to moving obstacles and transfer well to real robots, even in unstructured environments. Though they were trained in simulation with only static obstacles, they can also handle moving objects effectively. The next step is to combine the AutoRL policies with sampling-based planning to extend their reach and enable long-range navigation.

Achieving Long Range Navigation with PRM-RL
Sampling-based planners tackle long-range navigation by approximating robot motions. For example, probabilistic roadmaps (PRMs) sample robot poses and connect them with feasible transitions, creating roadmaps that capture valid movements of a robot across large spaces. In our second paper, which won Best Paper in Service Robotics at ICRA 2018, we combine PRMs with hand-tuned RL-based local planners (without AutoRL) to train robots once locally and then adapt them to different environments.

First, for each robot we train a local planner policy in a generic simulated training environment. Next, we build a PRM with respect to that policy, called a PRM-RL, over a floor plan for the deployment environment. The same floor plan can be used for any robot we wish to deploy in the building, in a one-time setup per robot and environment.

To build a PRM-RL, we connect sampled nodes only if the RL-based local planner, which represents robot noise well, can reliably and consistently navigate between them. This is done via Monte Carlo simulation. The resulting roadmap is tuned to both the abilities and geometry of the particular robot. Roadmaps for robots with the same geometry but different sensors and actuators will have different connectivity. Since the agent can navigate around corners, nodes without clear line of sight can be included, whereas nodes near walls and obstacles are less likely to be connected into the roadmap because of sensor noise. At execution time, the RL agent navigates from roadmap waypoint to waypoint.
Roadmap being built with 3 Monte Carlo simulations per randomly selected node pair.
The largest map was 288 meters by 163 meters and contained almost 700,000 edges, collected over 4 days using 300 workers in a cluster and requiring 1.1 billion collision checks.
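
The edge-acceptance check described above can be sketched as follows. The rollout function is a hypothetical placeholder for one noisy simulated episode of the local planner, and the 90%-success-over-20-trials criterion mirrors the evaluation setting described later in this post.

# Sketch of PRM-RL edge acceptance: connect two sampled nodes only if the RL
# local planner reaches the goal reliably across noisy Monte Carlo rollouts.
# rollout(policy, start, goal) is a hypothetical placeholder that runs one
# simulated episode (with sensor and actuator noise) and returns True on success.

def should_connect(policy, start, goal, rollout, trials=20, min_success=0.9):
    successes = sum(rollout(policy, start, goal) for _ in range(trials))
    return successes / trials >= min_success

def build_roadmap(candidate_pairs, policy, rollout):
    # The roadmap keeps only the edges this particular robot can actually traverse.
    return [(start, goal) for start, goal in candidate_pairs
            if should_connect(policy, start, goal, rollout)]
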
The third paper makes several improvements over the original PRM-RL. First, we replace the hand-tuned DDPG with AutoRL-trained local planners, which results in improved long-range navigation. Second, it adds Simultaneous Localization and Mapping (SLAM) maps, which robots use at execution time, as a source for building the roadmaps. Because SLAM maps are noisy, this change closes the “sim2real gap,” a phenomenon in robotics where simulation-trained agents significantly underperform when transferred to real robots. Our simulated success rates are the same as in on-robot experiments. Last, we added distributed roadmap building, resulting in very large-scale roadmaps containing up to 700,000 nodes.

We evaluated the method using our AutoRL agent, building roadmaps using the floor maps of offices up to 200x larger than the training environments, accepting edges with at least 90% success over 20 trials. We compared PRM-RL to a variety of different methods over distances up to 100m, well beyond the local planner range. PRM-RL had 2 to 3 times the rate of success over baseline because the nodes were connected appropriately for the robot’s capabilities.
Success rates for navigation over 100 meters in several buildings. First paper, AutoRL local planner only (blue); original PRMs (red); path-guided artificial potential fields (yellow); second paper (green); third paper, PRMs with AutoRL (orange).
We tested PRM-RL on multiple real robots and real building sites. One set of tests are shown below; the robot is very robust except near cluttered areas and off the edge of the SLAM map.
On-robot experiments
Conclusion
Autonomous robot navigation can significantly improve the independence of people with limited mobility. We can achieve this by developing easy-to-adapt robotic autonomy, including methods that can be deployed in new environments using information that is already available. This is done by automating the learning of basic, short-range navigation behaviors with AutoRL and using these learned policies in conjunction with SLAM maps to build roadmaps. These roadmaps consist of nodes connected by edges that robots can traverse consistently. The result is a policy that, once trained, can be used across different environments and can produce a roadmap custom-tailored to the particular robot.

Acknowledgements
The research was done by, in alphabetical order, Hao-Tien Lewis Chiang, James Davidson, Aleksandra Faust, Marek Fiser, Anthony Francis, Jasmine Hsu, J. Chase Kew, Tsang-Wei Edward Lee, Ken Oslund, Oscar Ramirez from Robotics at Google and Lydia Tapia from University of New Mexico. We thank Alexander Toshev, Brian Ichter, Chris Harris, and Vincent Vanhoucke for helpful discussions.

Source: Google AI Blog