Tag Archives: machine learning

Making France’s digital potential work for everyone

When people think of “digital champions,” it’s natural to picture a highly trained computer scientist creating new technology. There are many other kinds of digital champions, however: small business owners accelerating their growth online, or people finding better ways to do their jobs. To become one, people now need to be able to learn digital skills easily throughout their lives.


That’s important for countries as well as individuals. According to the European Commission, France ranks just 16th in the EU’s Digital Economy and Society Index. Yet France has all the assets to succeed. It has top engineers, great entrepreneurs, one of the best education systems in the world, great infrastructure, and successful global companies. Studies suggest that if France fully seized its digital potential, it could earn up to 10 percent of GDP from digital technology by 2025, creating 200-250 billion euros’ worth of additional value per year.


Achieving this will take significant digital transformation for both France’s citizens and its businesses. With the right approach and infrastructure, that transformation doesn’t need to be hard. Over the last three years, we’ve trained more than 3 million Europeans in digital skills. In France alone, more than 230,000 students and professionals have attended digital-skills training sessions given by our teams and partners. We now want to do more.


Grow with Google in France—“Les Ateliers Numériques Google”

We will open four local Google Hubs called “Les Ateliers Numériques” across France, run by a network of local partners from the digital sector. These physical spaces will provide a long-term Google presence in French cities, with a dedicated team offering free training in online skills and digital literacy. With our partners, we intend to help people find better jobs, keep their families safe online, and develop their businesses or careers. Brittany will be our pilot region, with the opening of a Google Hub in Rennes during the first half of 2018; three other hubs will follow. This will bring the best digital training within easy reach of more than 100,000 people every year.


A new research center dedicated to AI

France has produced some truly heroic figures of science—like Louis Pasteur, Marie Curie, Blaise Pascal and Sophie Germain—and its educational system still produces amazing researchers. So it’s only natural that we’re setting up a new research team at Google France around the age’s defining technology: artificial intelligence. The new team will work closely with the AI research community in France on issues like health, science, art and the environment. They will publish their research and open-source the code they produce, so that everyone can use these insights to solve their own problems, in their own way.


Oh, and we’re going to need a bigger office, too.

To keep pace with this digital growth, we need to expand our presence in France. We recently announced that our staff in France will increase by 50 percent, bringing our total workforce to more than 1,000 Googlers. Our offices will also grow by 6,000 m², with new buildings connected to our current office.


More than ever, we’re committed to helping France find new ways to grow in this digital era—whether by helping people retrain, grow a business, or use amazing talent to research and build new products for the world. We hope these new investments will help the country, academia and local businesses turn France into a true digital champion.

Using TensorFlow to keep farmers happy and cows healthy

Editor’s Note: TensorFlow, our open source machine learning library, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and we’re sharing those stories here on the Keyword. Today we hear from Yasir Khokhar and Saad Ansari, founders of Connecterra, who are applying machine learning to an unexpected field: dairy farming.


Connecterra means “connected earth.” We formed the company based on a simple thesis: if we could use technology to make sense of data from the natural world, then we could make a real impact in solving the pressing problems of our time.


It all started when Yasir moved to a farm in the Netherlands, near Amsterdam. We had both spent many years working in the technology industry, and realized that the dairy industry was a sector where technology could make a dramatic impact. For instance, we saw that the only difference between cows that produce 30 liters of milk a day and those that produce 10 liters was the animal’s health. We wondered—could technology make cows healthier, and in doing so, help farmers grow their businesses?


That thinking spurred us to start working weekends and evenings on what would eventually become Ida—a product that uses TensorFlow, Google’s machine learning framework, to understand and interpret the behavior of cows and give farmers insights about their herds’ health.


Ida learns the patterns of a cow’s movements from a wearable sensor. We use this data to train machine learning models in TensorFlow, and ultimately, Ida can detect activities like eating, drinking and resting, as well as signals related to fertility and temperature. It’s not just tracking this information, though. We use Ida to predict problems early, detecting cases like lameness or digestive disorders, and to give farmers recommendations on how to keep their cows healthy and improve the efficiency of their farms. Using these insights, we're already seeing a 30 percent increase in dairy production on our customers’ farms.
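
We won’t go into the details of Ida’s actual model here, so purely as an illustration of the general pattern (classifying short windows of wearable-sensor readings with a TensorFlow model), here is a minimal, hypothetical sketch. The window length, feature layout and activity labels are all assumptions for illustration, not Ida’s real design.

```python
import tensorflow as tf

# Hypothetical input format: fixed-length windows of 3-axis accelerometer
# readings from the collar sensor, each window labeled with one activity.
WINDOW_LEN = 128                                            # readings per window (assumed)
ACTIVITIES = ["eating", "drinking", "resting", "walking"]   # illustrative labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW_LEN, 3)),           # x, y, z acceleration
    tf.keras.layers.LSTM(64),                                # summarize each window
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(ACTIVITIES), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(sensor_windows, activity_labels, epochs=10, validation_split=0.1)
```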


By 2050, the world will have 9 billion people, and we’ll need a 60 percent increase in food production to feed them. Assistance for dairy farmers is just one example of how AI could help solve important issues like this. At Connecterra, we believe that by using AI to create solutions to big problems, technology can make a real impact.


Cloud AutoML: Making AI accessible to every business

When we both joined Google Cloud just over a year ago, we embarked on a mission to democratize AI. Our goal was to lower the barrier to entry and make AI available to the largest possible community of developers, researchers and businesses.

Our Google Cloud AI team has been making good progress toward this goal. In 2017, we introduced Google Cloud Machine Learning Engine, which helps developers with machine learning expertise easily build ML models that work on any type of data, of any size. We showed how modern machine learning services—APIs such as Vision, Speech, Natural Language, Translation and Dialogflow—built on pre-trained models can bring unmatched scale and speed to business applications. Kaggle, our community of data scientists and ML researchers, has grown to more than one million members. And today, more than 10,000 businesses are using Google Cloud AI services, including companies like Box, Rolls Royce Marine, Kewpie and Ocado.
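
As a concrete taste of these pre-trained APIs, here is a short sketch of label detection with the Cloud Vision API. It assumes a recent version of the google-cloud-vision Python client and credentials already configured in the environment; the image path is just a placeholder.

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:              # placeholder image path
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)  # pre-trained model, no training needed
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```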

But there’s much more we can do. Currently, only a handful of businesses in the world have access to the talent and budgets needed to take full advantage of advances in ML and AI. There’s a very limited number of people who can create advanced machine learning models. And even if you’re one of the companies that has access to ML/AI engineers, you still have to manage the time-intensive and complicated process of building your own custom ML model. While Google has offered pre-trained machine learning models via APIs that perform specific tasks, there's still a long road ahead if we want to bring AI to everyone.

To close this gap, and to make AI accessible to every business, we’re introducing Cloud AutoML. Cloud AutoML helps businesses with limited ML expertise start building their own high-quality custom models by using advanced techniques like learning2learn and transfer learning from Google. We believe Cloud AutoML will make AI experts even more productive, advance new fields in AI and help less-skilled engineers build powerful AI systems they previously only dreamed of.
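
Cloud AutoML itself is used through Google Cloud rather than coded by hand, but to give a feel for one of the techniques named above, transfer learning, here is a minimal Keras sketch that reuses an ImageNet-pretrained network and trains only a small classification head for a new, hypothetical set of categories.

```python
import tensorflow as tf

NUM_CLASSES = 5                     # hypothetical number of custom categories

# Reuse visual features learned on ImageNet and keep them frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(custom_images, custom_labels, epochs=5)   # only the new head is trained
```

Part of AutoML’s appeal is that recipes like this, along with the search over architectures, happen behind the scenes.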

Our first Cloud AutoML release will be Cloud AutoML Vision, a service that makes it faster and easier to create custom ML models for image recognition. Its drag-and-drop interface lets you easily upload images, train and manage models, and then deploy those trained models directly on Google Cloud. In early tests classifying popular public datasets like ImageNet and CIFAR, Cloud AutoML Vision produced more accurate models with fewer misclassifications than generic ML APIs.

Here’s a little more on what Cloud AutoML Vision has to offer:

  • Increased accuracy: Cloud AutoML Vision is built on Google’s leading image recognition approaches, including transfer learning and neural architecture search technologies. This means you’ll get a more accurate model even if your business has limited machine learning expertise.

  • Faster turnaround time to production-ready models: With Cloud AutoML, you can create a simple model in minutes to pilot your AI-enabled application, or build out a full, production-ready model in as little as a day.

  • Easy to use: AutoML Vision provides a simple graphical user interface that lets you specify data, then turns that data into a high-quality model customized for your specific needs.


“Urban Outfitters is constantly looking for new ways to enhance our customers’ shopping experience,” says Alan Rosenwinkel, Data Scientist at URBN. “Creating and maintaining a comprehensive set of product attributes is critical to providing our customers relevant product recommendations, accurate search results and helpful product filters; however, manually creating product attributes is arduous and time-consuming. To address this, our team has been evaluating Cloud AutoML to automate the product attribution process by recognizing nuanced product characteristics like patterns and neckline styles. Cloud AutoML has great promise to help our customers with better discovery, recommendation and search experiences.”

Mike White, CTO and SVP, for Disney Consumer Products and Interactive Media, says: “Cloud AutoML’s technology is helping us build vision models to annotate our products with Disney characters, product categories and colors. These annotations are being integrated into our search engine to enhance the impact on Guest experience through more relevant search results, expedited discovery and product recommendations on shopDisney.”

And Sophie Maxwell, Conservation Technology Lead at the Zoological Society of London, tells us: "ZSL is an international conservation charity devoted to the worldwide conservation of animals and their habitats. A key requirement to deliver on this mission is to track wildlife populations to learn more about their distribution and better understand the impact humans are having on these species. In order to achieve this, ZSL has deployed a series of camera traps in the wild that take pictures of passing animals when triggered by heat or motion. The millions of images captured by these devices are then manually analysed and annotated with the relevant species, such as elephants, lions and giraffes, etc., which is a labour-intensive and expensive process. ZSL’s dedicated Conservation Technology Unit has been collaborating closely with Google’s Cloud ML team to help shape the development of this exciting technology, which ZSL aims to use to automate the tagging of these images—cutting costs, enabling wider-scale deployments and gaining a deeper understanding of how to conserve the world’s wildlife effectively."

If you’re interested in trying out AutoML Vision, you can request access via this form.

AutoML Vision is the result of our close collaboration with Google Brain and other Google AI teams, and is the first of several Cloud AutoML products in development. While we’re still at the beginning of our journey to make AI more accessible, we’ve been deeply inspired by what our 10,000+ customers using Cloud AI products have been able to achieve. We hope the release of Cloud AutoML will help even more businesses discover what’s possible through AI.

Source: Google Cloud


Introducing NIMA: Neural Image Assessment



Quantification of image quality and aesthetics has been a long-standing problem in image processing and computer vision. While technical quality assessment deals with measuring pixel-level degradations such as noise, blur and compression artifacts, aesthetic assessment captures semantic-level characteristics associated with emotions and beauty in images. Recently, deep convolutional neural networks (CNNs) trained with human-labelled data have been used to address the subjective nature of image quality for specific classes of images, such as landscapes. However, these approaches can be limited in their scope, as they typically categorize images into just two classes of low and high quality. Our proposed method instead predicts the distribution of ratings. This leads to more accurate quality predictions with higher correlation to the ground truth ratings, and is applicable to general images.

In “NIMA: Neural Image Assessment” we introduce a deep CNN that is trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically). NIMA relies on the success of state-of-the-art deep object recognition networks, building on their ability to understand general categories of objects despite many variations. Our proposed network can be used not only to score images reliably and with high correlation to human perception, but also for a variety of labor-intensive and subjective tasks such as intelligent photo editing, optimizing visual quality for increased user engagement, or minimizing perceived visual errors in an imaging pipeline.

Background
In general, image quality assessment can be categorized into full-reference and no-reference approaches. When a reference “ideal” image is available, full-reference metrics such as PSNR and SSIM measure how far an image deviates from it. When a reference image is not available, “blind” (or no-reference) approaches rely on statistical models to predict image quality. The main goal of both approaches is to predict a quality score that correlates well with human perception. In a deep CNN approach to image quality assessment, weights are initialized by training on object classification datasets (e.g. ImageNet), and then fine-tuned on annotated data for perceptual quality assessment tasks.
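
As a small aside, the full-reference metrics mentioned here are straightforward to compute when a pristine reference is available; below is a sketch using TensorFlow’s built-in implementations, with random tensors standing in for real images.

```python
import tensorflow as tf

# Random tensors stand in for a reference image and a degraded copy.
reference = tf.random.uniform([1, 256, 256, 3])
degraded = tf.clip_by_value(
    reference + tf.random.normal([1, 256, 256, 3], stddev=0.05), 0.0, 1.0)

psnr = tf.image.psnr(reference, degraded, max_val=1.0)   # higher is better
ssim = tf.image.ssim(reference, degraded, max_val=1.0)   # 1.0 means identical
print(f"PSNR: {float(psnr[0]):.2f} dB, SSIM: {float(ssim[0]):.3f}")
```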

NIMA
Typical aesthetic prediction methods categorize images as low/high quality, despite the fact that each image in the training data is associated with a histogram of human ratings rather than a single binary score. A histogram of ratings is an indicator of the overall quality of an image, as well as of agreement among raters. In our approach, instead of classifying images as low/high quality or regressing to the mean score, the NIMA model produces a distribution of ratings for any given image — on a scale of 1 to 10, NIMA assigns likelihoods to each of the possible scores. This is more directly in line with how training data is typically captured, and it turns out to be a better predictor of human preferences when measured against other approaches (more details are available in our paper).
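
The paper spells out the actual architecture and training loss; the following is only a hedged Keras sketch of the core idea: a pretrained image backbone topped with a 10-way softmax that outputs a rating distribution, plus a helper that collapses that distribution into a mean score. The choice of backbone, input size and dropout rate here are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Backbone choice is an assumption; the idea is simply to reuse a pretrained CNN.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="avg")

nima = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),   # P(score = 1), ..., P(score = 10)
])

def mean_score(score_distribution):
    """Collapse a predicted 1-10 rating distribution into a single mean score."""
    return float(np.sum(score_distribution * np.arange(1, 11)))

# Example: rank a batch of images by predicted mean aesthetic score.
# ranking = np.argsort([-mean_score(p) for p in nima.predict(images)])
```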

Various functions of the NIMA vector score (such as the mean) can then be used to rank photos aesthetically. Some test photos from AVA, a large-scale database for Aesthetic Visual Analysis, as ranked by NIMA, are shown below. Each AVA photo is scored by an average of 200 people in response to photography contests. After training, the aesthetic ranking of these photos by NIMA closely matches the mean scores given by human raters. We find that NIMA performs equally well on other datasets, with predicted quality scores close to human ratings.
Ranking some examples labelled with the “landscape” tag from the AVA dataset using NIMA. Predicted NIMA (and ground truth) scores are shown below each image.
NIMA scores can also be used to compare the quality of images of the same subject that may have been distorted in various ways. The images in the following example are part of the TID2013 test set, which contains various types and levels of distortions.
Ranking some examples from the TID2013 dataset using NIMA. Predicted NIMA scores are shown below each image.
Perceptual Image Enhancement
As we’ve shown in another recent paper, quality and aesthetic scores can also be used to perceptually tune image enhancement operators. In other words, maximizing the NIMA score as part of a loss function can increase the likelihood of enhancing the perceptual quality of an image. The following example shows that NIMA can be used as a training loss to tune a tone enhancement algorithm. We observed that the baseline aesthetic ratings can be improved by contrast adjustments directed by the NIMA score. Consequently, our model is able to guide a deep CNN filter to find aesthetically near-optimal settings of its parameters, such as brightness, highlights and shadows.

NIMA can be used as a training loss to enhance images. In this example, the local tone and contrast of images are enhanced by training a deep CNN with NIMA as its loss. Test images are obtained from the MIT-Adobe FiveK dataset.
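
To make the “NIMA as a training loss” idea concrete, here is a rough, hypothetical sketch of a loss that trades off fidelity to the input against the predicted mean score of the enhanced image. It assumes a rating model like the `nima` sketch above and is not the tone-enhancement operator described in our paper.

```python
import tensorflow as tf

SCORE_VALUES = tf.constant([[1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]])

def enhancement_loss(original, enhanced, rating_model, fidelity_weight=1.0):
    """Penalize drifting from the input while rewarding a higher predicted score."""
    fidelity = tf.reduce_mean(tf.square(enhanced - original))
    score_dist = rating_model(enhanced, training=False)      # shape (batch, 10)
    mean_score = tf.reduce_sum(score_dist * SCORE_VALUES, axis=-1)
    # Lower loss when the predicted aesthetic score of the enhanced image is higher.
    return fidelity_weight * fidelity - tf.reduce_mean(mean_score)
```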
Looking Ahead
Our work on NIMA suggests that quality assessment models based on machine learning may be capable of a wide range of useful functions. For instance, they may enable users to easily find the best pictures among many, or even enable improved picture-taking with real-time feedback to the user. On the post-processing side, these models may be used to guide enhancement operators to produce perceptually superior results. In a direct sense, the NIMA network (and others like it) can act as reasonable, though imperfect, proxies for human taste in photos and possibly videos. We’re excited to share these results, though we know that the quest to better understand what quality and aesthetics mean is an ongoing challenge—one that will involve continued retraining and testing of our models.


Earth to exoplanet: Hunting for planets with machine learning

For thousands of years, people have looked up at the stars, recorded observations, and noticed patterns. Some of the first objects early astronomers identified were planets, which the Greeks called “planētai,” or “wanderers,” for their seemingly irregular movement through the night sky. Centuries of study helped people understand that the Earth and other planets in our solar system orbit the sun—a star like many others.

Today, with the help of technologies like telescope optics, space flight, digital cameras, and computers, it’s possible for us to extend our understanding beyond our own sun and detect planets around other stars. Studying these planets—called exoplanets—helps us explore some of our deepest human inquiries about the universe. What else is out there? Are there other planets and solar systems like our own?

Though technology has aided the hunt, finding exoplanets isn’t easy. Compared to their host stars, exoplanets are cold, small and dark—about as tricky to spot as a firefly flying next to a searchlight … from thousands of miles away. But with the help of machine learning, we’ve recently made some progress.

One of the main ways astrophysicists search for exoplanets is by analyzing large amounts of data from NASA’s Kepler mission with both automated software and manual analysis. Kepler observed about 200,000 stars for four years, taking a picture every 30 minutes, creating about 14 billion data points. Those 14 billion data points translate to about 2 quadrillion possible planet orbits! It’s a huge amount of information for even the most powerful computers to analyze, creating a laborious, time-intensive process. To make this process faster and more effective, we turned to machine learning.
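
The 14 billion figure follows directly from that observing cadence; here is a quick back-of-the-envelope check in Python:

```python
# Back-of-the-envelope check of the ~14 billion data points figure.
stars = 200_000
years = 4
measurements_per_day = 24 * 60 // 30        # one brightness reading every 30 minutes
data_points = stars * years * 365.25 * measurements_per_day
print(f"{data_points:.3g}")                 # ~1.4e+10, i.e. about 14 billion
```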

The measured brightness of a star decreases ever so slightly when an orbiting planet blocks some of the light. The Kepler space telescope observed the brightness of 200,000 stars for 4 years to hunt for these characteristic signals caused by transiting planets.

Machine learning is a way of teaching computers to recognize patterns, and it’s particularly useful in making sense of large amounts of data. The key idea is to let a computer learn by example instead of programming it with specific rules.

I’m a machine learning researcher on the Google AI team with an interest in space, and I started this work as a 20 percent project (an opportunity at Google to work on something that interests you for 20 percent of your time). In the process, I reached out to Andrew, an astrophysicist from UT Austin, to collaborate. Together, we took this technique to the skies and taught a machine learning system how to identify planets around faraway stars.

Using a dataset of more than 15,000 labeled Kepler signals, we created a TensorFlow model to distinguish planets from non-planets. To do this, it had to recognize patterns caused by actual planets, versus patterns caused by other objects like starspots and binary stars. When we tested our model on signals it had never seen before, it correctly identified which signals were planets and which signals were not planets 96 percent of the time. So we knew it worked!
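
As a rough, hypothetical sketch (not our actual architecture) of what a TensorFlow binary classifier over fixed-length light-curve segments can look like, consider the following; the segment length here is an assumption.

```python
import tensorflow as tf

SEGMENT_LEN = 2001      # brightness samples per light-curve segment (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEGMENT_LEN, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=5),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability the signal is a planet
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(light_curve_segments, planet_labels, epochs=10, validation_split=0.1)
```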


Armed with our working model, we shot for the stars, using it to hunt for new planets in Kepler data. To narrow the search, we looked at the 670 stars that were already known to host two or more exoplanets. In doing so, we discovered two new planets: Kepler 80g and Kepler 90i. Significantly, Kepler 90i is the eighth planet discovered orbiting the Kepler 90 star, making it the first known 8-planet system outside of our own.

We used 15,000 labeled Kepler signals to train our machine learning model to identify planet signals. We used this model to hunt for new planets in data from 670 stars, and discovered two planets missed in previous searches.

Some fun facts about our newly discovered planet: it’s 30 percent larger than Earth and has a surface temperature of approximately 800°F—not ideal for your next vacation. It also orbits its star every 14 days, meaning you’d have a birthday there just about every two weeks.

Kepler 90 is the first known 8-planet system outside of our own. In this system, planets orbit closer to their star, and Kepler 90i orbits once every 14 days. (Note that planet sizes and distances from stars are not to scale.)

The sky is the limit (so to speak) when it comes to the possibilities of this technology. So far, we’ve only used our model to search 670 stars out of 200,000. There may be many exoplanets still unfound in Kepler data, and new ideas and techniques like machine learning will help fuel celestial discoveries for many years to come. To infinity, and beyond!

A Summary of the First Conference on Robot Learning



Whether in the form of autonomous vehicles, home assistants or disaster rescue units, robotic systems of the future will need to be able to operate safely and effectively in human-centric environments. In contrast to their industrial counterparts, they will require a very high level of perceptual awareness of the world around them, and the ability to adapt to continuous changes in both their goals and their environment. Machine learning is a natural answer to both the problem of perception and that of generalization to unseen environments, and with the recent rapid progress in computer vision and learning capabilities, applying these new technologies to the field of robotics has become a central research question.

This past November, Google helped kickstart and host the first Conference on Robot Learning (CoRL) at our campus in Mountain View. The goal of CoRL was to bring machine learning and robotics experts together for the first time in a single-track conference, in order to foster new research avenues between the two disciplines. The sold-out conference attracted 350 researchers from many institutions worldwide, who collectively presented 74 original papers, along with 5 keynotes by some of the most innovative researchers in the field.
Prof. Sergey Levine, CoRL 2017 co-chair, answering audience questions.
Sayna Ebrahimi (UC Berkeley) presenting her research.
Videos of the inaugural CoRL are available on the conference website. Additionally, we are delighted to announce that next year, CoRL moves to Europe! CoRL 2018 will be chaired by Professor Aude Billard from the École Polytechnique Fédérale de Lausanne, and will tentatively be held at the Eidgenössische Technische Hochschule (ETH) in Zürich on October 29–31, 2018. Looking forward to seeing you there!
Prof. Ken Goldberg, CoRL 2017 co-chair, and Jeffrey Mahler (UC Berkeley) during a break.

Opening the Google AI China Center

Since becoming a professor 12 years ago and joining Google a year ago, I’ve had the good fortune to work with many talented Chinese engineers, researchers and technologists. China is home to many of the world's top experts in artificial intelligence (AI) and machine learning. All three winning teams of the ImageNet Challenge in the past three years have been largely composed of Chinese researchers. Chinese authors contributed 43 percent of all content in the top 100 AI journals in 2015—and when the Association for the Advancement of AI discovered that their annual meeting overlapped with Chinese New Year this year, they rescheduled.

I believe AI and its benefits have no borders. Whether a breakthrough occurs in Silicon Valley, Beijing or anywhere else, it has the potential to make life better for everyone, everywhere. For Google, an AI-first company, this is an important part of our collective mission. And we want to work with the best AI talent, wherever that talent is, to achieve it.

That’s why I am excited to launch the Google AI China Center, our first such center in Asia, at our Google Developer Days event in Shanghai today. This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China’s strong engineering teams. We’ve already hired some top experts, and will be working to build the team in the months ahead (check our jobs site for open roles!). Along with Dr. Jia Li, Head of Research and Development at Google Cloud AI, I’ll be leading and coordinating the research. Besides publishing its own work, the Google AI China Center will also support the AI research community by funding and sponsoring AI conferences and workshops, and working closely with the vibrant Chinese AI research community.

Humanity is going through a huge transformation thanks to the phenomenal growth of computing and digitization. In just a few years, automatic image classification in photo apps has become a standard feature. And we’re seeing rapid adoption of natural language as an interface, with voice assistants like Google Home. At Google Cloud, we see our enterprise partners using AI to transform their businesses in fascinating ways at an astounding pace. As technology starts to shape human life in more profound ways, we will need to work together to ensure that the AI of tomorrow benefits all of us.

The Google AI China Center is a small contribution to this goal. We look forward to working with the brightest AI researchers in China to help find solutions to the world’s problems. 

Once again: the science of AI has no borders, and neither do its benefits.