Tag Archives: Powered by TensorFlow

5 ways to celebrate TensorFlow’s 5th birthday

Five years ago, we open-sourced TensorFlow, our machine learning framework for research and production. Our goal was to expand access to state-of-the-art machine learning tools so anyone could use them.

Since then, TensorFlow has become the most popular machine learning library in the world, with over 160 million downloads. Seeing so many people use TensorFlow is an incredible and humbling experience, and we’re thankful for the thousands of people outside of Google who have contributed code, created educational content and organized developer events around the world to support TensorFlow and the growing machine learning community.

To celebrate five years of TensorFlow, we’d like to point out a few interactive demos you can try from your browser with a single click, as well as some tutorials that can help you create your own projects. If you’re new to TensorFlow, these are a great way to get a feel for what it can do. And if you like what you see and want to dive a bit deeper, check out the TensorFlow Blog.

Try out some interactive demos powered by machine learning

TensorFlow supports multiple programming languages and environments. Let’s start with a quick tour of JavaScript, and three interactive demos you can try with a click.

TensorFlow.js enables you to write and run machine learning models entirely in the browser. This is important for privacy-preserving applications (no data needs to be sent to a server) and for interactive machine learning programs.

One great example is this iris landmark-tracking program, which supports hands-free interfaces and assistive technologies; you can try the model yourself in your browser (be patient—it may take a few moments to load!).
Animated gif showing a woman tilting her head and the software tracking this by analyzing her iris.

Similar to eye tracking, you can also use TensorFlow.js to track hand motions.

Animated gif showing a hand counting out numbers and the tracking software tracing this movement.

You only need a webcam for both of these demos, and no data leaves your machine.

Train your own model, no coding necessary

You can train your own model (with no coding required) using the Teachable Machine. It’s a fast, fun, and easy way to create a machine learning model right in your browser. For instance, you could teach a model to recognize images, or sounds that you record with your microphone.


Screenshot of the three project types you can create with Teachable Machine: an image project, an audio project, or a pose project.

Go deeper with tutorials

TensorFlow includes a powerful Python library. To get started using it, here are some tutorials for beginners and experts alike. These tutorials (which contain complete, end-to-end code) span topics from machine learning fundamentals, to computer vision and machine translation—and even show you how to generate artwork with machine learning.
Image shows pink roses.

Image CC-BY by Virginia McMillan.

Bring TensorFlow to mobile apps 

TensorFlow Lite enables you to build machine learning-powered apps on mobile and small embedded devices. A group of engineering students in India used TensorFlow Lite to develop an Android app that provides local air quality information using a smartphone camera.
Photo shows a person holding out their smartphone against a landscape of green trees to analyze air quality.

You can go even smaller, too: TensorFlow Lite Micro lets you run machine learning models on microcontrollers (tiny computers that can fit in the palm of your hand).

Understand how to build responsibly

As billions of people around the world continue to use products and services with machine learning at their core, it’s become increasingly important to design and deploy these systems responsibly. TensorFlow includes a large set of tools and best practices for Responsible AI, including the What-If Tool which tests how machine learning models will work for different people in hypothetical situations.

And there’s much more you can do as well. TensorFlow includes a complete set of tools to power production ML systems, and even supports the latest research in quantum computing.

This is only the beginning, and we’re excited to see what the next five years bring. To learn more about TensorFlow, check out tensorflow.org, read the blog, follow us on social media or subscribe to our YouTube channel.


How a college student became a planet hunter

I didn't grow up thinking I was going to be an astronomer. There wasn’t a moment when I looked up at the moon and realized my destiny. I grew up loving math and science and in college, I gradually discovered that I loved learning everything I could about stars and planets. When I started studying and doing research in astronomy, I felt like I was given secrets about the universe.

During my junior year, I took a class on planets. My professor was away for a week, so we had a guest lecturer come in. That’s when I met Andrew Vanderburg and heard about his work with former Google engineer Chris Shallue, who recently left the company to pursue his PhD in astrophysics at Harvard. A few years ago, Andrew and Chris built an AI system with TensorFlow that sifted through the approximately 14 billion data points captured from NASA’s Kepler mission. In doing so, they discovered two new planets: Kepler-80g and Kepler-90i.

When I walked into that classroom, I couldn't have imagined that it would lead to the discovery of two new planets. 

When I started, I had zero experience with machine learning. I had no idea what a neural network was or how I could build one. I learned everything as I went, using YouTube tutorials and TensorFlow, and collaborating with incredible people. Using TensorFlow, I built a way to look through space telescope data and identify signs that planets could be around those stars. By the end of the summer, my neural network could recognize planets we already knew about, and discover new ones.
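The signal a network like mine learns to spot can be illustrated with a toy sketch. The plain-Python snippet below is nothing like the real pipeline; it only shows the core idea: a planet crossing in front of its star causes a small, periodic dip in measured brightness, so a crude detector just flags points that fall well below the typical level.

```python
# Toy transit detector: flag dips in a star's brightness series.
# A simplified illustration of the signal a neural network is trained
# to recognize -- not the actual Kepler analysis pipeline.

def find_transit_dips(brightness, threshold=0.99):
    """Return indices where brightness falls below `threshold`
    times the median level (a crude stand-in for a transit)."""
    ordered = sorted(brightness)
    median = ordered[len(ordered) // 2]
    return [i for i, b in enumerate(brightness) if b < threshold * median]

# Simulated light curve: flat at 1.0 with two shallow, planet-sized dips.
light_curve = [1.0] * 20
light_curve[5] = light_curve[15] = 0.98  # ~2% drops in brightness

print(find_transit_dips(light_curve))  # -> [5, 15]
```

A real detector has to cope with noise, stellar variability and instrument artifacts, which is exactly why a trained neural network outperforms a fixed threshold like this one.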

I discovered two new planets, but I also created a method that makes it possible for people to find many more. (If you want to learn how to hunt for planets, you can read my tutorial). Accessible technologies and open-source data allowed me to do this work, and because of that, it’s never been easier to discover not only planets, but also other mysteries of the universe. The possibilities for what we might find are endless.

Machine learning meets African agriculture

In 2016, a crop-destroying caterpillar, the Fall Armyworm (FAW), was first detected in Africa. The pest has since devastated agriculture, infesting millions of corn fields and threatening food security on the continent. Farmers who rely on harvests for food need ways to combat FAW, which has now spread to India and China.

That’s where Nazirini Siraji comes in. She is one of several developers working to provide farmers with new tools to fight FAW. After codelabs hosted by a Google developer group in Mbale, Uganda, she created the “Farmers Companion App” using TensorFlow, Google’s open-source machine learning platform. It’s a free app that identifies when a crop has FAW and which stage of its lifecycle the worm is in (and therefore how threatening it is and how far it is likely to spread). It also advises which pesticides or treatments are best to stop the worm from spreading any further. The app is already working in the field, helping farmers around Mbale identify FAW.

They continue to improve the app so it can identify more pests and diseases. Nazirini shows the impact that developers can have on agricultural issues like FAW, and across other sectors, too. We visited Nazirini and her team this year; here’s more about their story:

Learn more about how others are using TensorFlow to solve all kinds of problems.

Teachable Machine 2.0 makes AI easier for everyone

People are using AI to explore all kinds of ideas—identifying the roots of bad traffic in Los Angeles, improving recycling rates in Singapore, and even experimenting with dance. Getting started with your own machine learning projects might seem intimidating, but Teachable Machine is a web-based tool that makes it fast, easy, and accessible to everyone. 

The first version of Teachable Machine let anyone teach their computer to recognize images using a webcam. For a lot of people, it was their first time experiencing what it’s like to train their own machine learning model: teaching the computer how to recognize patterns in data (images, in this case) and assign new data to categories.

Since then, we’ve heard from lots of people who want to take their Teachable Machine models one step further and use them in their own projects. Teachable Machine 2.0 lets you train your own machine learning model with the click of a button, no coding required, and export it to websites, apps, physical machines and more. Teachable Machine 2.0 can also recognize sounds and poses, like whether you're standing or sitting down.

We collaborated with educators, artists, students and makers of all kinds to figure out how to make the tool useful for them. For example, education researcher Blakeley H. Payne and her teammates have been using Teachable Machine as part of an open-source curriculum that teaches middle-schoolers about AI through a hands-on learning experience.

“Parents—especially of girls—often tell me their child is nervous to learn about AI because they have never coded before,” Blakeley said. “I love using Teachable Machine in the classroom because it empowers these students to be designers of technology without the fear of ‘I've never done this before.’”

But it’s not just for teaching. Steve Saling, an accessibility technology expert, used it to explore improving communication for people with impaired speech. Yining Shi has been using Teachable Machine with her students in the Interactive Telecommunications Program at NYU to explore its potential for game design. And at Google, we’ve been using it to make physical sorting machines easier for anyone to build. Here’s how it all works:

Gather examples

You can use Teachable Machine to recognize images, sounds or poses. Upload your own image files, or capture them live with a mic or webcam. These examples stay on-device, never leaving your computer unless you choose to save your project to Google Drive.


Gathering image examples.

Train your model

With the click of a button, Teachable Machine will train a model based on the examples you provided. All the training happens in your browser, so everything stays on your computer.


Training a model with the click of a button.

Test and tweak

Play with your model on the site to see how it performs. Not to your liking? Tweak the examples and see how it does.


Testing out the model instantly using a webcam.

Use your model

The model you created is powered by TensorFlow.js, an open-source library for machine learning from Google. You can export it to use in websites, apps, and more. You can also save your project to Google Drive so you can pick up where you left off.

Ready to dive in? Here are some helpful links and inspiration:

Drop us a line with your thoughts and ideas, and post what you make, or follow along with #teachablemachine. We can’t wait to see what you create. Try it out at g.co/teachablemachine.


The Singapore students using Cloud for smarter recycling

Coming up with big ideas in technology used to take the kind of time and money that only large companies had.  Now open source tools—like TensorFlow, which provides access to Google’s machine learning technology—mean anyone with a smart concept has the opportunity to make it a reality. Just ask Arjun Taneja and Vayun Mathur, two friends and high school students from Singapore with a big ambition to improve recycling rates.  

Arjun and Vayun realized that separating waste is sometimes confusing and cumbersome—something that can derail people's good intentions to recycle. Using TensorFlow, they built a “Smart Bin” that can identify types of trash and sort them automatically. The Smart Bin uses a camera to take a picture of the object inserted in the tray, then analyzes the picture with a Convolutional Neural Network, a type of machine learning algorithm designed to recognize visual objects.  

To train the algorithm, Arjun and Vayun took around 500 pictures of trash like glass bottles, plastic bottles, metal cans and paper. It’s a process that would normally be laborious and expensive. But by using Google’s Colab platform, the students could access a high-powered graphics processing unit (GPU) in the cloud for free. They were also able to access Tensor Processing Units (TPUs), Google’s machine learning processors, which power services like Translate, Photos, Search, Assistant and Gmail. These tools helped their system analyze large amounts of data at once, so the students could correct the model when it didn't recognize an object. As a result, the model learned to classify the objects even more quickly. Once the Smart Bin was trained, all they had to do was place an object in the tray, and the system could predict whether it was metal, plastic, glass or paper—with the answer popping up on a screen.
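The train-then-predict loop behind the Smart Bin can be sketched with a toy example. The snippet below is not the convolutional neural network the students built; it substitutes a nearest-centroid classifier on invented two-number features (say, shininess and transparency) purely to show the shape of the workflow: learn from labeled examples, then classify a new item.

```python
# Toy nearest-centroid classifier, illustrating the Smart Bin's
# train-then-predict loop. The real system is a CNN trained on
# ~500 photos; the features and labels here are made up.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest to `vec`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], vec))

training_data = [
    ([0.9, 0.1], "metal"), ([0.8, 0.2], "metal"),  # shiny, opaque
    ([0.2, 0.9], "glass"), ([0.3, 0.8], "glass"),  # dull, transparent
]
model = train(training_data)
print(predict(model, [0.85, 0.15]))  # -> metal
```

A CNN does the same job end to end, learning its features directly from pixels instead of relying on hand-picked ones, which is why it needs hundreds of photos and GPU time to train.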

Building on their successful trials at home, Arjun and Vayun showcased the Smart Bin with a stall at last week’s Singapore Maker Faire, and they continue to work on other projects. It’s a great example of how tools available in the cloud are cutting out processes and costs that might have held back this kind of invention in the past.

The creative coder adding color to machine learning

Machine learning is already revolutionizing the way we solve problems across almost every industry and walk of life, from photo organization to cancer detection and flood prediction. But outside the tech world, most people don’t know what an algorithm is or how it works, let alone how they might start training one of their own.

Parisian coder Emil Wallner wants to change that. Passionate about making machine learning easier to get into, he came up with an idea that fused his fascination with machine learning with a love of art. He built a simple, playful program that learns how to add color to black-and-white photos.


Emil used TensorFlow, Google’s open-source machine learning platform, to build the simplest algorithm he could, forcing himself to simplify it until it was less than 100 lines of code.

The algorithm is programmed to study millions of color photos and use them to learn what color the objects of the world should be. It then hunts for similar patterns in a black-and-white photo. Over time, it learns that a black-and-white object shaped like a goldfish should very likely be gold.

The more distinctive the object, the easier the task. For example, bananas are easy because they’re almost always yellow and have a unique shape. Moons and planets can be more confusing because of similarities they share with each other, such as their shape and dark surroundings. In these instances, just like a child learning about the world for the first time, the algorithm needs a little more information and training.
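The learning idea can be sketched in a few lines of plain Python. The toy below is not Emil's TensorFlow model; it only mirrors the statistical intuition that an object's typical color can be learned by averaging over many example photos, which is also why distinctive objects like bananas are easy and ambiguous ones like moons are hard.

```python
# Toy colorizer: learn the typical RGB color of each object from
# labeled examples, then reuse it to color a grayscale instance.
# The example objects and color values below are invented.

def learn_colors(examples):
    """examples: list of (object_name, (r, g, b)).
    Returns the average color observed for each object."""
    totals = {}
    for name, rgb in examples:
        sums, count = totals.setdefault(name, ([0, 0, 0], 0))
        totals[name] = ([s + c for s, c in zip(sums, rgb)], count + 1)
    return {name: tuple(s // n for s in sums) for name, (sums, n) in totals.items()}

photos = [
    ("banana", (230, 210, 60)),   # yellows dominate banana photos
    ("banana", (240, 220, 80)),
    ("moon", (200, 200, 190)),    # grays dominate moon photos
]
palette = learn_colors(photos)
print(palette["banana"])  # -> (235, 215, 70)
```

Emil's network learns the same kind of association, but from pixel patterns rather than labels, so it can generalize to objects it has never been told the names of.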


Emil’s algorithm brings the machine learning process to life in a way that makes it fun and visual. It helps us to understand what machines find easy, what they find tricky and how tweaks to the code or dataset affect results.

Thousands of budding coders and artists have now downloaded Emil’s code and are using it to understand the fundamentals of machine learning, without feeling like they’re in a classroom.

“Even the mistakes are beautiful, so it’s a satisfying algorithm to learn with,” Emil says.

When Iowa’s snow piles up, TensorFlow can keep roads safe

Iowa may be heaven, but it’s a snowy one. With an average of around 33 inches of snow every year, keeping roads open and safe is an important challenge. Car accidents tend to spike during the winter months each year in Iowa, as do costly delays. And dangerous commutes can mean hazards for people and commerce alike: the state is one of the country’s largest producers of agricultural output, and much of that is moved on roads.

To improve road safety and efficiency, the Iowa Department of Transportation has teamed up with researchers at Iowa State University to use machine learning, including our TensorFlow framework, to provide insights into traffic behavior. Iowa State’s technology helps analyze the visual data gathered from stationary cameras and cameras mounted on snow plows, along with traffic information captured by radar detectors. Machine learning transforms that data into conclusions about road conditions, like identifying congestion and getting first responders to the scenes of accidents faster.
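One of those conclusions, flagging congested road segments, can be illustrated with a toy sketch. The plain-Python snippet below uses invented segment names and thresholds and is nothing like Iowa State's actual system, which fuses camera and radar data with learned models; it just shows how speed readings become a congestion flag.

```python
# Toy congestion detector: flag road segments where average measured
# speed falls well below the posted limit. Illustrative only; segment
# names, speeds and thresholds are invented.

def congested_segments(readings, limit_mph=65, ratio=0.5):
    """readings: dict of segment name -> list of observed speeds (mph).
    Flags segments whose average speed is below `ratio` * limit."""
    flagged = []
    for segment, speeds in readings.items():
        avg = sum(speeds) / len(speeds)
        if avg < ratio * limit_mph:
            flagged.append(segment)
    return flagged

sensor_data = {
    "I-80 mile 120": [62, 65, 60],  # free-flowing
    "I-35 mile 90": [18, 25, 20],   # heavy congestion
}
print(congested_segments(sensor_data))  # -> ['I-35 mile 90']
```

The machine learning version earns its keep where fixed thresholds fail: distinguishing a slowdown caused by weather from one caused by an accident, for example, requires combining many noisy signals.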

This is just one recent example of TensorFlow being used to make drivers’ lives easier across the United States. In California, snow may not be an issue, but traffic certainly is, and college students there used TensorFlow to identify potholes and dangerous road cracks in Los Angeles.

Officials in Iowa say machine learning could also be used to predict crash risks and travel speeds, and better understand drivers’ reactions or failures behind the wheel. But that doesn’t mean drivers will be off the hook. Iowa’s transportation and public safety departments constantly spread the same message: when it’s winter, slow down. Add some time onto your daily commute, and don’t use cruise control during a storm. That way, both drivers and state officials can work together to make winter travel less dreary—and a lot safer.

How machine learning can drive change in traffic-packed L.A.

There's nothing quite like driving through Los Angeles on a perfectly sunny day. But for drivers, the beauty of Southern California’s great weather and scenery is ruined by one thing: traffic.

According to a report by INRIX, my hometown is the worst city in the world for traffic, with drivers spending a record 102 hours in congestion during peak periods in 2017. My classmate, Ericson Hernandez, comes from New York City, which is ranked third globally for its traffic woes. Together, we decided to use machine learning to figure out the roots of bad traffic, including elements like road damage from potholes and cracks, and make rides around our beautiful cities enjoyable again.

As Ericson and I started studying electrical engineering at Loyola Marymount University, we began to develop an interest in a relatively new topic to the engineering world: machine learning. Our professor, Dr. Lei Huang, encouraged us to pick a project that we were passionate about, and Ericson and I wanted to use technology to tackle problems in the real world—such as helping the communities around us with road development.

This summer, we looked at previous research projects on detecting road cracks, and pondered how we could improve the algorithm and apply it to Los Angeles communities. We decided to use TensorFlow, Google’s open-source machine learning platform, to train a model that could quickly identify potholes and dangerous road cracks from camera footage of L.A. roads.

Students mount their camera before heading out to collect data.

Construction companies and cities could use this technology to identify which roads need fixing the most. With safer driving conditions and efficient road-work repairs, traffic in major cities could dramatically decrease, allowing for people to travel in a quick, safe and enjoyable manner. 

And that way, driving through Los Angeles can be about enjoying the view, not grumbling at the traffic.