Tag Archives: Powered by TensorFlow

Machine learning meets African agriculture

In 2016, the Fall Armyworm (FAW), a crop-destroying caterpillar, was first detected in Africa. The pest has since devastated agriculture by infesting millions of cornfields, threatening food security on the continent. Farmers who rely on their harvests for food need ways to combat the pest, which has now spread to India and China.

That’s where Nazirini Siraji comes in. She is one of several developers working to provide farmers with new tools to fight FAW. After attending codelabs hosted by a Google developer group in Mbale, Uganda, she created the “Farmers Companion App” using TensorFlow, Google’s open-source machine learning platform. The free app identifies when a crop has FAW and which stage of its lifecycle the worm is in (and therefore how threatening it is and how far it is likely to spread). It also advises on which pesticides or treatments are best to stop the worm from spreading any further. The app is already working in the field, helping farmers around Mbale identify FAW.

Nazirini and her team continue to improve the app so it can identify more pests and diseases. Nazirini shows the impact that developers can have on agricultural issues like FAW, and in other sectors, too. We visited Nazirini and her team this year; here’s more about their story:

Learn more about how others are using TensorFlow to solve all kinds of problems.

Teachable Machine 2.0 makes AI easier for everyone

People are using AI to explore all kinds of ideas—identifying the roots of bad traffic in Los Angeles, improving recycling rates in Singapore, and even experimenting with dance. Getting started with your own machine learning projects might seem intimidating, but Teachable Machine is a web-based tool that makes it fast, easy, and accessible to everyone. 

The first version of Teachable Machine let anyone teach their computer to recognize images using a webcam. For a lot of people, it was their first time experiencing what it’s like to train their own machine learning model: teaching the computer how to recognize patterns in data (images, in this case) and assign new data to categories.
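That train-then-classify loop can be illustrated with a toy nearest-centroid classifier (a deliberately simplified stand-in, not the neural network Teachable Machine actually trains): each category is summarized by the average of its examples, and new data is assigned to the closest category.

```python
import numpy as np

def train(examples_by_class):
    # Represent each category by the mean of its example vectors.
    return {label: np.mean(vecs, axis=0) for label, vecs in examples_by_class.items()}

def classify(model, x):
    # Assign new data to the category whose centroid is nearest.
    return min(model, key=lambda label: np.linalg.norm(model[label] - x))

# Tiny made-up "dataset": two-dimensional stand-ins for image features.
model = train({
    "bright": [np.array([0.9, 0.8]), np.array([1.0, 0.9])],
    "dark":   [np.array([0.1, 0.2]), np.array([0.0, 0.1])],
})
print(classify(model, np.array([0.85, 0.95])))  # → bright
```

Real image models learn far richer patterns than a per-class average, but the workflow is the same: show examples per category, then let the model sort new inputs.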

Since then, we’ve heard from lots of people who want to take their Teachable Machine models one step further and use them in their own projects. Teachable Machine 2.0 lets you train your own machine learning model with the click of a button, no coding required, and export it to websites, apps, physical machines and more. It can also recognize sounds and poses, like whether you’re standing or sitting down.

We collaborated with educators, artists, students and makers of all kinds to figure out how to make the tool useful for them. For example, education researcher Blakeley H. Payne and her teammates have been using Teachable Machine as part of an open-source curriculum that teaches middle-schoolers about AI through a hands-on learning experience.

“Parents—especially of girls—often tell me their child is nervous to learn about AI because they have never coded before,” Blakeley said. “I love using Teachable Machine in the classroom because it empowers these students to be designers of technology without the fear of ‘I've never done this before.’”

But it’s not just for teaching. Steve Saling, an accessibility technology expert, used it to explore improving communication for people with impaired speech. Yining Shi has been using Teachable Machine with her students in the Interactive Telecommunications Program at NYU to explore its potential for game design. And at Google, we’ve been using it to make physical sorting machines easier for anyone to build. Here’s how it all works:

Gather examples

You can use Teachable Machine to recognize images, sounds or poses. Upload your own files, or capture examples live with a mic or webcam. These examples stay on-device, never leaving your computer unless you choose to save your project to Google Drive.

Gathering image examples.

Train your model

With the click of a button, Teachable Machine will train a model based on the examples you provided. All the training happens in your browser, so everything stays on your computer.

Training a model with the click of a button.

Test and tweak

Play with your model on the site to see how it performs. Not to your liking? Tweak the examples and see how it does.

Testing out the model instantly using a webcam.

Use your model

The model you created is powered by TensorFlow.js, an open-source library for machine learning from Google. You can export it to use in websites, apps, and more. You can also save your project to Google Drive so you can pick up where you left off.
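As a sketch of what using an exported model can look like in Python, the snippet below prepares an image for the Keras version of an image model. The 224×224 input size and the [-1, 1] scaling match the sample code bundled with the Keras export at the time of writing, but treat them as assumptions to check against your own download; the `keras_model.h5` filename is likewise illustrative.

```python
import numpy as np

def preprocess(image_rgb):
    """image_rgb: uint8 array of shape (224, 224, 3), e.g. a webcam frame."""
    x = image_rgb.astype(np.float32)
    x = (x / 127.5) - 1.0          # scale pixel values from [0, 255] to [-1, 1]
    return x[np.newaxis, ...]      # add a batch dimension -> (1, 224, 224, 3)

frame = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in for a real frame
batch = preprocess(frame)

# With TensorFlow installed, you could then run the exported model:
# model = tf.keras.models.load_model("keras_model.h5")   # illustrative filename
# probabilities = model.predict(batch)[0]                # one score per class
```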

Ready to dive in? Here are some helpful links and inspiration:

Drop us a line with your thoughts and ideas, and post what you make, or follow along with #teachablemachine. We can’t wait to see what you create. Try it out at g.co/teachablemachine.

The Singapore students using Cloud for smarter recycling

Coming up with big ideas in technology used to take the kind of time and money that only large companies had. Now open source tools—like TensorFlow, which provides access to Google’s machine learning technology—mean anyone with a smart concept has the opportunity to make it a reality. Just ask Arjun Taneja and Vayun Mathur, two friends and high school students from Singapore with a big ambition to improve recycling rates.

Arjun and Vayun realized that separating waste is sometimes confusing and cumbersome—something that can derail people's good intentions to recycle. Using TensorFlow, they built a “Smart Bin” that can identify types of trash and sort them automatically. The Smart Bin uses a camera to take a picture of the object inserted in the tray, then analyzes the picture with a Convolutional Neural Network, a type of machine learning algorithm designed to recognize visual objects.  

To train the algorithm, Arjun and Vayun took around 500 pictures of trash like glass bottles, plastic bottles, metal cans and paper. It’s a process that would normally be laborious and expensive. But by using Colab, Google’s cloud-based notebook platform, the students could access a high-powered graphics processing unit (GPU) for free. They were also able to access Tensor Processing Units (TPUs), Google’s machine learning chips, which power services like Translate, Photos, Search, Assistant and Gmail. These tools helped their system analyze large amounts of data at once, so the students could correct the model when it didn’t recognize an object. As a result, the model learned to classify the objects even more quickly. Once the Smart Bin was trained, all they had to do was place an object in the tray, and the system could predict whether it was metal, plastic, glass or paper—with the answer popping up on a screen.
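A minimal Keras sketch of such a classifier (illustrative only; the students' actual architecture and input size are assumptions here) maps a photo of an object to one of the four materials:

```python
import tensorflow as tf

# A small convolutional neural network for four trash categories:
# glass, plastic, metal, paper. Input size is an assumption.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # one score per material
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With ~500 labeled photos you would then train it, e.g.:
# model.fit(train_images, train_labels, epochs=20, validation_split=0.2)
```

Even a network this small can separate a handful of visually distinct materials, which is why a few hundred photos plus free cloud hardware was enough to get the Smart Bin working.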

Building on their successful trials at home, Arjun and Vayun showcased the Smart Bin with a stall at last week’s Singapore Maker Faire, and they continue to work on other projects. It’s a great example of how tools available in the cloud are cutting out processes and costs that might have held back this kind of invention in the past.

The creative coder adding color to machine learning

Machine learning is already revolutionizing the way we solve problems across almost every industry and walk of life, from photo organization to cancer detection and flood prediction. But outside the tech world, most people don’t know what an algorithm is or how it works, let alone how they might start training one of their own.

Parisian coder Emil Wallner wants to change that. Passionate about making machine learning easier to get into, he came up with an idea that fused his fascination with machine learning with a love of art. He built a simple, playful program that learns how to add color to black-and-white photos.

Emil used TensorFlow, Google’s open-source machine learning platform, to build the simplest algorithm he could, forcing himself to simplify it until it was less than 100 lines of code.

The algorithm is programmed to study millions of color photos and use them to learn what color the objects of the world should be. It then hunts for similar patterns in a black-and-white photo. Over time, it learns that a black-and-white object shaped like a goldfish should very likely be gold.
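One common way to frame this task, which the hedged sketch below follows (it is not Emil's exact code), is to work in the Lab color space: the network reads the lightness channel of the black-and-white photo and predicts the two color channels, which are then recombined with the input to produce a color image.

```python
import tensorflow as tf

# Fully convolutional sketch: lightness (L) channel in, color (a, b)
# channels out. Layer sizes are illustrative assumptions.
inputs = tf.keras.Input(shape=(None, None, 1))  # grayscale lightness channel
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv2D(2, 3, padding="same",
                                 activation="tanh")(x)  # a, b scaled to [-1, 1]
model = tf.keras.Model(inputs, outputs)

# Training on millions of color photos means minimizing the gap between
# predicted and true color channels, e.g. with a mean squared error loss.
model.compile(optimizer="adam", loss="mse")
```

Because the network is fully convolutional, it can colorize photos of any size once trained; the "study millions of color photos" step is simply fitting this model on (lightness, color) pairs split from ordinary images.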

The more distinctive the object, the easier the task. For example, bananas are easy because they’re almost always yellow and have a unique shape. Moons and planets can be more confusing because of similarities they share with each other, such as their shape and dark surroundings. In these instances, just like a child learning about the world for the first time, the algorithm needs a little more information and training.

Emil’s algorithm brings the machine learning process to life in a way that makes it fun and visual. It helps us to understand what machines find easy, what they find tricky and how tweaks to the code or dataset affect results.

Thousands of budding coders and artists have now downloaded Emil’s code and are using it to understand the fundamentals of machine learning, without feeling like they’re in a classroom.

“Even the mistakes are beautiful, so it’s a satisfying algorithm to learn with,” Emil says.

When Iowa’s snow piles up, TensorFlow can keep roads safe

Iowa may be heaven, but it’s a snowy one. With an average of around 33 inches of snow every year, keeping roads open and safe is an important challenge. Car accidents tend to spike during the winter months each year in Iowa, as do costly delays. And dangerous commutes can mean hazards for people and commerce alike: the state is one of the country’s largest producers of agricultural output, and much of that is moved on roads.

To improve road safety and efficiency, the Iowa Department of Transportation has teamed up with researchers at Iowa State University to use machine learning, including our TensorFlow framework, to provide insights into traffic behavior. Iowa State’s technology helps analyze the visual data gathered from stationary cameras and cameras mounted on snow plows. They also capture traffic information using radar detectors. Machine learning transforms that data into conclusions about road conditions, like identifying congestion and getting first responders to the scenes of accidents faster.

This is just one recent example of TensorFlow being used to make drivers’ lives easier across the United States. In California, snow may not be an issue, but traffic certainly is, and college students there used TensorFlow to identify potholes and dangerous road cracks in Los Angeles.

Officials in Iowa say machine learning could also be used to predict crash risks and travel speeds, and better understand drivers’ reactions or failures behind the wheel. But that doesn’t mean drivers will be off the hook. Iowa’s transportation and public safety departments constantly spread the same message: when it’s winter, slow down. Add some time onto your daily commute, and don’t use cruise control during a storm. That way, both drivers and state officials can work together to make winter travel less dreary—and a lot safer.

How machine learning can drive change in traffic-packed L.A.

There's nothing quite like driving through Los Angeles on a perfectly sunny day. But for drivers, the beauty of Southern California’s great weather and scenery is ruined by one thing: traffic.

According to a report by INRIX, my hometown is the worst city in the world for traffic, with a record of 102 hours of congestion during peak hours in 2017. My classmate, Ericson Hernandez, comes from New York City, which is ranked third globally for its traffic woes. Together, we decided to use machine learning to figure out the roots of bad traffic, including elements like road damage from potholes and cracks, and make rides around our beautiful cities enjoyable again.

As Ericson and I started studying electrical engineering at Loyola Marymount University, we began to develop an interest in a relatively new topic to the engineering world: machine learning. Our professor, Dr. Lei Huang, encouraged us to pick a project that we were passionate about, and Ericson and I wanted to use technology to tackle problems in the real world—such as helping the communities around us with road development.

This summer, we looked at previous research projects on detecting road cracks, and pondered how we could improve the algorithm and apply it to Los Angeles communities. We decided to use TensorFlow, Google’s open-source machine learning platform, to train a model that could quickly identify potholes and dangerous road cracks from camera footage of L.A. roads.
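As a rough sketch of that kind of frame classifier (not our actual training script), a small image backbone can be topped with a binary "road damage" head. The backbone below is randomly initialized to keep the sketch self-contained, whereas a real project would start from pretrained weights.

```python
import tensorflow as tf

# MobileNetV2 backbone without its classification head; weights=None
# keeps this sketch offline (in practice you would use weights="imagenet").
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(frame shows damage)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Each camera frame, resized to 224x224, would then be scored:
# damage_probability = model.predict(frame_batch)
```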

Students mount their camera before heading out to collect data.

Construction companies and cities could use this technology to identify which roads need fixing the most. With safer driving conditions and efficient road-work repairs, traffic in major cities could dramatically decrease, allowing people to travel in a quick, safe and enjoyable manner.

And that way, driving through Los Angeles can be about enjoying the view, not grumbling at the traffic.