
Coral updates: Project tutorials, a downloadable compiler, and a new distributor

Posted by Vikram Tank (Product Manager), Coral Team


We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features and content that helps ML practitioners, engineers, and prototypers create the next generation of hardware.

To improve our toolchain, we're making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.
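If you script your builds, the downloadable compiler slots neatly into a pipeline. Here's a minimal sketch that shells out to it from Python; the model filename is a placeholder, and we're assuming the binary is installed on your PATH under its documented name, edgetpu_compiler:

```python
import subprocess

# Placeholder input: must already be a quantized TensorFlow Lite model.
MODEL = "mobilenet_v2_quant.tflite"

# Run the downloadable compiler (assumed to be on PATH as `edgetpu_compiler`).
# On success it writes a companion *_edgetpu.tflite file ready for deployment.
subprocess.run(["edgetpu_compiler", MODEL], check=True)
```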

We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.

Our distributor network is growing as well: Arrow will soon sell Coral products.

Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we're updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on Edge TPU and model design requirements to take full advantage of the Edge TPU at runtime.
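In practice, the main gate is now quantization rather than architecture: the Edge TPU executes only fully integer-quantized TensorFlow Lite models. Here's a rough sketch of that conversion step using the TF 2.x post-training quantization API, with a toy model and random calibration data standing in for your own:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in model; substitute your own architecture here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_data_gen():
    # A handful of representative batches calibrates the int8 ranges.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())  # this file then goes to the Edge TPU compiler
```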

We're also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.
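As a sketch of the multiple-device idea on the Python side, here's one way to pin two models to two Edge TPUs using the TensorFlow Lite runtime with the Edge TPU delegate. The delegate filename and the device option follow the Coral documentation as we understand it, and the model filenames are placeholders:

```python
from tflite_runtime.interpreter import Interpreter, load_delegate

def make_interpreter(model_path, device):
    # `device` pins the interpreter to one Edge TPU, e.g. 'usb:0' or 'usb:1'.
    delegate = load_delegate("libedgetpu.so.1", {"device": device})
    return Interpreter(model_path=model_path, experimental_delegates=[delegate])

# Each model gets its own accelerator, so the two can run in parallel threads.
detector = make_interpreter("detect_edgetpu.tflite", "usb:0")
classifier = make_interpreter("classify_edgetpu.tflite", "usb:1")
for interpreter in (detector, classifier):
    interpreter.allocate_tensors()
```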

In addition to these updates, we’re adding new capabilities to Coral with the release of the Environmental Sensor Board. It’s an accessory board for the Coral Dev Platform (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The on-board secure element also allows for easy communication with Google Cloud IoT Core.
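To give a feel for the sensor API, here's a minimal polling loop. The coral.enviro module and property names are taken from the board's Python library as we understand it, so check them against the official docs:

```python
import time

from coral.enviro.board import EnviroBoard  # assumed module path; see Coral docs

enviro = EnviroBoard()
while True:
    # Each property returns the latest reading from the on-board sensors.
    print("temp={:.1f}C humidity={:.1f}% light={:.1f}lux pressure={:.1f}kPa".format(
        enviro.temperature, enviro.humidity, enviro.ambient_light, enviro.pressure))
    time.sleep(1)
```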

The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone, including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device, greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We're excited to keep evolving the Coral platform. Please keep sending us feedback at [email protected].

You can see the full release notes on the Coral site.

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.
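In practice, running an inference through an attached Accelerator takes only a few lines of Python. A hedged sketch, where the class and method names follow the Edge TPU Python API as we understand it and the file paths are placeholders:

```python
from PIL import Image

from edgetpu.classification.engine import ClassificationEngine  # assumed API

# Placeholders: a model compiled for the Edge TPU and a test image.
engine = ClassificationEngine("mobilenet_v2_edgetpu.tflite")
image = Image.open("bird.jpg")

# Returns the top-k (label_id, confidence) results from on-device inference.
for label_id, score in engine.classify_with_image(image, top_k=3):
    print(label_id, score)
```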

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real-world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US, with other countries to follow shortly.

For more product information, visit g.co/aiy and sign up to be notified as products become available.

Introducing the AIY Vision Kit: Add computer vision to your maker projects

Posted by Billy Rutledge, Director, AIY Projects

Since we released AIY Voice Kit, we've been inspired by the thousands of amazing builds coming in from the maker community. Today, the AIY Team is excited to announce our next project: the AIY Vision Kit — an affordable, hackable, intelligent camera.

Much like the Voice Kit, our Vision Kit is easy to assemble and connects to a Raspberry Pi computer. Based on user feedback, this new kit is designed to work with the smaller Raspberry Pi Zero W computer and runs its vision algorithms on-device so there's no cloud connection required.

Build intelligent devices that can perceive, not just see

The kit materials list includes a VisionBonnet, a cardboard outer shell, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, flex cables, standoffs, a tripod mounting nut and connecting components.

The VisionBonnet is an accessory board for the Raspberry Pi Zero W that features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural networks. This gives makers visual perception rather than mere image sensing. It can run at up to 30 frames per second, providing near real-time performance.

Bundled with the software image are three neural network models:

  • A model based on MobileNets that can recognize a thousand common objects.
  • A model for face detection capable of not only detecting faces in the image, but also scoring facial expressions on a "joy scale" that ranges from "sad" to "laughing" (see the sketch after this list).
  • A model for the important task of discerning between cats, dogs and people.
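To give a sense of the programming model, here's a hedged sketch of the joy-scoring idea with the face-detection model. The module and attribute names follow the AIY vision Python library as we understand it:

```python
from picamera import PiCamera

from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

# Stream frames from the Pi camera through the on-device face detector
# and print a joy score for every face found.
with PiCamera(sensor_mode=4, resolution=(1640, 1232)) as camera:
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            for face in face_detection.get_faces(result):
                print("bbox={} joy={:.2f}".format(face.bounding_box, face.joy_score))
```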

For those of you who have your own models in mind, we've included the original TensorFlow code and a compiler. Take a new model you have (or train) and run it on the Intel® Movidius™ MA2450.

Extend the kit to solve your real-world problems

The AIY Vision Kit is completely hackable:

  • Want to prototype your own product? The Vision Kit and the Raspberry Pi Zero W can fit into any number of tiny enclosures.
  • Want to change the way the camera reacts? Use the Python API to write new software to customize the RGB button colors, piezo element sounds and GPIO pins.
  • Want to add more lights, buttons, or servos? Use the 4 GPIO expansion pins to connect your own hardware (see the sketch after this list).
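For that last item, here's a minimal sketch that blinks an LED from one of the expansion pins using the standard RPi.GPIO library. The pin number is hypothetical; check the kit's pinout before wiring anything:

```python
import time

import RPi.GPIO as GPIO

LED_PIN = 26  # hypothetical BCM pin; confirm against the VisionBonnet pinout

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
try:
    while True:  # blink forever; Ctrl-C to stop
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```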

We hope you'll use it to solve interesting challenges, such as:

  • Build "hotdog/not hotdog" (or any other food recognizer)
  • Turn music on when someone walks through the door
  • Send a text when your car leaves the driveway
  • Open the dog door when she wants to get back in the house

Ready to get your hands on one?

AIY Vision Kits will be available in December, with online pre-sales at Micro Center starting today.

*** Please note that AIY Vision Kit requires Raspberry Pi Zero W, Raspberry Pi Camera V2 and a micro SD card, which must be purchased separately.

Tell us what you think!

We're listening — let us know how we can improve our kits and share what you're making using the #AIYProjects hashtag on social media. We hope AIY Vision Kit inspires you to build all kinds of creative devices.

AIY Projects update: new maker projects, new partners, new kits

Posted by Billy Rutledge, Director, AIY Projects

Makers are hands-on when it comes to making change. We're explorers, hackers and problem solvers who build devices, ecosystems, art (sometimes a combination of the three) on the basis of our own (often unconventional) ideas. So when my team first set out to empower makers of all types and ages with the AI technology we've honed at Google, we knew whatever we built had to be open and accessible. We steered clear of limitations that come from platform and software stack requirements, high cost and complex setup, and fixed our focus on the curiosity and inventiveness that inspire makers around the world.

When we launched our Voice Kit with help from our partner Raspberry Pi in May and sold out globally in just a few hours, we got the message loud and clear. There is a genuine demand among do-it-yourselfers for artificial intelligence that makes human-to-machine interaction more like natural human interaction.

Last week we announced the Speech Commands Dataset, a collaboration between the TensorFlow and AIY teams. The dataset has 65,000 one-second long utterances of 30 short words, contributed through the AIY website by thousands of different people, and allows you to build simple voice interfaces for applications. We're currently in the process of integrating the dataset with the next release of the Voice Kit, so makers can build devices that respond to simple voice commands without the press of a button or an internet connection.
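If you'd like to poke at the recordings yourself, the archive is a plain tarball with one folder per word. A minimal download sketch, using the dataset URL published with the TensorFlow announcement:

```python
import tensorflow as tf

# Download and extract the archive; each word has its own folder of WAV files.
path = tf.keras.utils.get_file(
    "speech_commands_v0.01.tar.gz",
    "http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz",
    extract=True)
print("Downloaded to:", path)
```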

Today, you can pre-order your Voice Kit, which will be available for purchase in stores and online through Micro Center.

Or you may have to resort to the hack that maker Shivasiddarth created when the Voice Kit with MagPi #57 sold out in May, and then again (within 17 minutes) earlier this month.

Cool ways that makers are already using the Voice Kit

Martin Mander created a retro-inspired intercom that he calls 1986 Google Pi Intercom. He describes it as "a wall-mounted Google voice assistant using a Raspberry Pi 3 and the Google AIY (Artificial Intelligence Yourself) [voice] kit." He used a mid-80s intercom that he bought on sale for £4. It cleaned up well!

Get the full story from Martin and see what SlashGear had to say about the project.

(This one's for Doctor Who fans) Tom Minnich created a Dalek-voiced assistant.

He offers a tutorial on how you can modify the Voice Kit to do something similar — perhaps create a Drogon-voiced assistant?

Victor Van Hee used the Voice Kit to create a voice-activated internet streaming radio that can play other types of audio files as well. He provides instructions, so you can do the same.

The Voice Kit is currently available in the U.S. We'll be expanding globally by the end of this year. Stay tuned here, where we'll share the latest updates. The strong demand for the Voice Kit drives us to keep the momentum going on AIY Projects.

Inspiring makers with kits that understand human speech, vision and movement

What we build next will include vision and motion detection and will go hand in hand with our existing Voice Kit. AIY Projects kits will soon offer makers the "eyes," "ears," "voice" and sense of "balance" to allow simple yet powerful device interfaces.

We'd love to bake your input into our next releases. Go to hackster.io or leave a comment to start up a conversation with us. Show us and the maker community what you're working on by using hashtag #AIYprojects on social media.

Running Android Things on the AIY Voice Kit

Posted by Ryan Bae, Android Things

A major benefit of using Android Things is the ability to prototype connected devices and quickly scale to full commercial products. To further that goal, the Android Things team is partnering with AIY Projects, a new initiative to bring do-it-yourself artificial intelligence to makers. Today, the AIY Projects team launched their first open source reference project: a Raspberry Pi-based Voice Kit with instructions to build a Voice User Interface (VUI) that can use cloud services (like the new Google Assistant SDK or Cloud Speech API) or run completely on-device with TensorFlow. We are releasing a special Android Things Developer Preview 3.1 build for Raspberry Pi 3 to support the Voice Kit. Developers can run Android Things on the Voice Kit with full functionality, including integration with the Google Assistant SDK. To get started, visit the AIY website, download the latest Android Things Developer Preview, and follow the instructions.

The Voice Kit ships out to all MagPi Magazine subscribers on May 4, 2017, and the parts list, assembly instructions, source code, as well as suggested extensions are available on the AIY Projects website. The complete kit is also for sale at over 500 Barnes & Noble stores nationwide, as well as UK retailers WH Smith, Tesco, Sainsbury's, and Asda.

We are excited to see what you build with the Voice Kit on Android Things. We also encourage you to join Google's IoT Developers Community and Google Assistant SDK Developers on Google+, great resources to keep up to date and discuss ideas with other developers.

AIY Projects: Do-it-yourself AI for Makers

Posted by Billy Rutledge, Director of AIY Projects
Our teams are continually inspired by how Makers use Google technology to do crazy, cool new things. Things we would've never imagined doing ourselves, things that solve real world problems. After talking to Maker community members, we learned that many were interested in using artificial intelligence in projects, but didn't know where to begin. To address this gap, we're launching AIY Projects: do-it-yourself artificial intelligence for Makers.
With AIY Projects, Makers can use artificial intelligence to make human-to-machine interaction more like human-to-human interactions. We'll be releasing a series of reference kits, starting with voice recognition. The speech recognition capability in our first project could be used to:
  • Replace physical buttons and digital displays (those are so '90s) on household appliances and consumer electronics (imagine a coffee machine with no buttons or screen -- just talk to it)
  • Replace smartphone apps (those are so 2000s) to control connected devices (imagine a connected light bulb or thermostat -- just talk to them)
  • Add voice recognition to assistive robotics (e.g. for accessibility) -- just talk to the robot as a simplified programming interface, e.g. "tell me what's in this room" or "tell me when you see the mail carrier come to the door"
Fully assembled Voice Kit.
The first open source reference project is the Voice Kit: instructions to build a Voice User Interface (VUI) that can use cloud services (like the new Google Assistant SDK or Cloud Speech API) or run completely on-device. This project extends the functionality of the most popular single board computer used for digital making - the Raspberry Pi.
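To give a flavor of the Cloud Speech path, here's a trimmed-down sketch in the spirit of the kit's demo code. The module and function names follow the AIY voice library as we understand it, so treat them as assumptions:

```python
import aiy.audio
import aiy.cloudspeech
import aiy.voicehat

# Push-to-talk loop: press the arcade button, speak, and act on the transcript.
recognizer = aiy.cloudspeech.get_recognizer()
recognizer.expect_phrase("turn on the light")
button = aiy.voicehat.get_button()
aiy.audio.get_recorder().start()

while True:
    print("Press the button and speak")
    button.wait_for_press()
    text = recognizer.recognize()
    if text and "turn on the light" in text:
        print("Toggling the light (your GPIO code goes here)")
```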
Everything that comes in the Voice Kit.

The included Voice HAT (Hardware Attached on Top) contains hardware for audio capture and playback: easy-to-use connectors for the dual mic daughter board and speaker, GPIO pins to connect low-voltage components like micro-servos and sensors, and an optional barrel connector for a dedicated power supply. It was designed and tested with the Raspberry Pi 3 Model B.
Alternatively, developers can run Android Things on the Voice Kit with full functionality, making it easy to prototype Internet-of-Things devices and scale to full commercial products with several turnkey hardware solutions available (including Intel Edison, NXP Pico, and Raspberry Pi 3). Download the latest Android Things developer preview to get started.
Close up of the Voice HAT accessory board.


Making with the Google Assistant SDK
The Google Assistant SDK developer preview was released last week. It's enabled by default and brings the Google Assistant to your Voice Kit, including voice control, natural language understanding, Google's smarts, and more.
In combination with the rest of the Voice Kit, we think the Google Assistant SDK will provide you many creative opportunities to build fun and engaging projects. Makers have already started experimenting with the SDK, including building a mocktail maker.


The Voice Kit ships out to all MagPi Magazine subscribers on May 4, 2017, and we've published a parts list, assembly instructions, source code and suggested extensions on our website: aiyprojects.withgoogle.com. The complete kit is also for sale at over 500 Barnes & Noble stores nationwide, as well as UK retailers WH Smith, Tesco, Sainsbury's, and Asda.
This is just the first AIY Project. There are more in the works, but we need to know how you'd like to incorporate AI into your own projects. Visit hackster.io to share your experiences and discuss future projects. Use #AIYprojects on social media to help us find your inventions. And if you happen to be at the San Mateo Maker Faire on May 19-21, 2017, stop by the Google pavilion to give us feedback.