
Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we're updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on the Edge TPU and the model design requirements to take full advantage of the Edge TPU at runtime.

We're also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.
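
To sketch what the parallel path can look like in Python, here's a minimal example that pins two models to two Edge TPUs using the tflite_runtime interpreter and its Edge TPU delegate (one of several ways to do this, not the only API); the model filenames and the ':0'/':1' device strings are placeholders:

    # Minimal sketch: two models in parallel, each bound to its own Edge TPU.
    # Model filenames and device strings (':0', ':1') are placeholders.
    import threading
    import numpy as np
    from tflite_runtime.interpreter import Interpreter, load_delegate

    def run(model_path, device, results, key):
        # The 'device' option binds this interpreter to one specific Edge TPU.
        delegate = load_delegate('libedgetpu.so.1', {'device': device})
        interpreter = Interpreter(model_path, experimental_delegates=[delegate])
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]
        # Feed dummy uint8 data shaped to the model's input tensor.
        interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.uint8))
        interpreter.invoke()
        out = interpreter.get_output_details()[0]
        results[key] = interpreter.get_tensor(out['index'])

    results = {}
    threads = [
        threading.Thread(target=run, args=('classify_edgetpu.tflite', ':0', results, 'a')),
        threading.Thread(target=run, args=('detect_edgetpu.tflite', ':1', results, 'b')),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()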

In addition to these updates, we're adding new capabilities to Coral with the release of the Environmental Sensor Board. It's an accessory board for the Coral Dev Board (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The on-board secure element also allows for easy communication with Google Cloud IoT Core.
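
As a rough sketch of what reading those sensors can look like, the snippet below polls the board's integrated sensors; the coral.enviro module and property names reflect the board's Python library as we understand it, so treat them as assumptions:

    # Sketch: polling the Environmental Sensor Board's integrated sensors.
    # The coral.enviro.board module and its property names are assumptions
    # based on the board's Python library; check the Coral docs for specifics.
    import time
    from coral.enviro.board import EnviroBoard

    enviro = EnviroBoard()
    while True:
        print('temp=%.1f C  humidity=%.1f %%  light=%.1f lux  pressure=%.1f kPa' %
              (enviro.temperature, enviro.humidity,
               enviro.ambient_light, enviro.pressure))
        time.sleep(2)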

The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone - including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We're excited to keep evolving the Coral platform. Please keep sending us feedback at [email protected].

You can see the full release notes on the Coral site.

Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google's been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we're expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train, and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps, in a power efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you're ready to scale to production, we offer the SoM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we'll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we're sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.
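
For illustration, here's a hedged sketch of that flow using the TensorFlow Lite converter's full-integer quantization path (quantization-aware training is the other route); the saved-model path, input shape, and representative data are placeholders:

    # Sketch: producing a fully int8-quantized TF Lite model for the Edge TPU.
    # The model path, input shape, and representative data are placeholders.
    import numpy as np
    import tensorflow as tf

    def representative_data():
        # Yield ~100 samples that resemble the model's real inputs.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model('my_model/')
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open('my_model.tflite', 'wb') as f:
        f.write(converter.convert())

    # The quantized model is then compiled for the Edge TPU, e.g. with the
    # edgetpu_compiler tool: edgetpu_compiler my_model.tflite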

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.
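
To make the device side concrete, here's a minimal sketch of publishing telemetry through Cloud IoT Core's MQTT bridge with paho-mqtt and a signed JWT; all project, registry, and device IDs, plus the key and CA file paths, are placeholders:

    # Sketch: publishing telemetry to Google Cloud IoT Core over MQTT.
    # All IDs and file paths below are placeholders.
    import datetime
    import jwt                       # pip install pyjwt
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    project, region = 'my-project', 'us-central1'
    registry, device = 'my-registry', 'my-device'

    # Cloud IoT Core authenticates devices with a JWT signed by the
    # device's private key (RS256 here).
    now = datetime.datetime.utcnow()
    token = jwt.encode(
        {'iat': now, 'exp': now + datetime.timedelta(minutes=60), 'aud': project},
        open('rsa_private.pem').read(), algorithm='RS256')

    client_id = ('projects/%s/locations/%s/registries/%s/devices/%s' %
                 (project, region, registry, device))
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(username='unused', password=token)
    client.tls_set(ca_certs='roots.pem')  # Google's CA root bundle
    client.connect('mqtt.googleapis.com', 8883)
    client.loop_start()
    client.publish('/devices/%s/events' % device, '{"temperature": 21.3}', qos=1)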

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SoM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US, with other countries to follow shortly.

For more product information visit g.co/aiy and sign up to be notified as products become available.

Showcase your innovations at the 2018 China-US Young Makers Competition

Posted by Bill Luan, Senior Program Manager & Greater China Regional Lead, Developer Relations

The 2018 China-U.S. Young Maker Competition was launched this week by the event co-organizer, Hackster.IO. Project submissions are now open to all makers, developers, and students ages 18-40 in both China and the United States. Google is the corporate sponsor for this year's competition.

Since 2014, this competition has run annually in support of the U.S.-China High-Level Consultation on People-to-People Exchange program. The competition encourages makers in both countries to create innovative products focused on community development, education, environmental protection, health & fitness, energy, transportation, and sustainable development.

Participants have the freedom to choose appropriate technologies to enable their innovations, and we encourage makers to consider open source technologies, such as TensorFlow and AIY Projects for artificial intelligence use cases, Android Studio for mobile applications, as well as Android Things for IoT solutions.

The top 10 projects in the U.S. will win an all-expenses-paid trip to Beijing to compete against Chinese makers on August 13-17 for a chance at $30,000 in prizes. Further, there are 35 additional chances to win Google prizes! So join the competition, and let your innovation shine on the global stage!

For more details, please see the event announcement on Hackster.IO.

AIY Projects: Updated kits for 2018

Posted by Billy Rutledge, Director of AIY Projects

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We're seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven't yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We're taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box along with the USB connector cable and a pre-provisioned SD card. Now users no longer need to download the software image and can get up and running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

AIY Voice Kit v2 includes Raspberry Pi Zero WH and pre-provisioned SD card

AIY Vision Kit v1.1 includes Raspberry Pi Zero WH, Raspberry Pi Camera v2, and pre-provisioned SD card

We're also introducing the AIY companion app for Android, available on Google Play, to make wireless setup and configuration a snap. The kits still work with a monitor, keyboard, and mouse as an alternative path, and we're working on iOS and Chrome companions, which are coming soon.

The AIY website has been refreshed with improved documentation, making it easier for young makers to get started and learn as they build. It also includes a new AIY Models area showcasing a collection of neural networks designed to work with AIY kits. While we've lowered one barrier to entry for the STEM audience, we recognize that there are many other things we can do to make our kits even more useful. We'll once again be at #MakerFaire events to gather feedback from our users, and in June we'll be working with teachers from all over the world at the ISTE conference in Chicago.

The new AIY Voice Kit and Vision Kit have arrived at Target Stores and Target.com (US) this month and we're working to make them globally available through retailers worldwide. Sign up on our mailing list to be notified when our products become available.

We hope you'll pick up one of the new AIY kits and learn more about how to build your own smart devices. Be sure to share your recipes on Hackster.io and social media using #aiyprojects.

Running Android Things on the AIY Voice Kit

Posted by Ryan Bae, Android Things

A major benefit of using Android Things is the ability to prototype connected devices and quickly scale to full commercial products. To further that goal, the Android Things team is partnering with AIY Projects, a new initiative to bring do-it-yourself artificial intelligence to makers. Today, the AIY Projects team launched their first open source reference project: a Raspberry Pi-based Voice Kit with instructions to build a Voice User Interface (VUI) that can use cloud services (like the new Google Assistant SDK or Cloud Speech API) or run completely on-device with TensorFlow. We are releasing a special Android Things Developer Preview 3.1 build for Raspberry Pi 3 to support the Voice Kit. Developers can run Android Things on the Voice Kit with full functionality, including integration with the Google Assistant SDK. To get started, visit the AIY website, download the latest Android Things Developer Preview, and follow the instructions.
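
As one concrete example of the cloud path, here's a short sketch of transcribing a recorded clip with the Cloud Speech API via the google-cloud-speech client; the filename and audio parameters are illustrative, and the client-library surface shown is today's rather than the 2017 preview's:

    # Sketch: transcribing a short WAV clip with the Cloud Speech API.
    # The filename and audio parameters are illustrative.
    from google.cloud import speech  # pip install google-cloud-speech

    client = speech.SpeechClient()
    with open('command.wav', 'rb') as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code='en-US')

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)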

The Voice Kit ships out to all MagPi Magazine subscribers on May 4, 2017, and the parts list, assembly instructions, source code, and suggested extensions are available on the AIY Projects website. The complete kit is also for sale at over 500 Barnes & Noble stores nationwide, as well as UK retailers WH Smith, Tesco, Sainsbury's, and Asda.

We are excited to see what you build with the Voice Kit on Android Things. We also encourage you to join Google's IoT Developers Community and the Google Assistant SDK Developers community on Google+, both great resources for keeping up to date and discussing ideas with other developers.

AIY Projects: Do-it-yourself AI for Makers

Posted by Billy Rutledge, Director of AIY Projects

Our teams are continually inspired by how Makers use Google technology to do crazy, cool new things. Things we would've never imagined doing ourselves, things that solve real world problems. After talking to Maker community members, we learned that many were interested in using artificial intelligence in projects, but didn't know where to begin. To address this gap, we're launching AIY Projects: do-it-yourself artificial intelligence for Makers.
With AIY Projects, Makers can use artificial intelligence to make human-to-machine interaction more like human-to-human interactions. We'll be releasing a series of reference kits, starting with voice recognition. The speech recognition capability in our first project could be used to:
  • Replace physical buttons and digital displays (those are so 90's) on household appliances and consumer electronics (imagine a coffee machine with no buttons or screen -- just talk to it)
  • Replace smartphone apps to control devices (those are so 2000's) on connected devices (imagine a connected light bulb or thermostat -- just talk to them)
  • Add voice recognition to assistive robotics (e.g. for accessibility) -- just talk to the robot as a simplified programming interface, e.g. "tell me what's in this room" or "tell me when you see the mail-carrier come to the door"
Fully assembled Voice Kit.
The first open source reference project is the Voice Kit: instructions to build a Voice User Interface (VUI) that can use cloud services (like the new Google Assistant SDK or Cloud Speech API) or run completely on-device. This project extends the functionality of the most popular single board computer used for digital making - the Raspberry Pi.
Everything that comes in the Voice Kit.

The included Voice HAT (Hardware Attached on Top) contains hardware for audio capture and playback: easy-to-use connectors for the dual-mic daughter board and speaker, GPIO pins to connect low-voltage components like micro-servos and sensors, and an optional barrel connector for a dedicated power supply. It was designed and tested with the Raspberry Pi 3 Model B.
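
For instance, one of those low-voltage GPIO pins can drive a micro-servo with the stock RPi.GPIO PWM API; nothing here is Voice-Kit-specific, and the BCM pin number is a placeholder to be checked against the HAT pinout:

    # Sketch: sweeping a micro-servo from a GPIO pin on the Voice HAT.
    # BCM pin 26 is a placeholder; check the HAT pinout for a free servo pin.
    import time
    import RPi.GPIO as GPIO

    SERVO_PIN = 26
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SERVO_PIN, GPIO.OUT)

    pwm = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz servo control signal
    pwm.start(7.5)                  # a duty cycle of ~7.5% centers the servo
    try:
        for duty in (5.0, 7.5, 10.0, 7.5):  # left, center, right, center
            pwm.ChangeDutyCycle(duty)
            time.sleep(0.5)
    finally:
        pwm.stop()
        GPIO.cleanup()
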
Alternatively, developers can run Android Things on the Voice Kit with full functionality - making it easy to prototype Internet-of-Things devices and scale to full commercial products, with several turnkey hardware solutions available (including Intel Edison, NXP Pico, and Raspberry Pi 3). Download the latest Android Things developer preview to get started.
Close up of the Voice HAT accessory board.


Making with the Google Assistant SDK
The Google Assistant SDK developer preview was released last week. It's enabled by default, and brings the Google Assistant to your Voice Kit, including voice control, natural language understanding, Google's smarts, and more.
In combination with the rest of the Voice Kit, we think the Google Assistant SDK will provide you with many creative opportunities to build fun and engaging projects. Makers have already started experimenting with the SDK - including building a mocktail maker.
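
The basic shape of an Assistant SDK program on the kit is a small event loop. The sketch below follows the google-assistant-library pattern; the credentials path is a placeholder, and constructor arguments varied across preview releases:

    # Sketch: a minimal Google Assistant event loop on the Voice Kit, following
    # the google-assistant-library pattern. The credentials path is a
    # placeholder, and constructor arguments varied across preview releases.
    import json
    import google.oauth2.credentials
    from google.assistant.library import Assistant
    from google.assistant.library.event import EventType

    with open('credentials.json') as f:
        credentials = google.oauth2.credentials.Credentials(token=None,
                                                            **json.load(f))

    with Assistant(credentials) as assistant:
        for event in assistant.start():
            # React to lifecycle events, e.g. light an LED while listening.
            if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
                print('Listening...')
            elif event.type == EventType.ON_CONVERSATION_TURN_FINISHED:
                print('Done.')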


The Voice Kit ships out to all MagPi Magazine subscribers on May 4, 2017, and we've published a parts list, assembly instructions, source code, and suggested extensions to our website: aiyprojects.withgoogle.com. The complete kit is also for sale at over 500 Barnes & Noble stores nationwide, as well as UK retailers WH Smith, Tesco, Sainsbury's, and Asda.
This is just the first AIY Project. There are more in the works, but we need to know how you'd like to incorporate AI into your own projects. Visit hackster.io to share your experiences and discuss future projects. Use #AIYprojects on social media to help us find your inventions. And if you happen to be at the San Mateo Maker Faire on May 19-21, 2017, stop by the Google pavilion to give us feedback.