Tag Archives: Maker

Coral updates: Project tutorials, a downloadable compiler, and a new distributor

Posted by Vikram Tank (Product Manager), Coral Team

coral hardware

We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features, and content that helps ML practitioners, engineers, and prototypers create the next generation of hardware.

To improve our toolchain, we're making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.

We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.

Our distributor network is growing as well: Arrow will soon sell Coral products.

Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we're updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture that you want. This greatly increases the variety of models that you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on Edge TPU and model design requirements to take full advantage of the Edge TPU at runtime.

We're also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.
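The parallel pattern the Python API enables can be sketched as follows. This is a minimal illustration, not the actual Coral API: `run_inference` is a hypothetical stand-in for an Edge TPU-backed interpreter, and the point is the one-thread-per-device structure that lets two models execute concurrently instead of queuing on a single accelerator.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an Edge TPU-backed interpreter; in the real
# Coral Python API each interpreter would be bound to a distinct device.
def run_inference(device_id, frames):
    # Pretend each "inference" classifies a frame; here we just tag it
    # with the device that processed it.
    return [(device_id, f) for f in frames]

def run_parallel(batches):
    # One worker thread per (simulated) Edge TPU device, so each model's
    # inferences run concurrently on its own accelerator.
    with ThreadPoolExecutor(max_workers=len(batches)) as pool:
        futures = [pool.submit(run_inference, dev, frames)
                   for dev, frames in enumerate(batches)]
        return [f.result() for f in futures]

results = run_parallel([["frame_a", "frame_b"], ["frame_c"]])
```

In the real API, each worker would own an interpreter bound to a specific Edge TPU device rather than a placeholder function.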

In addition to these updates, we’re adding new capabilities to Coral with the release of the Environmental Sensor Board. It’s an accessory board for the Coral Dev Platform (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, plus the ability to add more sensors via its four Grove connectors. The secure element on board also allows for easy communication with Google Cloud IoT Core.


The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone - including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We're excited to keep evolving the Coral platform. Please keep sending us feedback at coral-support@google.com.

You can see the full release notes on the Coral site.

Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google's been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we're expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train, and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy — all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps in a power-efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you're ready to scale to production, we offer the SoM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we'll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we're sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.
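Quantization is the step that makes a model expressible in the 8-bit integer arithmetic the Edge TPU runs. As a rough sketch of what that conversion does numerically (this is an illustrative helper, not the actual TF Lite tooling), TF Lite-style affine quantization maps each float to a uint8 via a scale and zero point, so that `real ≈ scale * (q - zero_point)`:

```python
def quantize(values, num_bits=8):
    """Affine (asymmetric) quantization of floats to uint8, the scheme
    TF Lite quantized models use: real ~= scale * (q - zero_point).
    Illustrative sketch only, not the actual TF Lite converter."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # The representable range must include 0.0 so zero maps exactly.
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float values from the quantized integers.
    return [scale * (qi - zero_point) for qi in q]

q, scale, zp = quantize([-1.0, 0.0, 0.5, 1.0])
restored = dequantize(q, scale, zp)
```

Each recovered value differs from the original by less than one quantization step (`scale`), which is the accuracy cost traded for integer-only inference.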

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable System-on-Module (SoM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US, with other countries to follow shortly.

For more product information visit g.co/aiy and sign up to be notified as products become available.

Making spaces: supporting makerspaces in education



Today marks the first day of the National Week of Making, a celebration of making and makers across the US. We like to think of ourselves as a company composed of makers, which is why we’re so committed to supporting making in our offices and in our communities. We’re taking this commitment even further today through a new collaboration with the Maker Education Initiative and the Children’s Museum of Pittsburgh. Together we will be working closely with 10 science museums and nonprofits across the country, providing each of them with tools and resources to support hands-on training for a fleet of new makerspaces in their community. Through this partnership we hope to help create 100 new makerspaces around the country in the next year.
Educators at a professional development session at the Children's Museum of Pittsburgh. Photo by Renee Rosensteel, 2015
As part of the program, schools, libraries, and community centers around the world will soon have access to the same fundraising toolkit, professional development resources, and support from other maker educators online through Maker Ed.

Our work with Maker Ed and the Children’s Museum of Pittsburgh is part of a broader set of programs designed to support making and makerspaces in schools and community organizations. We’ve worked with Stanford University’s FabLearn program by funding pilot labs and research. We’ve supported research on making in education at Indiana University. And as part of the Maker Promise, we’ll be working with Digital Promise and Maker Ed to provide 1,000 sets of safety gear to schools around the country. You can learn more about our programs and technology for Making & Science at makingscience.withgoogle.com.

Inspiring future makers and scientists with Science Journal



We believe that anyone can be a maker. Making doesn't just mean coding or working with electronics. It can be building or cooking, fixing a broken salad spinner or re-sewing a button on a teddy bear. Making is about looking at the world around you and creating - or, you guessed it, making - ways to improve it.

Science is also fundamentally about improving the world around you. It’s not just memorizing facts, wearing a lab coat or listening to a lecture. It’s observing the world around us to figure out how it works and how we can make things better through experimentation and discovery.

To bring out that inner scientist in all of us, today we’re introducing Science Journal: a digital science notebook that helps kids (and adults!) measure and explore the world around them. With this app, you can record data from sensors on your Android phone (or connected via an Arduino), take notes, observe, interpret and predict. Fundamentally, we think this application will help you learn how to think like a scientist!
Use Science Journal and the light sensor in your Android phone to collect data and run experiments
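The app's core loop — record timestamped readings from a sensor, then summarize a trial — can be sketched in a few lines. This is an illustrative model of that workflow, not the actual Science Journal code; `SensorRecorder` and the fake light sensor are hypothetical names:

```python
import time

class SensorRecorder:
    """Minimal sketch of a Science Journal-style trial: collect
    timestamped readings from a sensor and summarize them.
    (Illustrative only, not the actual app's code.)"""
    def __init__(self, read_sensor):
        self.read_sensor = read_sensor  # callable returning one sample
        self.samples = []

    def record(self, n):
        # Take n readings, each tagged with a wall-clock timestamp.
        for _ in range(n):
            self.samples.append((time.time(), self.read_sensor()))

    def summary(self):
        # The min/max/average stats a trial view would display.
        values = [v for _, v in self.samples]
        return {"min": min(values), "max": max(values),
                "avg": sum(values) / len(values)}

# Stand-in for a phone's light sensor returning lux values.
fake_light = iter([120.0, 300.0, 180.0]).__next__
rec = SensorRecorder(fake_light)
rec.record(3)
stats = rec.summary()
```

Swapping `fake_light` for a real sensor source (an Android sensor callback, or serial data from an Arduino) gives the same record-then-summarize structure the app is built around.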
Since we know that hands-on projects increase engagement, cultivate curiosity and spark a lifelong interest in learning, we also teamed up with the Exploratorium - a leader in science education - to develop and assemble creative hands-on learning activity kits to accompany the Science Journal app. These Science Journal kits include inexpensive sensors, microcontrollers and craft supplies that bring science to life in new ways. The kits are available for purchase in the US or can even be assembled yourself.
Build and measure your own wind spinners using Science Journal activities and kits 

See science in action as Imagination Foundation chapters around the world put these activities to use
We’re excited to nurture an open ecosystem where people everywhere can use Science Journal to create their own activities, integrate their own sensors and even build kits of their own. To that end, we have released the microcontroller firmware code on GitHub and will be open sourcing the Android app later this summer. We’re eager to work with hardware vendors, science educators and the open source community to continue improving Science Journal.
Science Journal lets you visualize and graph data from your phone's accelerometer, light sensor, microphone and more. You can record data and set up trials, experiments and projects in the app.

But our goal to inspire budding scientists and makers goes beyond Science Journal. We’ve sent over 120,000 kids to their local science museum as part of Google Field Trip Days, encouraged and supported future changemakers through Google Science Fair and sponsored organizations such as NOVA, FIRST Robotics and Lick Observatory who are pushing science forward for all of us. And to help keep our young scientists safe, we’ve also distributed over 350,000 pairs of safety glasses at schools, makerspaces and Maker Faires around the world.

Many of the Google products used today by billions of people wouldn’t exist if not for the makers, scientists and engineers who wanted to create projects that could help improve our world. If you want to join in, come meet us today through Sunday at the Bay Area Maker Faire 2016, check out the Making & Science initiative and go subscribe to our YouTube channel. Let’s all make science, together.