
Coral updates: Project tutorials, a downloadable compiler, and a new distributor

Posted by Vikram Tank (Product Manager), Coral Team

Coral hardware

We’re committed to evolving Coral to make it even easier to build systems with on-device AI. Our team is constantly working on new product features and content that help ML practitioners, engineers, and prototypers create the next generation of hardware.

To improve our toolchain, we're making the Edge TPU Compiler available to users as a downloadable binary. The binary works on Debian-based Linux systems, allowing for better integration into custom workflows. Instructions on downloading and using the binary are on the Coral site.

We’re also adding a new section to the Coral site that showcases example projects you can build with your Coral board. For instance, Teachable Machine is a project that guides you through building a machine that can quickly learn to recognize new objects by re-training a vision classification model directly on your device. Minigo shows you how to create an implementation of AlphaGo Zero and run it on the Coral Dev Board or USB Accelerator.

Our distributor network is growing as well: Arrow will soon sell Coral products.

Updates from Coral: A new compiler and much more

Posted by Vikram Tank (Product Manager), Coral Team

Coral has been public for about a month now, and we’ve heard some great feedback about our products. As we evolve the Coral platform, we’re making our products easier to use and exposing more powerful tools for building devices with on-device AI.

Today, we're updating the Edge TPU model compiler to remove the restrictions around specific architectures, allowing you to submit any model architecture you want. This greatly increases the variety of models you can run on the Coral platform. Just be sure to review the TensorFlow ops supported on the Edge TPU and the model design requirements to take full advantage of the Edge TPU at runtime.

We're also releasing a new version of Mendel OS (3.0 Chef) for the Dev Board with a new board management tool called Mendel Development Tool (MDT).

To help with the developer workflow, our new C++ API works with the TensorFlow Lite C++ API so you can execute inferences on an Edge TPU. In addition, both the Python and C++ APIs now allow you to run multiple models in parallel, using multiple Edge TPU devices.
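To make that concrete, here's a minimal Python sketch of running two classification models in parallel, one per Edge TPU. It assumes two Edge TPUs are attached, and the module paths, ClassificationEngine constructor, and classify_with_image call are our reading of the edgetpu Python library, so treat the names and signatures as assumptions and confirm them against the Coral API docs.

# Sketch: one model per Edge TPU, run on separate threads.
# Model file names and the image path are placeholders.
from threading import Thread

from PIL import Image
from edgetpu.basic import edgetpu_utils
from edgetpu.classification.engine import ClassificationEngine

# Enumerate the Edge TPUs attached to this host (assumes at least two).
devices = edgetpu_utils.ListEdgeTpuPaths(
    edgetpu_utils.EDGE_TPU_STATE_UNASSIGNED)

def classify(model_path, device_path, image_path):
    # Binding each engine to its own Edge TPU lets the two models
    # run truly in parallel instead of sharing one accelerator.
    engine = ClassificationEngine(model_path, device_path=device_path)
    results = engine.classify_with_image(Image.open(image_path), top_k=3)
    print(model_path, results)

threads = [
    Thread(target=classify,
           args=('model_a_edgetpu.tflite', devices[0], 'test.jpg')),
    Thread(target=classify,
           args=('model_b_edgetpu.tflite', devices[1], 'test.jpg')),
]
for t in threads:
    t.start()
for t in threads:
    t.join()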

In addition to these updates, we’re adding new capabilities to Coral with the release of the Environmental Sensor Board. It’s an accessory board for the Coral Dev Board (and Raspberry Pi) that brings sensor input to your models. It has integrated light, temperature, humidity, and barometric sensors, and the ability to add more sensors via its four Grove connectors. The on-board secure element also allows for easy communication with Google Cloud IoT Core.
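To give a feel for how that sensor input reaches your code, here's a short Python sketch using the board's companion library; the coral.enviro module path and the property names are assumptions on our part, so verify them against the Coral documentation.

# Hypothetical sketch: read the Environmental Sensor Board's sensors.
# Module path and property names are assumed, not confirmed API.
from coral.enviro.board import EnviroBoard

enviro = EnviroBoard()
print('Temperature: %.1f C' % enviro.temperature)
print('Humidity:    %.1f %%' % enviro.humidity)
print('Light:       %.1f lux' % enviro.ambient_light)
print('Pressure:    %.1f kPa' % enviro.pressure)

Readings like these can be fed into a model as extra input features, or logged to Google Cloud IoT Core alongside your inference results.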

The team has also been working with partners to help them evaluate whether Coral is the right fit for their products. We’re excited that Oivi has chosen us to be the base platform of their new handheld AI-camera. This product will help prevent blindness among diabetes patients by providing early, automated detection of diabetic retinopathy. Anders Eikenes, CEO of Oivi, says “Oivi is dedicated towards providing patient-centric eye care for everyone - including emerging markets. We were honoured to be selected by Google to participate in their Coral alpha program, and are looking forward to our continued cooperation. The Coral platform gives us the ability to run our screening ML models inside a handheld device; greatly expanding the access and ease of diabetic retinopathy screening.”

Finally, we’re expanding our distributor network to make it easier to get Coral boards into your hands around the world. This month, Seeed and NXP will begin to sell Coral products, in addition to Mouser.

We're excited to keep evolving the Coral platform. Please keep sending us feedback at [email protected].

You can see the full release notes on the Coral site.

Introducing Coral: Our platform for development with local AI

Posted by Billy Rutledge (Director) and Vikram Tank (Product Mgr), Coral Team

AI can be beneficial for everyone, especially when we all explore, learn, and build together. To that end, Google's been developing tools like TensorFlow and AutoML to ensure that everyone has access to build with AI. Today, we're expanding the ways that people can build out their ideas and products by introducing Coral into public beta.

Coral is a platform for building intelligent devices with local AI.

Coral offers a complete local AI toolkit that makes it easy to grow your ideas from prototype to production. It includes hardware components, software tools, and content that help you create, train, and run neural networks (NNs) locally, on your device. Because we focus on accelerating NNs locally, our products offer speedy neural network performance and increased privacy, all in power-efficient packages. To help you bring your ideas to market, Coral components are designed for fast prototyping and easy scaling to production lines.

Our first hardware components feature the new Edge TPU, a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps in a power-efficient manner.

Coral Camera Module, Dev Board and USB Accelerator

For new product development, the Coral Dev Board is a fully integrated system designed as a system on module (SoM) attached to a carrier board. The SoM brings the powerful NXP iMX8M SoC together with our Edge TPU coprocessor (as well as Wi-Fi, Bluetooth, RAM, and eMMC memory). To make prototyping computer vision applications easier, we also offer a Camera that connects to the Dev Board over a MIPI interface.

To add the Edge TPU to an existing design, the Coral USB Accelerator allows for easy integration into any Linux system (including Raspberry Pi boards) over USB 2.0 and 3.0. PCIe versions are coming soon, and will snap into M.2 or mini-PCIe expansion slots.

When you're ready to scale to production, we offer the SoM from the Dev Board and PCIe versions of the Accelerator for volume purchase. To further support your integrations, we'll be releasing the baseboard schematics for those who want to build custom carrier boards.

Our software tools are based around TensorFlow and TensorFlow Lite. TF Lite models must be quantized and then compiled with our toolchain to run directly on the Edge TPU. To help get you started, we're sharing over a dozen pre-trained, pre-compiled models that work with Coral boards out of the box, as well as software tools to let you re-train them.
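For illustration, here's a hedged Python sketch of full-integer post-training quantization with the TensorFlow Lite converter; the model path and calibration data are placeholders, and the exact converter settings the Edge TPU requires can vary by TensorFlow version, so follow the Coral docs for your setup. The resulting .tflite file is what you'd then feed to the Edge TPU compiler.

import numpy as np
import tensorflow as tf

# Placeholder calibration data; use a sample of real inputs in practice.
calibration_images = np.random.rand(100, 224, 224, 3).astype(np.float32)

def representative_data_gen():
    # The converter runs these samples through the model to pick
    # quantization ranges for the activations.
    for image in calibration_images:
        yield [image[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_saved_model('my_model/')  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict to int8 ops with uint8 I/O so the model can map onto the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('my_model_quant.tflite', 'wb') as f:
    f.write(converter.convert())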

For those building connected devices with Coral, our products can be used with Google Cloud IoT. Google Cloud IoT combines cloud services with an on-device software stack to allow for managed edge computing with machine learning capabilities.

Coral products are available today, along with product documentation, datasheets and sample code at g.co/coral. We hope you try our products during this public beta, and look forward to sharing more with you at our official launch.

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device, including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable system-on-module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US, with other countries to follow shortly.

For more product information, visit g.co/aiy and sign up to be notified as products become available.

Android Things Release Candidate

Posted by Dave Smith, Developer Advocate for IoT

Earlier this year at CES, we showcased consumer products powered by Android Things from partners like Lenovo, LG, JBL, iHome, and Sony. We are excited to see Android Things enable the wider developer ecosystem as well. Today we are announcing the final preview release of Android Things, Developer Preview 8, before the upcoming stable release.

Feature-complete SDK

Developer Preview 8 represents the final API surface exposed in the Android Things support library for the upcoming stable release. There will be no more breaking API changes before the stable v1.0 release of the SDK. For details on all the API changes included in DP8, see the release notes. Refer to the updated SDK reference to review the classes and methods in the final SDK.

This release also brings new features in the Android Things developer console to make building and managing production devices easier. Here are some notable updates:

Production-focused console enhancements

With an eye towards building and shipping production devices with the upcoming LTS release, we have made several updates to the Android Things developer console:

  • Enhanced OTA: Unpublish the current OTA build when issues are discovered in the field.
  • Visual storage layout: Configure the device storage allocated to apps and data for each build, and get an overview of how much storage your apps require.
  • Font/locale controls: Configure the set of supported fonts and locales packaged into each build.
  • Group sharing: Product sharing has been extended to include support for Google Groups.

App library

The new app library enables you to manage APKs more easily without the need to package them together in a separate zipped bundle. Track individual versions, review permissions, and share your apps with other console users. See the app library documentation for more details.

Permissions

On mobile devices, apps request permissions at runtime and the end user grants them. In earlier previews, Android Things granted these same permissions automatically to apps on device boot. Beginning in DP8, these permissions are granted using a new interface in the developer console, giving developers more control of the permissions used by the apps on their device.

This change does not affect development, as Android Studio grants all permissions by default. Developers using the command line can append the -g flag to the adb install command to get the same behavior. To test how apps on your device behave with certain permissions revoked, use the pm command:

$ adb shell pm [grant|revoke] <package-name> <permission-name>

App launch behavior

Embedded devices need to launch their primary application automatically after the device boots, and relaunch it if the app terminates unexpectedly. In earlier previews, the main app on the device could listen for a custom IOT_LAUNCHER intent to enable this behavior. Beginning in DP8, this category is replaced by the standard CATEGORY_HOME intent.

<activity android:name=".HomeActivity">
    ...

    <!-- Launch activity automatically on boot, relaunch on termination. -->
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.HOME"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>

Apps that contain an IOT_LAUNCHER intent filter will no longer be triggered on boot. Update your apps to use CATEGORY_HOME instead.

Feedback

Thanks to all of you in the developer community for sharing your feedback with us throughout the developer preview. Join Google's IoT Developers Community on Google+ to let us know what you're building with Android Things and how we can improve the platform in future releases to help you build connected devices at scale!

AIY Projects: Updated kits for 2018

Posted by Billy Rutledge, Director of AIY Projects

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We're seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven't yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We're taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice-controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app, and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box along with the USB connector cable and a pre-provisioned SD card. Users no longer need to download the software image and can get up and running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

AIY Voice Kit v2 includes Raspberry Pi Zero WH and pre-provisioned SD card

AIY Vision Kit v1.1 includes Raspberry Pi Zero WH, Raspberry Pi Camera v2, and pre-provisioned SD card

We're also introducing the AIY companion app for Android, available here in Google Play, to make wireless setup and configuration a snap. The kits still work with a monitor, keyboard, and mouse as an alternate path, and iOS and Chrome companion apps are coming soon.

The AIY website has been refreshed with improved documentation, making it easier for young makers to get started and learn as they build. It also includes a new AIY Models area showcasing a collection of neural networks designed to work with AIY kits. While we've solved one barrier to entry for the STEM audience, we recognize that there are many other things we can do to make our kits even more useful. We'll once again be at #MakerFaire events to gather feedback from our users, and in June we'll be working with teachers from all over the world at the ISTE conference in Chicago.

The new AIY Voice Kit and Vision Kit have arrived at Target Stores and Target.com (US) this month and we're working to make them globally available through retailers worldwide. Sign up on our mailing list to be notified when our products become available.

We hope you'll pick up one of the new AIY kits and learn more about how to build your own smart devices. Be sure to share your recipes on Hackster.io and social media using #aiyprojects.

IoT Developer Story: Deeplocal

Posted by Dave Smith, Developer Advocate for IoT

Deeplocal is a Pittsburgh-based innovation studio that makes inventions as marketing to help the world's most loved brands tell their stories. The team at Deeplocal built several fun and engaging robotics projects using Android Things. Leveraging the developer ecosystem surrounding the Android platform and the compute power of Android Things hardware, they were able to quickly and easily create robots powered by computer vision and machine learning.

DrawBot

DrawBot is a DIY drawing robot that transforms your selfies into physical works of art.

"The Android Things platform helped us move quickly from an idea, to prototype, to final product. Switching from phone apps to embedded code was easy in Android Studio, and we were able to pull in OpenCV modules, motor drivers, and other libraries as needed. The final version of our prototype was created two weeks after unboxing our first Android Things developer kit."

- Brian Bourgeois, Producer, Deeplocal

Want to build your own DrawBot? See the Hackster.io project for all the source code, schematics, and 3D models.

HandBot

HandBot is a robotic hand that uses computer vision and machine learning to recognize hand gestures and react to them.

"The Android Things platform made integration work for Handbot a breeze. Using TensorFlow, we were able to train a neural network to recognize hand gestures. Once this was created, we were able to use Android Things drivers to implement games in easy-to-read Android code. In a matter of weeks, we went from a fresh developer kit to competing against a robot hand in Rock, Paper, Scissors."

- Mike Derrick, Software Engineer, Deeplocal

Want to build your own HandBot? See the Hackster.io project for all the source code, schematics, and 3D models.

Visit the Google Hackster community to explore more inspiring ideas just like these, and join Google's IoT Developers Community on Google+ to get the latest platform updates, ask questions, and discuss ideas.

Actions On Google Best Practices Video Series

Posted by Ido Green (@greenido), Developer Advocate

We recently launched a new YouTube video series focused on teaching developers best practices for the Actions on Google platform.

Apps for the Google Assistant are the gateway for users to engage with your services through Google Home, Android phones, iPhones, and in the future, through every experience where the Google Assistant is available.

The goal of the video series is to show you how to get the most out of the Google Assistant platform. You will learn from Ido Green, Developer Advocate at Google, who touches on a range of best-practice topics.

Tune in to learn how to build or improve your apps for the Google Assistant so your users can benefit from more meaningful, interactive experiences.

And if you'd like to keep the conversation going, please join our developer community at https://g.co/actionsdev or @actionsongoogle.

See you!

Introducing our new developer YouTube Series: “Build Out”

Posted by Reto Meier & Colt McAnlis: Developer Advocates

Ever found yourself trying to figure out the right way to combine mobile, cloud, and web technologies, only to be lost in the myriad of available offerings? It can be challenging to know the best way to combine all the options to build products that solve problems for your users.

That's why we created Build Out, a new YouTube series where real engineers face off building fake products.

Each month, we (Reto Meier and Colt McAnlis) will present competing architectures to help show how Google's developer products can be combined to solve challenging problems for your users. Each solution incorporates a wide range of technologies, including Google Cloud, Android, Firebase, and TensorFlow (just to name a few).

Since we're engineers at heart, we enjoy a challenge—so each solution goes well past minimum viable product, and explores some of the more advanced possibilities available to solve the problem creatively.

Now, here's the interesting part. When we're done presenting, you get to decide which of us solved the problem better, by posting a comment to the video on YouTube. If you've already got a better solution—or think you know one—tell us about it in the comments, or respond with your own Build Out video to show us how it's done!

Episode #1: The Smart Garden.

In which we explore designs for gardens that care for themselves. Each design must be fully autonomous, learn from experience, and scale from a backyard plot up to large-scale commercial gardens.

You can get the full technical details on each Smart Garden solution in this Medium article, including alternative approaches and best practices.

You can also listen to the Build Out Rewound Podcast, to hear us discuss our choices.

Android Things Hackster Community

Posted by Dave Smith, Developer Advocate for IoT

Android Things makes building connected embedded devices easy by providing the same Android development tools, best-in-class Android framework, and Google APIs that make developers successful on mobile. Since the initial preview launch back in December, the community has turned some amazing ideas into exciting prototypes using the platform.

To empower these makers and developers using Android Things to share and learn from each other, we have partnered with Hackster.io to create a community where aspiring IoT developers can go to showcase their projects and get inspired by the work of others. Hackster.io is a community of 200,000 engineers and developers dedicated to building internet-connected hardware projects. They also seek to educate and challenge members through live workshops and design contests.

We are eager to see the projects that you come up with. More importantly, we're excited to see how your work can inspire other developers to create something great with Android Things. Visit our Hackster.io community to see the amazing projects others have already built and join the community today!

Android Things Webinar

We will be hosting a webinar in cooperation with Hackster.io on July 7th, 2017 at 10AM PST, titled Bootstrapping IoT Products with Android Things. You will learn how we have designed Android Things to address many of the pain points developers experience when building IoT products, and you'll have the opportunity to send in questions about the platform and ecosystem. Register today to join us for this exciting event!