Tag Archives: Internet of Things

Android Things Developer Preview 4

Posted by Wayne Piekarski, Developer Advocate for IoT

Today, we are releasing Developer Preview 4 (DP4) of Android Things, bringing new supported hardware, features, and bug fixes to the platform. The goal of Android Things is to enable Android developers to quickly build smart devices and seamlessly scale from prototype to production using a Board Support Package (BSP) provided by Google.
AIY Projects and Google Assistant SDK
Earlier this month, we announced a partnership with AIY Projects, enabling Android Things support for the Raspberry Pi-based Voice Kit. And now with DP4, the necessary drivers are provided to support the Google Assistant SDK on all Android Things certified development boards. Learn more from the instructions in the sample.
New hardware and driver support
We are now adding a new Board Support Package for the NXP i.MX7D, which offers higher performance than the i.MX6UL while still using a low-power System on Module (SoM) design. Support for the Inter-IC Sound (I2S) bus has been added to the Peripheral I/O API, enabling audio drivers to be written in user space for sound hardware connected via an I2S bus. The AIY Voice Kit sample demonstrates how to use I2S support for audio. We have also given developers the ability to enable and disable Bluetooth profiles at runtime.
NXP i.MX7D System on Module
Production hardware sample
Android Things is very focused on helping developers build production-ready devices that they can bring to market. This means building custom hardware in addition to the software running on the Android Things system-on-module (SoM). As part of this effort, we have released Edison Candle, the first in a series of production samples showcasing hardware and software designed to work together. The code is hosted on GitHub, and the hardware design files are on CircuitHub, where they can be easily fabricated by many third-party companies.
Edison Candle sample with source and schematics
Thank you to all the developers who submitted feedback for the previous developer previews. Please continue sending us your feedback by filing bug reports and feature requests, and by asking questions on Stack Overflow. To download images for DP4, visit the Android Things download page and find the changes in the release notes. You can also join Google's IoT Developers Community on Google+, a great resource for getting updates and discussing ideas, with over 4,900 members. We also have a number of great talks about Android Things and IoT at Google I/O, which you can watch via live stream or as recordings later.




Android Things Developer Preview 3

Posted by Wayne Piekarski, Developer Advocate for IoT

Today, we are releasing Developer Preview 3 (DP3) of Android Things, bringing new features and bug fixes to the platform. This preview is part of our commitment to provide regular updates to developers who are building Internet of Things (IoT) products with our platform. Android developers can quickly build smart devices using Android APIs and Google services, while staying secure with updates directly from Google. The System-on-Module (SoM) architecture supports prototyping with development boards and then scaling to large production runs, all while using the same Board Support Package (BSP) from Google.

Android Bluetooth APIs


DP3 now includes support for all Android Bluetooth APIs in android.bluetooth and android.bluetooth.le, across all supported Android Things hardware. You can now write code that interacts with both Bluetooth classic and low energy (LE) devices just as you would on a regular Android phone. Existing samples such as Bluetooth LE advertisements and scanning and Bluetooth LE GATT can be used unmodified on Android Things. We have also provided two new samples, Bluetooth LE GATT server and Bluetooth audio sink.
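As a quick illustration, a minimal LE scan on Android Things looks exactly like it does on a phone. The sketch below uses only the standard android.bluetooth and android.bluetooth.le classes; the log tag is a placeholder, the scan is unfiltered, and the usual Bluetooth permissions are assumed to be granted:

    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.le.BluetoothLeScanner;
    import android.bluetooth.le.ScanCallback;
    import android.bluetooth.le.ScanResult;
    import android.util.Log;

    public class BleScanExample {
        private static final String TAG = "BleScan";  // placeholder log tag

        // Logs every advertisement the scanner reports.
        private final ScanCallback scanCallback = new ScanCallback() {
            @Override
            public void onScanResult(int callbackType, ScanResult result) {
                Log.i(TAG, "Found " + result.getDevice().getAddress()
                        + " rssi=" + result.getRssi());
            }
        };

        public void startScan() {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            if (adapter == null || !adapter.isEnabled()) {
                return;  // Bluetooth is unavailable or not yet enabled
            }
            BluetoothLeScanner scanner = adapter.getBluetoothLeScanner();
            scanner.startScan(scanCallback);  // unfiltered scan for all advertisers
        }

        public void stopScan() {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            if (adapter != null && adapter.getBluetoothLeScanner() != null) {
                adapter.getBluetoothLeScanner().stopScan(scanCallback);
            }
        }
    }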

USB Host support


Android 3.1 and later support USB host mode, which allows a regular user-space application to communicate with USB devices without root privileges or additional support from the Linux kernel. This functionality is now supported in Android Things, enabling interfacing with custom USB devices. Any existing code that uses USB host will work on Android Things, and a new USB Enumerator sample demonstrates how to iterate over each connected USB device and print its interfaces and endpoints.
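To give a feel for what that enumeration looks like, here is a rough sketch written against the standard android.hardware.usb classes. It is an illustration rather than the USB Enumerator sample's exact code, and the log tag is a placeholder:

    import android.content.Context;
    import android.hardware.usb.UsbDevice;
    import android.hardware.usb.UsbInterface;
    import android.hardware.usb.UsbManager;
    import android.util.Log;

    import java.util.Map;

    public class UsbEnumeratorExample {
        private static final String TAG = "UsbEnumerator";  // placeholder log tag

        // Prints every attached USB device with its interfaces and endpoints.
        public static void listDevices(Context context) {
            UsbManager manager = (UsbManager) context.getSystemService(Context.USB_SERVICE);
            Map<String, UsbDevice> devices = manager.getDeviceList();
            for (UsbDevice device : devices.values()) {
                Log.i(TAG, "Device " + device.getDeviceName()
                        + " vendorId=" + device.getVendorId()
                        + " productId=" + device.getProductId());
                for (int i = 0; i < device.getInterfaceCount(); i++) {
                    UsbInterface intf = device.getInterface(i);
                    Log.i(TAG, "  Interface " + intf.getId()
                            + " endpoints=" + intf.getEndpointCount());
                    for (int e = 0; e < intf.getEndpointCount(); e++) {
                        Log.i(TAG, "    Endpoint address="
                                + intf.getEndpoint(e).getAddress());
                    }
                }
            }
        }
    }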

Feedback


Once again, thank you to all the developers who submitted feedback for the previous developer previews. Please continue to send us your feedback by filing bug reports and feature requests, and ask any questions on Stack Overflow. To download images for Developer Preview 3, visit the Android Things download page, and find the changes in the release notes. You can also join Google's IoT Developers Community on Google+, a great resource for keeping up to date and discussing ideas, with over 4,100 members.

On-Device Machine Intelligence



To build the cutting-edge technologies that enable conversational understanding and image recognition, we often apply combinations of machine learning technologies such as deep neural networks and graph-based machine learning. However, the machine learning systems that power most of these applications run in the cloud, are computationally intensive, and have significant memory requirements. What if you want machine intelligence to run on your personal phone or smartwatch, or on IoT devices, regardless of whether they are connected to the cloud?

Yesterday, we announced the launch of Android Wear 2.0, along with brand-new wearable devices, which will run Google's first entirely “on-device” ML technology for powering smart messaging. This on-device ML system, developed by the Expander research team, enables technologies like Smart Reply to be used in any application, including third-party messaging apps, without ever having to connect to the cloud, so you can now respond to incoming chat messages directly from your watch with a tap.
The research behind this began last year, while our team was developing the machine learning systems that enable conversational understanding in Allo and Inbox. The Android Wear team reached out to us to ask whether it would be possible to deploy this Smart Reply technology directly onto a smart device. Because of the limited computing power and memory on smart devices, we quickly realized that simply porting the existing cloud-based system was not an option. Our product manager, Patrick McGregor, realized that this presented a unique challenge and an opportunity for the Expander team to return to the drawing board and design a completely new, lightweight machine learning architecture, not only to enable Smart Reply on Android Wear, but also to power a wealth of other on-device mobile applications. Together with Tom Rudick, Nathan Beach, and other colleagues from the Android Wear team, we set out to build the new system.

Learning with Projections
A simple strategy for building lightweight conversational models might be to create a small dictionary of common rules (input → reply mappings) on the device and use a naive look-up strategy at inference time. This can work for simple prediction tasks involving a small set of classes and a handful of features (such as binary sentiment classification from text: “I love this movie” conveys a positive sentiment, whereas “The acting was horrible” is negative), but it does not scale to complex natural language tasks involving rich vocabularies and the wide language variability observed in chat messages. On the other hand, machine learning models such as recurrent neural networks (for example, LSTMs), in conjunction with graph learning, have proven to be extremely powerful tools for complex sequence learning in natural language understanding tasks, including Smart Reply. However, compressing such rich models to fit in device memory and produce robust predictions at low computation cost (rapidly, on demand) is extremely challenging. Early experiments with restricting the model to predict only a small handful of replies, or with other techniques like quantization or character-level models, did not produce useful results.

Instead, we built a different solution for the on-device ML system. We first use a fast, efficient mechanism to group similar incoming messages and project them to similar (“nearby”) bit vector representations. While there are several ways to perform this projection step, such as using word embeddings or encoder networks, we employ a modified version of locality-sensitive hashing (LSH) to reduce the dimensionality from millions of unique words to a short, fixed-length sequence of bits. This allows us to compute a projection for an incoming message very quickly, on the fly, and with a small memory footprint on the device, since we do not need to store the incoming messages, word embeddings, or even the full model used for training.
Projection step: Similar messages are grouped together and projected to nearby vectors. For example, the messages "hey, how's it going?" and "How's it going buddy?" share similar content and might be projected to the same vector 11100011. Another related message “Howdy, everything going well?” is mapped to a nearby vector 11100110 that differs only in 2 bits.
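To make the projection idea concrete, here is a toy SimHash-style sketch that maps a message to a short bit signature, so that messages sharing many words tend to land on nearby bit vectors like the ones in the example above. It illustrates the locality-sensitive hashing principle only; it is not the Expander team's actual projection model, and the class name, bit width, and whitespace tokenization are assumptions made purely for illustration:

    import java.util.Objects;

    // Toy SimHash-style projection, for illustration only.
    public class MessageProjection {
        private final int numBits;

        public MessageProjection(int numBits) {
            this.numBits = numBits;  // length of the bit signature (at most 64 here)
        }

        // Projects a message to a fixed-length bit string. Messages with many
        // shared tokens accumulate the same +/-1 weights, so their signatures
        // differ in only a few bits.
        public String project(String message) {
            String[] tokens = message.toLowerCase().split("\\W+");
            long signature = 0L;
            for (int bit = 0; bit < numBits; bit++) {
                int sum = 0;
                for (String token : tokens) {
                    if (token.isEmpty()) continue;
                    int h = Objects.hash(token, bit);   // pseudo-random weight per (token, bit)
                    sum += ((h & 1) == 0) ? 1 : -1;
                }
                if (sum >= 0) {
                    signature |= (1L << bit);
                }
            }
            StringBuilder out = new StringBuilder();
            for (int bit = numBits - 1; bit >= 0; bit--) {
                out.append((signature >> bit) & 1);
            }
            return out.toString();
        }

        public static void main(String[] args) {
            MessageProjection projection = new MessageProjection(8);
            System.out.println(projection.project("hey, how's it going?"));
            System.out.println(projection.project("How's it going buddy?"));
        }
    }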
Next, our system takes the incoming message along with its projections and jointly trains a “message projection model” that learns to predict likely replies using our semi-supervised graph learning framework. The graph learning framework enables training a robust model by combining semantic relationships from multiple sources (message/reply interactions, word/phrase similarity, and semantic cluster information), learning useful projection operations that can be mapped to good reply predictions.
Learning step: (Top) Messages along with projections and corresponding replies, if available, are used in a machine learning framework to jointly learn a “message projection model”. (Bottom) The message projection model learns to associate replies with the projections of the corresponding incoming messages. For example, the model projects two different messages “Howdy, everything going well?” and “How’s it going buddy?” (bottom center) to nearby bit vectors and learns to map these to relevant replies (bottom right).
It’s worth noting that while the message projection model can be trained using complex machine learning architectures and the power of the cloud, as described above, the model itself resides and performs inference completely on device. Apps running on the device can pass a user’s incoming messages and receive reply predictions from the on-device model without data leaving the device. The model can also be adapted to cater to the user’s writing style and individual preferences to provide a personalized experience.
Inference step: The model applies the learned projections to an incoming message (or sequence of messages) and suggests relevant and diverse replies. Inference is performed on the device, allowing the model to adapt to user data and personal writing styles.
To get the on-device system to work out of the box, we had to make a few additional improvements, such as speeding up computations on the device and generating rich, diverse replies from the model. A forthcoming scientific publication will describe the on-device machine learning work in more detail.

Converse from Your Wrist
When we embarked on our journey to build this technology from scratch, we weren’t sure if the predictions would be useful or of sufficient quality. We’re quite surprised and excited about how well it works even on Android wearable devices with very limited computation and memory resources. We look forward to continuing to improve the models to provide users with more delightful conversational experiences, and we will be leveraging this on-device ML platform to enable completely new applications in the months to come.

You can now use this feature to respond to your messages directly from your Google watch, or any watch that runs Android Wear 2.0. It is already enabled in Google Hangouts, Google Messenger, and many third-party messaging apps. We also provide an API for developers of third-party Wear apps.
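For third-party apps, a plausible way to surface these suggestions is through the standard notification reply action; the minimal sketch below assumes an app that already posts messaging notifications and simply opts its reply action into system-generated replies. The result key, icon, and labels are placeholders, and this is an assumption about the integration path rather than a statement of the full Wear API surface:

    import android.app.Notification;
    import android.app.PendingIntent;
    import android.app.RemoteInput;

    public class SmartReplyAction {
        private static final String KEY_TEXT_REPLY = "key_text_reply";  // placeholder result key

        // Builds a reply action whose RemoteInput allows system-generated replies,
        // letting the watch offer suggested responses for the notification.
        public static Notification.Action buildReplyAction(PendingIntent replyIntent) {
            RemoteInput remoteInput = new RemoteInput.Builder(KEY_TEXT_REPLY)
                    .setLabel("Reply")
                    .build();
            return new Notification.Action.Builder(
                            android.R.drawable.ic_menu_send,  // placeholder icon
                            "Reply",
                            replyIntent)
                    .addRemoteInput(remoteInput)
                    .setAllowGeneratedReplies(true)  // opt in to generated (smart) replies
                    .build();
        }
    }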

Acknowledgements
On behalf of the Google Expander team, I would also like to thank the following people who helped make this technology a success: Andrei Broder, Andrew Tomkins, David Singleton, Mirko Ranieri, Robin Dua and Yicheng Fan.

Announcing the Google Internet of Things (IoT) Technology Research Award Pilot



Over the past year, Google engineers have experimented with and developed a set of building blocks for the Internet of Things - an ecosystem of connected devices, services, and “things” that promises direct and efficient support of one’s daily life. While there has been significant progress in this field, there remain substantial challenges in terms of (1) interoperability and a standardized modular systems architecture, (2) privacy, security, and user safety, and (3) how users interact with, manage, and control an ensemble of devices in this connected environment.

It is in this context that we are happy to invite university researchers1 to participate in the Internet of Things (IoT) Technology Research Award Pilot. This pilot provides selected researchers with in-kind gifts of Google IoT-related technologies (listed below), with the goal of fostering collaboration with the academic community on small-scale (~4-8 week) experiments and discovering what they can do with our software and devices.

We invite you to submit proposals in which Google IoT technologies are used to (1) explore interesting use cases and innovative user interfaces, (2) address technical challenges as well as interoperability between devices and applications, or (3) experiment with new approaches to privacy, safety and security. Proposed projects should make use of one or a combination of these Google technologies:
  • Google beacon platform - consisting of the open beacon format Eddystone and various client and cloud APIs, this platform allows developers to mark up the world and make their apps and devices work smarter by providing timely, contextual information.
  • Physical Web - based on the Eddystone URL beacon format, the Physical Web is an approach designed to allow any smart device to interact with real-world objects - a vending machine, a poster, a toy, a bus stop, a rental car - without having to download an app first.
  • Nearby Messages API - a publish-subscribe API that lets you pass small binary payloads between internet-connected Android and iOS devices, as well as with beacons registered with Google's proximity beacon service (see the sketch after this list).
  • Brillo & Weave - Brillo is an Android-based embedded OS that brings the simplicity and speed of mobile software development to IoT hardware to make it cost-effective to build a secure smart device, and to keep it updated over time. Weave is an open communications and interoperability platform for IoT devices that allows for easy connections to networks, smartphones (both Android and iOS), mobile apps, cloud services, and other smart devices.
  • OnHub router - a communication hub for the Internet of Things supporting Bluetooth® Smart Ready, 802.15.4 and 802.11a/b/g/n/ac. It also allows you to quickly create a guest network and control the devices you want to share (see On.Here).
  • Google Cloud Platform IoT Solutions - tools to scale connections, gather and make sense of data, and provide the reliable customer experiences that IoT hardware devices require.
  • Chrome Boxes & Kiosk Apps - provide custom full-screen apps for a purpose-built Chrome device, such as a guest registration desk, a library catalog station, or a point-of-sale system in a store.
  • Vanadium - an open-source framework designed to make it easier to develop secure, multi-device user experiences, with or without an Internet connection.
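As a small example of the Nearby Messages API mentioned above, the sketch below publishes a short payload and subscribes to messages from nearby devices and registered beacons once the Google API client connects. It is only a sketch: the payload, log tag, and class name are placeholders, and a real app would also handle connection failures and user opt-in:

    import android.content.Context;
    import android.os.Bundle;
    import android.util.Log;

    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.nearby.Nearby;
    import com.google.android.gms.nearby.messages.Message;
    import com.google.android.gms.nearby.messages.MessageListener;

    import java.nio.charset.StandardCharsets;

    public class NearbyMessagesExample implements GoogleApiClient.ConnectionCallbacks {
        private static final String TAG = "NearbyExample";  // placeholder log tag

        private final GoogleApiClient client;
        private final Message payload = new Message("hello".getBytes(StandardCharsets.UTF_8));

        private final MessageListener listener = new MessageListener() {
            @Override
            public void onFound(Message message) {
                Log.i(TAG, "Found: " + new String(message.getContent(), StandardCharsets.UTF_8));
            }

            @Override
            public void onLost(Message message) {
                Log.i(TAG, "Lost: " + new String(message.getContent(), StandardCharsets.UTF_8));
            }
        };

        public NearbyMessagesExample(Context context) {
            client = new GoogleApiClient.Builder(context)
                    .addApi(Nearby.MESSAGES_API)
                    .addConnectionCallbacks(this)
                    .build();
        }

        public void start() {
            client.connect();
        }

        @Override
        public void onConnected(Bundle connectionHint) {
            // Publish our payload and listen for messages from nearby devices and beacons.
            Nearby.Messages.publish(client, payload);
            Nearby.Messages.subscribe(client, listener);
        }

        @Override
        public void onConnectionSuspended(int cause) {
            // No-op for this sketch.
        }

        public void stop() {
            if (client.isConnected()) {
                Nearby.Messages.unpublish(client, payload);
                Nearby.Messages.unsubscribe(client, listener);
            }
            client.disconnect();
        }
    }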
Check out the Ubiquity Dev Summit playlist for more information on these platforms and their best practices.

Please submit your proposal here by February 29th in order to be considered for an award. Proposals will be reviewed by researchers and product teams within Google. In addition to looking for impact and interesting ideas, priority will be given to research that can make immediate use of the available technologies. Selected proposals will be notified by the end of March 2016. If selected, the award will be subject to Google's terms, and your use of Google technologies will be subject to the applicable Google terms of service.

Connecting our physical world to the Internet is a broad, long-term challenge, one we hope to address by working with researchers across many disciplines and work practices. We look forward to the collaborative opportunity provided by this pilot and to learning about the innovative applications you create with these new technologies.



1 The same eligibility conditions as for the Faculty Research Award Program apply - see here.