Monthly Archives: November 2017

Precious cargo: Securing containers with Kubernetes Engine 1.8



With every new release of Kubernetes and Google Kubernetes Engine, we add new security features, strengthen existing security controls and move to stronger default configurations. We strive to improve Kubernetes security in general, and to make Kubernetes Engine more secure by default so that you don’t have to apply these configurations yourself.

With the speed of development in Kubernetes, there are often new features and security configurations for you to know about. This post will guide you through implementing our current guidance for hardening your Kubernetes Engine cluster. If you’re feeling adventurous, we’ll also discuss new security features that you can test on alpha clusters (which are not recommended for production use).

Security best practices for your Kubernetes cluster

When running a Kubernetes cluster, there are several best practices we recommend you follow:
  •  Use least privilege service accounts on your nodes
  •  Disable the Kubernetes web UI 
  •  Disable legacy authorization (now disabled by default for new clusters in Kubernetes 1.8)

Before you can put these practices in place, you’ll need to set a few environment variables:
# Your project ID
PROJECT_ID=
# Your zone, e.g., us-west1-c
ZONE=
# New service account we will create. Can be any string that isn't an existing service account, e.g., min-priv-sa
SA_NAME=
# Name for the cluster we will create or modify, e.g., example-secure-cluster
CLUSTER_NAME=
# Name for a node pool we will create. Can be any string that isn't an existing node pool, e.g., example-node-pool
NODE_POOL=

Use least privilege service accounts on your nodes


The principle of least privilege helps to reduce the "blast radius" of a potential compromise, by granting each component only the minimum permissions required to perform its function. Should one component become compromised, least privilege makes it much more difficult to chain attacks together and escalate permissions.

Each Kubernetes Engine node has a Service Account associated with it. You’ll see the Service Account listed in the IAM section of the Cloud Console as “Compute Engine default service account.” This account has broad access by default, making it useful to a wide variety of applications, but it has more permissions than you need to run your Kubernetes Engine cluster.

We recommend you create and use a minimally privileged service account to run your Kubernetes Engine Cluster instead of the Compute Engine default service account.

Kubernetes Engine requires, at a minimum, the service account to have the monitoring.viewer, monitoring.metricWriter, and logging.logWriter roles.

The following commands will create a GCP service account for you with the minimum permissions required to operate Kubernetes Engine:

gcloud iam service-accounts create "${SA_NAME}" \
  --display-name="${SA_NAME}"

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/logging.logWriter

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/monitoring.metricWriter

gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/monitoring.viewer

# If your cluster already exists, you can now create a new node pool with this new service account.
gcloud container node-pools create "${NODE_POOL}" \
  --service-account="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --cluster="${CLUSTER_NAME}"
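
If you're switching an existing cluster over, you'll also want to migrate running workloads onto the new node pool before removing the old one. Here's a minimal sketch, assuming the old pool is the default one named "default-pool":

# Cordon and drain the old nodes so workloads reschedule onto the new pool
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl cordon "$node"
  kubectl drain "$node" --force --delete-local-data --ignore-daemonsets
done

# Once everything has moved, delete the old pool
gcloud container node-pools delete default-pool --cluster="${CLUSTER_NAME}"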

If you need your Kubernetes Engine cluster to have access to other Google Cloud services, we recommend that you create an additional service account with the roles your workloads require, and provision its credentials to those workloads via Kubernetes secrets, rather than re-use the node service account.
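
For example, here's a hypothetical sketch of provisioning a dedicated service account's credentials to your workloads as a Kubernetes secret. The account name app-sa is illustrative only:

# Export a key for the dedicated service account ("app-sa" is hypothetical)
gcloud iam service-accounts keys create key.json \
  --iam-account="app-sa@${PROJECT_ID}.iam.gserviceaccount.com"

# Store it as a Kubernetes secret that your Pods can mount as a volume
kubectl create secret generic app-sa-key --from-file=key.json

# Don't leave the key lying around on disk
rm key.json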

Note: We’re currently designing a system to make obtaining GCP credentials in your Kubernetes cluster much easier; it will completely replace this workflow. Join the Kubernetes Container Identity Working Group to participate.

Disable the Kubernetes Web UI

We recommend you disable the Kubernetes Web UI when running on Kubernetes Engine. The Kubernetes Web UI (aka KubernetesDashboard) is backed by a highly privileged Kubernetes Service Account. The Cloud Console provides much of the same functionality, so you don't need these permissions if you're running on Kubernetes Engine.

The following command disables the Kubernetes Web UI:
gcloud container clusters update "${CLUSTER_NAME}" \
    --update-addons=KubernetesDashboard=DISABLED

Disable legacy authorization

Starting with Kubernetes 1.8, Attribute-Based Access Control (ABAC) is disabled by default in Kubernetes Engine. Role-Based Access Control (RBAC) was released as beta in Kubernetes 1.6, and ABAC was kept enabled until 1.8 to give users time to migrate. RBAC has significant security advantages and is now stable, so it’s time to disable ABAC. If you're still relying on ABAC, review the Prerequisites for using RBAC before continuing. If you upgraded your cluster from an older version and are using ABAC, you should update your access controls configuration:
gcloud container clusters update "${CLUSTER_NAME}" \
  --no-enable-legacy-authorization
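
With legacy authorization off, RBAC governs all access to the Kubernetes API. As a point of reference, here's a minimal RBAC sketch that grants a hypothetical user read-only access to Pods in the default namespace:

kubectl apply -f - <<EOF
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: alice@example.com  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF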

To create a new cluster with all of the above recommendations, run:
gcloud container clusters create "${CLUSTER_NAME}" \
  --service-account="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --no-enable-legacy-authorization \
  --disable-addons=KubernetesDashboard
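
To sanity-check the result, you can inspect the relevant fields of the cluster resource (the field names below assume the current v1 API):

gcloud container clusters describe "${CLUSTER_NAME}" \
  --zone="${ZONE}" \
  --format="yaml(legacyAbac, addonsConfig.kubernetesDashboard)"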


Create a cluster network policy


In addition to the aforementioned best practices, we recommend you create network policies to control the communication between your cluster's Pods and Services. Kubernetes Engine's Network Policy enforcement, currently in beta, makes it much more difficult for attackers to propagate inside your cluster. You can also use the Kubernetes Network Policy API to create Pod-level firewall rules in Kubernetes Engine. These firewall rules determine which Pods and Services can access one another inside your cluster.

To enable network policy enforcement when creating a new cluster, specify the --enable-network-policy flag using gcloud beta:

gcloud beta container clusters create "${CLUSTER_NAME}" \
  --project="${PROJECT_ID}" \
  --zone="${ZONE}" \
  --enable-network-policy

Once Network Policy has been enabled, you'll have to actually define a policy. Since this is specific to your exact topology, we can’t provide a detailed walkthrough. The Kubernetes documentation, however, has an excellent overview and walkthrough for a simple nginx deployment.
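
To give you a flavor, here's a minimal policy modeled on that nginx walkthrough. It selects Pods labeled app: nginx and allows ingress only from Pods labeled access: "true"; all other traffic to those Pods is denied:

kubectl apply -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
EOF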

Note: Alpha and beta features such as Kubernetes Engine’s Network Policy API represent meaningful security improvements in the GKE APIs. Be aware that alpha and beta features are not covered by any SLA or deprecation policy, and may be subject to breaking changes in future releases. We don't recommend you use these features for production clusters.

Closing thoughts


Many of the same lessons we learned from traditional information security apply to containers, Kubernetes and Kubernetes Engine; we just have new ways to apply them. Adhere to least privilege, minimize your attack surface by disabling legacy or unnecessary functionality, and, most traditional of all, write good firewall policies. To learn more, visit the Kubernetes Engine webpage and documentation. If you’re just getting started with containers and Google Cloud Platform (GCP), be sure to sign up for a free trial.

Please Welcome Diane Bryant to Google Cloud

I am happy and excited to announce that Diane Bryant, former Group President at Intel, will be joining Google Cloud as our Chief Operating Officer. I can’t think of a person with more relevant experience and talents. She is an engineer with tremendous business focus and an outstanding thirty-year career in technology.

Most recently, Diane was head of Intel’s Data Center Group, which generated $17 billion in revenue in 2016. Over her five years as Group President, Diane expanded the business to additionally focus on pervasive cloud computing, network virtualization and the adoption of artificial intelligence solutions. Previously, Bryant was Intel’s Corporate Vice President and Chief Information Officer, where she was responsible for corporate-wide information technology solutions and services.  

Diane serves on the board of United Technologies. Throughout her career, Diane has worked to mentor and sponsor women in technology.

Google Cloud is the most technologically advanced, most highly available, and most open cloud in the world. We are growing at an extraordinary rate as we enable businesses to become smarter with data, increase their agility, collaborate and secure their information. Diane’s strategic acumen, technical knowledge and client focus will prove invaluable as we accelerate the scale and reach of Google Cloud.

I am personally looking forward to working closely with Diane Bryant as we enter what promises to be a great 2018 for Google Cloud.



Data Journalism Awards 2018: call for entries

Data Journalism—the skill of combining reporting with data—is becoming an increasingly important part of every journalist’s toolkit. That’s not just anecdotal: a recent study commissioned by the Google News Lab found that half of all news outlets have at least one dedicated data journalist.


So, for the seventh consecutive year, we’re proud to support the 2018 Data Journalism Awards.

These are the only global awards recognizing work that brings together data, visualization and storytelling. It’s a part of our commitment to supporting innovative journalism around the world.


Data journalists, editors and publishers are encouraged to submit their work for consideration using this form by March 29, 2018. But don’t get too comfortable with that deadline; early applications are encouraged.


Last year there were 573 entries from 51 countries across five continents. Past winners of the $1,801 prizes include BuzzFeed, The Wall Street Journal, The New York Times, FiveThirtyEight, ProPublica, and La Nación, as well as smaller organizations such as Rutas Del Conflicto, Civio Foundation and Convoca. And if you’re wondering why the prize is $1,801, it’s because William Playfair invented the pie chart in 1801.


Aimed at newsrooms and journalists in organizations of all sizes, the 2018 awards will recognize the best work in key categories, including:

  • Data visualization of the year
  • Investigation of the year
  • News data app of the year
  • Data journalism website of the year
  • Best use of data in a breaking news story, within first 36 hours
  • Innovation in data journalism
  • Open data award
  • Small newsrooms (one or more winners)
  • Student and young data journalist of the year
  • Best individual and team portfolio

The competition is organized by the Global Editors Network: a cross-platform community of editors-in-chief and media professionals committed to high-quality journalism, with the support of Google and the Knight Foundation.


The Data Journalism Awards offer another way to foster innovation by partnering with the news industry, in addition to our efforts with the Digital News Initiative. A jury of peers from the publishing community will decide on the winners.


Winners will be announced in May 2018 at a ceremony in Lisbon. Good luck!


Poly API: 3D objects on demand

Today we're making it even easier for developers to find and use 3D objects and scenes for their VR and AR apps with the Poly API.

Poly lets creators and developers browse, find, and download 3D objects and scenes for use in their apps. It’s fully integrated with Blocks and Tilt Brush, and even allows you to upload your own models, so there are plenty of options to choose from.

We want to make the process of finding the right 3D assets for your projects faster and more flexible. With the new Poly API, you can access our growing collection of Creative Commons 3D assets and interact directly with Poly to search, download, and import objects dynamically across desktop, mobile, virtual reality, and augmented reality.

If you’re using Unity or Unreal Engine to develop your apps, we also created the Poly Toolkit, an evolution of Tilt Brush Toolkit. With it, you can import 3D objects and scenes from Poly directly into a project, thanks to the API.

And with samples for both ARCore and ARKit, our developer site provides you with everything you need to use Poly assets in your AR experiences.

Credit: Cloister Gardens by Bruno Oliveira, shown in Poly Toolkit

To put the Poly API and Toolkit to the test, we partnered with a few talented developers to show just how compelling their apps can become with a Poly API integration. Check out how Mindshow, TheWaveVR, Unity EditorXR, and many others have already integrated with the API:


See the Poly API in action in apps from Normal, TheWaveVR, Mindshow, AnimVR, Unity EditorXR, High Fidelity, and Modbox.

Starting today, you can find all types of assets for your applications, and easily search for remixable, free assets licensed under a Creative Commons license by keyword, category, format, popularity or date uploaded. You can even filter by model complexity, or give people a personalized experience by letting them sign into your app with their Google account to access any assets they’ve uploaded or liked on Poly.
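
If you'd rather call the REST API directly, a search is a single HTTP request. Here's a rough sketch; the query parameters reflect the v1 assets.list method, and you'll need to supply your own API key:

# Search for curated OBJ-format assets matching the keyword "tree"
curl "https://poly.googleapis.com/v1/assets?keywords=tree&format=OBJ&curated=true&key=${API_KEY}"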

Ready to get started? Visit our developer page to see instructions on how to use the API and download our sample apps and toolkits.

Introducing AIY Vision Kit: Make devices that see

Earlier this year, we kicked off AIY Projects to help makers experiment with and learn about artificial intelligence. Our first release, AIY Voice Kit, was a huge hit! People built many amazing projects, showing what was possible with voice recognition in maker projects.

Today, we’re excited to announce our latest AIY Project, the Vision Kit. It’s our first project that features on-device neural network acceleration, providing powerful computer vision without a cloud connection.  

AIY Vision Kit's do-it-yourself assembly

What’s in the AIY Vision Kit?

Like AIY Voice Kit (released in May), Vision Kit is a do-it-yourself build. You’ll need to add a Raspberry Pi Zero W, a Raspberry Pi Camera, an SD card and a power supply, which must be purchased separately.

The kit includes a cardboard outer shell, the VisionBonnet circuit board, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, a tripod mounting nut and other connecting components.

AIY Vision Kit components

The main component of AIY Vision Kit is the VisionBonnet board for Raspberry Pi. The bonnet features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural network models on-device.

AIY Vision Kit's VisionBonnet accessory for Raspberry Pi

The provided software includes three TensorFlow-based neural network models for different vision applications: one based on MobileNets that can recognize a thousand common objects, a second that can recognize faces and their expressions, and a third that detects people, cats and dogs. We've also included a tool to compile models for Vision Kit, so you can train and retrain models with TensorFlow on your workstation or any cloud service.

We also provide a Python API that gives you the ability to change the RGB button colors, adjust the piezo element sounds and access the four GPIO pins.

With all of these features, you can explore many creative builds that use computer vision. For example, you can:


  • Identify all kinds of plant and animal species

  • See when your dog is at the back door

  • See when your car left the driveway

  • See that your guests are delighted by your holiday decorations

  • See when your little brother comes into your room (sound the alarm!)

Where can you get it?

AIY Vision Kit will be available in stores in early December. Pre-order your kit today through Micro Center.

** Please note that full assembly requires Raspberry Pi Zero W, Raspberry Pi Camera and a micro SD card, which must be purchased separately.

We're listening

Please let us know how we can improve on future kits and show us what you’re building by using the #AIYProjects hashtag on social media.

We’re excited to see what you build!



The new maker toolkit: IoT, AI and Google Cloud Platform

Voice interaction is everywhere these days—via phones, TVs, laptops and smart home devices that use technology like the Google Assistant. And with the availability of maker-friendly offerings like Google AIY’s Voice Kit, the maker community has been getting in on the action and adding voice to their Internet of Things (IoT) projects.

As avid makers ourselves, we wrote an open-source, maker-friendly tutorial to show developers how to piggyback on a Google Assistant-enabled device (Google Home, Pixel, Voice Kit, etc.) and add voice to their own projects. We also created an example application to help you connect your project with GCP-hosted web and mobile applications, or tap into sophisticated AI frameworks that can provide more natural conversational flow.

Let’s take a look at what this tutorial, and our example application, can help you do.

Particle Photon: the brains of the operation

The Photon microcontroller from Particle is an easy-to-use IoT prototyping board that comes with onboard Wi-Fi and USB support, and is compatible with the popular Arduino ecosystem. It’s also a great choice for internet-enabled projects: every Photon gets its own webhook in Particle Cloud, and Particle provides a host of additional integration options with its web-based IDE, JavaScript SDK and command-line interface. Most importantly for the maker community, Particle Photons are super affordable, starting at just $19.
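
To give you a feel for the tooling, here's a hypothetical exchange using the Particle CLI. The device name my-photon and the cloud function led are illustrative, not part of any real project:

# Install and authenticate the Particle CLI
npm install -g particle-cli
particle login

# Invoke a cloud function exposed by the device
# ("my-photon" and "led" are hypothetical names)
particle call my-photon led on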


Connecting the Google Assistant and Photon: Actions on Google and Dialogflow

The Google Assistant (via Google Home, Pixel, Voice Kit, etc.) responds to your voice input, and the Photon (through Particle Cloud) reacts to your application’s requests (in this case, turning an LED on and off). But how do you tie the two together? Let’s take a look at all the moving parts:


  • Actions on Google is the developer platform for the Google Assistant. With Actions on Google, developers build apps to help answer specific queries and connect users to products and services. Users interact with apps for the Assistant through a conversational, natural-sounding back-and-forth exchange, and your Action passes those user requests on to your app.

  • Dialogflow (formerly API.AI) lets you build even more engaging voice and text-based conversational interfaces powered by AI, and sends out request data via a webhook.

  • A server (or service) running Node.js handles the resulting user queries.


Along with some sample applications, our guide includes a Dialogflow agent, which lets you parse queries and route actions back to users (by voice and/or text) or to other applications. Dialogflow provides a variety of interface options, from an easy-to-use web-based GUI to a robust Node.js-powered SDK for interacting with both your queries and the outside world. In addition, its powerful machine learning tools add intelligence and natural language processing: your applications can learn queries and intents over time, exposing more powerful options and producing better results along the way. (The recently announced Dialogflow Enterprise Edition offers greater flexibility and support to meet the needs of large-scale businesses.)


Backend infrastructure: GCP

It’s a no-brainer to build your IoT apps on a Google Cloud Platform (GCP) backend, as you can use a single Google account to sign into your voice device, create Actions on Google apps and Dialogflow agents, and host the web services. To help get you up and running, we developed two sample web applications based on different GCP technologies that you can use as inspiration when creating a voice-powered IoT app:


  • Cloud Functions for Firebase. If your goal is quick deployment and iteration, Cloud Functions for Firebase is a simple, low-cost and powerful option—even if you don’t have much server-side development experience. It integrates quickly and easily with the other tools used here. Dialogflow, for example, now allows you to drop Cloud Functions for Firebase code directly into its graphical user interface. (A deployment sketch follows this list.)

  • App Engine. For those of you with more development experience and/or curiosity, App Engine is just as easy to deploy and scale, but includes more options for integrations with your other applications, additional programming language/framework choices, and a host of third-party add-ons. App Engine is a great choice if you already have a Node.js application to which you want to add voice actions, you want to tie into more of Google’s machine learning services, or you want to get deeper into device connection and management.
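
If you choose the Cloud Functions for Firebase route, deployment takes just a few commands. Here's a minimal sketch, assuming you've already initialized a Firebase project containing your webhook code:

# Install the Firebase CLI and sign in
npm install -g firebase-tools
firebase login

# Deploy just your functions
firebase deploy --only functions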


Next steps

As makers, we’ve only just scratched the surface of what we can do with new tools like IoT, AI and cloud. Check out our full tutorials, and grab the code on GitHub. With these examples to build from, we hope we’ve made it easier for you to add voice powers to your maker project. For some extra inspiration, check out what other makers have built with AIY Voice Kit. And for even more ways to add machine learning to your maker project, check out the AIY Vision Kit, which just went on pre-sale today.

We can’t wait to see what you build!

Introducing the AIY Vision Kit: Add computer vision to your maker projects

Posted by Billy Rutledge, Director, AIY Projects

Since we released AIY Voice Kit, we've been inspired by the thousands of amazing builds coming in from the maker community. Today, the AIY Team is excited to announce our next project: the AIY Vision Kit — an affordable, hackable, intelligent camera.

Much like the Voice Kit, our Vision Kit is easy to assemble and connects to a Raspberry Pi computer. Based on user feedback, this new kit is designed to work with the smaller Raspberry Pi Zero W computer and runs its vision algorithms on-device so there's no cloud connection required.

Build intelligent devices that can perceive, not just see

The kit materials list includes a VisionBonnet, a cardboard outer shell, an RGB arcade-style button, a piezo speaker, a macro/wide lens kit, flex cables, standoffs, a tripod mounting nut and connecting components.

The VisionBonnet is an accessory board for Raspberry Pi Zero W that features the Intel® Movidius™ MA2450, a low-power vision processing unit capable of running neural networks. This will give makers visual perception instead of image sensing. It can run at speeds of up to 30 frames per second, providing near real-time performance.

Bundled with the software image are three neural network models:

  • A model based on MobileNets that can recognize a thousand common objects.
  • A model for face detection capable of not only detecting faces in the image, but also scoring facial expressions on a "joy scale" that ranges from "sad" to "laughing."
  • A model for the important task of discerning between cats, dogs and people.

For those of you who have your own models in mind, we've included the original TensorFlow code and a compiler. Take a new model you have (or train) and run it on the Intel® Movidius™ MA2450.

Extend the kit to solve your real-world problems

The AIY Vision Kit is completely hackable:

  • Want to prototype your own product? The Vision Kit and the Raspberry Pi Zero W can fit into any number of tiny enclosures.
  • Want to change the way the camera reacts? Use the Python API to write new software to customize the RGB button colors, piezo element sounds and GPIO pins.
  • Want to add more lights, buttons, or servos? Use the 4 GPIO expansion pins to connect your own hardware.

We hope you'll use it to solve interesting challenges, such as:

  • Build "hotdog/not hotdog" (or any other food recognizer)
  • Turn music on when someone walks through the door
  • Send a text when your car leaves the driveway
  • Open the dog door when she wants to get back in the house

Ready to get your hands on one?

AIY Vision Kits will be available in December, with online pre-sales at Micro Center starting today.

*** Please note that AIY Vision Kit requires Raspberry Pi Zero W, Raspberry Pi Camera V2 and a micro SD card, which must be purchased separately.

Tell us what you think!

We're listening — let us know how we can improve our kits and share what you're making using the #AIYProjects hashtag on social media. We hope AIY Vision Kit inspires you to build all kinds of creative devices.