Tag Archives: artificial intelligence

Google Developers Launchpad introduces The Lever, sharing applied-Machine Learning best practices

Posted by Malika Cantor, Program Manager for Launchpad

The Lever is Google Developers Launchpad's new resource for sharing applied-Machine Learning (ML) content to help startups innovate and thrive. Operated by Launchpad, Google's global startup acceleration program, in partnership with experts and leaders across Google and Alphabet, The Lever will publish the Launchpad community's experiences of integrating ML into products, including case studies, insights from mentors, and best practices from both Google and global thought leaders.

Peter Norvig, Google ML Research Director, and Cassie Kozyrkov, Google Cloud Chief Decision Scientist, are editors of the publication. Hear from them and other Googlers on the importance of developing and sharing applied ML product and business methodologies:

Peter Norvig (Google ML Research, Director): "The software industry has had 50 years to perfect a methodology of software development. In Machine Learning, we've only had a few years, so companies need to pay more attention to the process in order to create products that are reliable, up-to-date, have good accuracy, and are respectful of their customers' private data."

Cassie Kozyrkov (Chief Decision Scientist, Google Cloud): "We live in exciting times where the contributions of researchers have finally made it possible for non-experts to do amazing things with Artificial Intelligence. Now that anyone can stand on the shoulders of giants, process-oriented avenues of inquiry around how to best apply ML are coming to the forefront. Among these is decision intelligence engineering: a new approach to ML, focusing on how to discover opportunities and build towards safe, effective, and reliable solutions. The world is poised to make data more useful than ever before!"

Clemens Mewald (Lead, Machine Learning X and TensorFlow X): "ML/AI has had a profound impact in many areas, but I would argue that we're still very early in this journey. Many applications of ML are incremental improvements on existing features and products. Video recommendations are more relevant, ads have become more targeted and personalized. However, as Sundar said, AI is more profound than electricity (or fire). Electricity enabled modern technology, computing, and the internet. What new products will be enabled by ML/AI? I am convinced that the right ML product methodologies will help lead the way to magical products that have previously been unthinkable."

We invite you to follow the publication, and actively comment on our blog posts to share your own experience and insights.

New AIY Edge TPU Boards

Posted by Billy Rutledge, Director of AIY Projects

Over the past year and a half, we've seen more than 200K people build, modify, and create with our Voice Kit and Vision Kit products. Today at Cloud Next we announced two new devices to help professional engineers build new products with on-device machine learning (ML) at their core: the AIY Edge TPU Dev Board and the AIY Edge TPU Accelerator. Both are powered by Google's Edge TPU and represent our first steps towards expanding AIY into a platform for experimentation with on-device ML.

The Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite ML models on your device. We've learned that performance-per-watt and performance-per-dollar are critical benchmarks when processing neural networks within a small footprint. The Edge TPU delivers both in a package that's smaller than the head of a penny. It can accelerate ML inferencing on device, or can pair with Google Cloud to create a full cloud-to-edge ML stack. In either configuration, by processing data directly on-device, a local ML accelerator increases privacy, removes the need for persistent connections, reduces latency, and allows for high performance using less power.
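
To make that concrete, here is a minimal sketch of on-device inferencing with a TensorFlow Lite interpreter and an Edge TPU delegate. The model file, the zeroed input, and the delegate library name are placeholders for illustration, not the boards' official sample code:

```python
# Minimal sketch: run a quantized TensorFlow Lite model on an Edge TPU.
# Assumes tflite_runtime is installed and an Edge TPU-compiled model is
# available; the file and input below are placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical model file
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Edge TPU runtime
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed one uint8 tensor matching the model's expected input shape.
image = np.zeros(input_details["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(output_details["index"])
print("Top class:", int(np.argmax(scores)))
```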

The AIY Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The baseboard provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable System-on-Module (SOM) daughter board that can be directly integrated into your own hardware once you're ready to scale.

The AIY Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.

On-device ML is still in its early days, and we're excited to see how these two products can be applied to solve real world problems — such as increasing manufacturing equipment reliability, detecting quality control issues in products, tracking retail foot-traffic, building adaptive automotive sensing systems, and more applications that haven't been imagined yet.

Both devices will be available online this fall in the US with other countries to follow shortly.

For more product information visit g.co/aiy and sign up to be notified as products become available.

AIY Projects: Updated kits for 2018

Posted by Billy Rutledge, Director of AIY Projects

Last year, AIY Projects launched to give makers the power to build AI into their projects with two do-it-yourself kits. We're seeing continued demand for the kits, especially from the STEM audience where parents and teachers alike have found the products to be great tools for the classroom. The changing nature of work in the future means students may have jobs that haven't yet been imagined, and we know that computer science skills, like analytical thinking and creative problem solving, will be crucial.

We're taking the first of many steps to help educators integrate AIY into STEM lesson plans and help prepare students for the challenges of the future by launching a new version of our AIY kits. The Voice Kit lets you build a voice controlled speaker, while the Vision Kit lets you build a camera that learns to recognize people and objects (check it out here). The new kits make getting started a little easier with clearer instructions, a new app and all the parts in one box.

To make setup easier, both kits have been redesigned to work with the new Raspberry Pi Zero WH, which comes included in the box, along with the USB connector cable and a pre-provisioned SD card. Users no longer need to download the software image and can get up and running faster. The updated AIY Vision Kit v1.1 also includes the Raspberry Pi Camera v2.

AIY Voice Kit v2 includes Raspberry Pi Zero WH and pre-provisioned SD card

AIY Vision Kit v1.1 includes Raspberry Pi Zero WH, Raspberry Pi Camera v2, and pre-provisioned SD card

We're also introducing the AIY companion app for Android, available on Google Play, to make wireless setup and configuration a snap. The kits still work with a monitor, keyboard, and mouse as an alternate path, and iOS and Chrome companions are coming soon.

The AIY website has been refreshed with improved documentation, making it easier for young makers to get started and learn as they build. It also includes a new AIY Models area showcasing a collection of neural networks designed to work with AIY kits. While we've solved one barrier to entry for the STEM audience, we recognize that there are many other things we can do to make our kits even more useful. We'll once again be at #MakerFaire events to gather feedback from our users, and in June we'll be working with teachers from all over the world at the ISTE conference in Chicago.

The new AIY Voice Kit and Vision Kit arrived at Target stores and Target.com (US) this month, and we're working to make them available through retailers worldwide. Sign up on our mailing list to be notified when our products become available.

We hope you'll pick up one of the new AIY kits and learn more about how to build your own smart devices. Be sure to share your recipes on Hackster.io and social media using #aiyprojects.

Google Developers Launchpad Studio works with top startups to tackle healthcare challenges with machine learning

Posted by Malika Cantor, Developer Relations Program Manager

Google is an artificial intelligence-first company. Machine Learning (ML) and Cloud are deeply embedded in our product strategies and have been crucial thus far in our efforts to tackle some of humanity's greatest challenges - like bringing high-quality, affordable, and specialized healthcare to people globally.

In that spirit, we're excited to announce the first four startups to join Launchpad Studio, our 6-month mentorship program tailored to help applied-ML startups build great products using the most advanced tools and technologies available. Working side-by-side with experts from across Google product and research teams - including Google Cloud, Verily, X, Brain, and ML Research - we intend to support these startups on their journey to build successful applications, and to explore leveraging Google Cloud Platform, TensorFlow, Android, and other Google platforms. Launchpad Studio has also enlisted a number of top industry practitioners and thought leaders to ensure Studio startups are successful both in practice and in the long term.

These four startups were selected based on the novel ways they've found to apply ML to important challenges in the Healthcare industry. Namely:

  1. Reducing doctor burnout and increasing doctor productivity (Augmedix)
  2. Regaining movement in paralyzed limbs (BrainQ)
  3. Accelerating clinical trials and enabling value-based healthcare (Byteflies)
  4. Detecting sepsis (CytoVale)

Let's take a closer look:

Reducing Doctor Burnout and Increasing Doctor Productivity

Numerous studies have shown that primary care physicians currently spend about half of their workday on the computer, documenting in electronic health records (EHRs).

Augmedix is on a mission to reclaim this time and repurpose it for what matters most: patient care. When doctors use the service by wearing Alphabet's Glass hardware, their documentation and administrative load is almost entirely alleviated. This saves doctors 2-3 hours per day and dramatically improves the doctor-patient experience.

Augmedix has started leveraging advances in deep learning and natural language understanding to accelerate these efficiencies and offer additional value that further improves patient care.

Regaining Movement in Paralyzed Limbs

Motor disability following neurological disorders such as stroke, spinal cord injury, and traumatic brain injury affects tens of millions of people worldwide each year.

BrainQ's mission is to help these patients get back on their feet and restore their ability to perform activities of daily living. BrainQ is currently conducting clinical trials in leading hospitals in Israel.

The company is developing a medical device that uses artificial intelligence tools to identify high-resolution spectral patterns in patients' brain waves, observed via electroencephalogram (EEG) sensors. These patterns are then translated into a personalized electromagnetic treatment protocol aimed at facilitating targeted neuroplasticity and enhancing patients' recovery.
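
BrainQ's actual pipeline is proprietary, but a toy sketch can illustrate what extracting spectral features from an EEG signal involves. The sampling rate, frequency bands, and synthetic signal below are all assumptions for illustration:

```python
# Illustrative only: extract band-power features from one EEG channel.
# BrainQ's method is proprietary; the sampling rate and bands are assumptions.
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate in Hz
eeg = np.random.randn(fs * 60)  # placeholder: one minute of synthetic EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # power spectral density

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name} band power: {power:.3f}")
```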

Accelerating Clinical Trials and Enabling Value-Based Healthcare

Today, sensors are making it easier to collect data about health and diseases. However, building a new wearable health application that is clinically validated and end-user friendly is still a daunting task. Byteflies' modular platform makes this whole process easier and more cost-effective. Through their medical and signal-processing expertise, Byteflies has made advances in the interpretation of multiple synchronized vital signs. This multimodal, high-resolution vital-sign data is very useful for healthcare and clinical trial applications. With that level of data ingestion comes a great need for automated data processing, and Byteflies plans to use ML to transform these data streams into actionable, personalized, and medically relevant data.

Early Sepsis Detection

Research suggests that sepsis kills more Americans than breast cancer, prostate cancer, and AIDS combined. Fortunately, sepsis can often be quickly mitigated if caught early on in patient care.

CytoVale is developing a medical diagnostics platform based on cell mechanics, initially for use in early detection of sepsis in the emergency room setting. It analyzes the mechanical properties of thousands of cells using ultra-high-speed video to diagnose disease in a few minutes. Their technology also has applications in immune activation, cancer detection, research tools, and biodefense.

CytoVale is leveraging recent advances in ML and computer vision in conjunction with their unique measurement approach to facilitate this early detection of sepsis.
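
CytoVale's measurement approach is likewise proprietary; as a hedged illustration of scoring cell shape from video imagery, a simple contour-circularity pass over a single frame might look like the following (the frame source and noise threshold are placeholders):

```python
# Illustrative sketch: score a cell's shape in one video frame via contour
# circularity (4*pi*A / P^2). CytoVale's pipeline is proprietary; the frame
# file and the noise threshold here are placeholders.
import cv2
import numpy as np

frame = cv2.imread("cell_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
assert frame is not None, "placeholder image not found"

# Otsu thresholding separates bright cells from the background.
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# OpenCV 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if area < 50 or perimeter == 0:
        continue  # skip specks of noise
    circularity = 4 * np.pi * area / perimeter**2  # 1.0 = a perfect circle
    print(f"area={area:.0f}px  circularity={circularity:.2f}")
```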

More about the Program

Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.

Join Us

Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.

Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on working with startups in the Healthcare and Biotech space, we'll soon be announcing other industry verticals, and any startup applying AI/ML technology to a specific industry vertical can apply on a rolling basis.

Apply to Google Developers Launchpad Studio for AI & ML focused startups

Posted by Roy Glasberg, Global Lead, Google Developers Launchpad

The mission of Google Developers Launchpad is to enable startups from around the world to build great companies. In the last 4 years of supporting early- and late-stage founders, and from working with dynamic startups - such as teams applying Artificial Intelligence technology to solving transportation problems in Israel, improving tele-medicine in Brazil, and optimizing online retail in India - we've learned that these startups require specialized services to help them scale.

So today, we're launching a new initiative - Google Developers Launchpad Studio - a full-service studio that provides tailored technical and product support to Artificial Intelligence & Machine Learning startups, all in one place.

Whether you're a 3-person team or an established post-Series B startup applying AI/ML to your product offering, we want to start connecting with you.

Applications to join Launchpad Studio are now open and you can apply here.

The global headquarters of Launchpad Studio will be based in San Francisco at Launchpad Space, with events and activities taking place in Tel Aviv and New York. We plan to expand our activities and events to Toronto, London, Bangalore, and Singapore soon.

As a member of the Studio program, you'll find services tailored to your startup's unique needs and challenges, such as:

  • Applied AI integration toolkits: Datasets, testing environments, rapid prototyping, simulation tools, and architecture troubleshooting.
  • Product validation support: Industry-specific proof of concept and pilots, as well as use case workshops with Fortune 500 industry practitioners and other experts.
  • Access to AI experts: Best practice advice from our global community of AI thought leaders, including Peter Norvig, Dan Ariely, Yossi Matias, Chris DiBona, and more.
  • Access to AI practitioners and investors: Interaction with some of the best AI and ML engineers, product managers, industry leaders and VCs from Google, Silicon Valley, and other international locations.

We're looking forward to working closely with you in the AI & Machine Learning space soon!

"Innovation is open to everyone, worldwide. With this global program we now have an important opportunity to support entrepreneurs everywhere in the world who are aiming to use AI to solve the biggest challenges." Yossi Matias, VP of Engineering, Google

AIY Projects: Do-it-yourself AI for Makers

Posted by Billy Rutledge, Director of AIY Projects
Our teams are continually inspired by how Makers use Google technology to do crazy, cool new things. Things we would've never imagined doing ourselves, things that solve real world problems. After talking to Maker community members, we learned that many were interested in using artificial intelligence in projects, but didn't know where to begin. To address this gap, we're launching AIY Projects: do-it-yourself artificial intelligence for Makers.
With AIY Projects, Makers can use artificial intelligence to make human-to-machine interaction more like human-to-human interactions. We'll be releasing a series of reference kits, starting with voice recognition. The speech recognition capability in our first project could be used to:
  • Replace physical buttons and digital displays (those are so 90's) on household appliances and consumer electronics (imagine a coffee machine with no buttons or screen -- just talk to it)
  • Replace smartphone apps to control devices (those are so 2000's) on connected devices (imagine a connected light bulb or thermostat -- just talk to them)
  • Add voice recognition to assistive robotics (e.g. for accessibility) -- just talk to the robot as a simplified programming interface, e.g. "tell me what's in this room" or "tell me when you see the mail-carrier come to the door"
Fully assembled Voice Kit.
The first open source reference project is the Voice Kit: instructions to build a Voice User Interface (VUI) that can use cloud services (like the new Google Assistant SDK or Cloud Speech API) or run completely on-device. This project extends the functionality of the most popular single board computer used for digital making - the Raspberry Pi.
Everything that comes in the Voice Kit.
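
The cloud path mentioned above is worth a concrete illustration. As a rough sketch (not the kit's official sample code; Google Cloud credentials are assumed to be configured, and the audio file name is a placeholder), transcribing a short clip with the Cloud Speech API might look like this:

```python
# Minimal sketch: transcribe a short WAV clip with the Cloud Speech API.
# Assumes Google Cloud credentials are configured; the file is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

with open("command.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print("Heard:", result.alternatives[0].transcript)
```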

The included Voice Hardware Accessory on Top (HAT) contains hardware for audio capture and playback: easy-to-use connectors for the dual mic daughter board and speaker, GPIO pins to connect low-voltage components like micro-servos and sensors, and an optional barrel connector for dedicated power supply. It was designed and tested with the Raspberry Pi 3 Model B.
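
To make the GPIO header concrete, here is a generic Raspberry Pi sketch that pulses a micro-servo. It uses the stock RPi.GPIO library rather than the AIY code, and the pin number is a placeholder, so consult the Voice HAT pinout before wiring anything:

```python
# Generic Raspberry Pi GPIO sketch (not the AIY library): drive a micro-servo
# from a HAT GPIO pin. BCM pin 26 is a placeholder; check the Voice HAT pinout.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 26  # hypothetical pin number

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz servo control signal
pwm.start(7.5)                 # ~1.5 ms pulse: centre position

try:
    for duty in (5.0, 10.0, 7.5):  # sweep left, right, back to centre
        pwm.ChangeDutyCycle(duty)
        time.sleep(1)
finally:
    pwm.stop()
    GPIO.cleanup()
```
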
Alternatively, developers can run Android Things on the Voice Kit with full functionality - making it easy to prototype Internet-of-Things devices and scale to full commercial products with several turnkey hardware solutions available (including Intel Edison, NXP Pico, and Raspberry Pi 3). Download the latest Android Things developer preview to get started.
Close up of the Voice HAT accessory board.


Making with the Google Assistant SDK
The Google Assistant SDK developer preview was released last week. It's enabled by default and brings the Google Assistant to your Voice Kit, including voice control, natural language understanding, Google's smarts, and more.
In combination with the rest of the Voice Kit, we think the Google Assistant SDK will provide you many creative opportunities to build fun and engaging projects. Makers have already started experimenting with the SDK - including building a mocktail maker.


The Voice Kit ships out to all MagPi Magazine subscribers on May 4, 2017, and we've published a parts list, assembly instructions, source code, and suggested extensions on our website: aiyprojects.withgoogle.com. The complete kit is also for sale at over 500 Barnes & Noble stores nationwide, as well as UK retailers WH Smith, Tesco, Sainsbury's, and Asda.
This is just the first AIY Project. There are more in the works, but we need to know how you'd like to incorporate AI into your own projects. Visit hackster.io to share your experiences and discuss future projects. Use #AIYprojects on social media to help us find your inventions. And if you happen to be at the San Mateo Maker Faire on May 19-21, 2017, stop by the Google pavilion to give us feedback.


Google Summer of Code wrap-up post: Institute for Artificial Intelligence

Now that the 11th year of Google Summer of Code has officially come to a close, we will devote Fridays to wrap-up posts from a handful of the 137 mentoring organizations that participated in 2015. Organizations this year represented a wide range of computing fields including artificial intelligence, featured below.



Two software libraries that originate from our laboratory, the Institute for Artificial Intelligence, and are used and supported by a larger user community are the KnowRob system for robot knowledge processing and the CRAM (Cognitive Robot Abstract Machine) framework for plan-based robot control. In our group, we have a very strong focus on open source software and on active maintenance and integration of projects. The systems we develop are available under BSD and MIT licenses, and partly under the (L)GPL.

Within the context of these frameworks, we offered four projects during the summer term in 2015, which were all accepted to Google Summer of Code (GSoC).

Multi-modal Big Data Analysis for Robotic Everyday Manipulation Activities

The project "Multi-modal Big Data Analysis for Robotic Everyday Manipulation Activities" added to our ongoing work to build the robotic perception system RoboSherlock for service robots performing household chores. Our GSoC student, Alexander, made exciting progress and valuable contributions during the summer. He ported an earlier prototypical proprioceptive module from Java to C++ to integrate it into RoboSherlock, he developed tools for visualizing the module's various detections and annotations, and applied this infrastructure to detect collisions of the robot's arms with unperceived parts of the environment in a shelf reordering task. We are also very happy that Alexander decided to stay and keep on working on RoboSherlock after GSoC ended.

Kitchen Activity Games GUI

Our GSoC student, Mesut, developed a GUI for interacting with the robotics simulator Gazebo. The simulator is used as a library, allowing different scenarios (worlds) to be selected and executed. Playlists can be generated in order to replay logged episodes, and during replay various plugins can be linked and executed from the GUI to post-process the data. The user interface eases organizing and saving simulation data for later use in learning. You can view Mesut's project on GitHub here.

Symbolic Reasoning Tools with Bullet using CRAM

Autonomous robots performing complex manipulation tasks in household environments, such as preparing a meal or tidying up, need to know where different objects are located and what properties they have. This knowledge about the environment is called the "belief state", i.e. the information that the robot believes holds true in the surrounding world. Our GSoC student, Kunal, worked on improving the world representation of the CRAM robotic framework, which represents the environment as a 3-dimensional world governed by the simple physics rules of the Bullet physics engine. The goal of the project was to issue events when errors are found in the belief state - for example, when the robot thinks its arm is inside a table, which is physically impossible. A stand-alone ROS (Robot Operating System) publisher node that notifies all its listeners about such errors was partially implemented, while integration with the CRAM belief state is still in progress.
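
As a minimal sketch of what such a stand-alone publisher node could look like (the node name, topic, message type, and helper function are assumptions for illustration, not the project's actual code):

```python
#!/usr/bin/env python
# Minimal sketch of a stand-alone ROS publisher for belief-state errors.
# Node name, topic, and message type are assumptions, not the project's code.
import rospy
from std_msgs.msg import String

def check_belief_state():
    """Placeholder: return an error string such as 'arm intersects table'."""
    return None

def main():
    rospy.init_node("belief_state_monitor")
    pub = rospy.Publisher("belief_state_errors", String, queue_size=10)
    rate = rospy.Rate(1)  # check once per second
    while not rospy.is_shutdown():
        # A real implementation would query the CRAM/Bullet world here.
        error = check_belief_state()
        if error:
            pub.publish(String(data=error))
        rate.sleep()

if __name__ == "__main__":
    main()
```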

Report Card Generation from Robot Mobile Manipulation Activities

Throughout the summer, our GSoC student Kacper made great progress in developing a framework for automatically generating report cards from robot experiences. We have a special focus on mobile manipulation activities in robots and are interested in anomaly detection in our rather complex systems — the developed components greatly help us save time on mundane analysis tasks and make complicated analysis steps (looking up all aspects of a certain action, comparing different trials) easier to do.

By Jan Winkler, Organization Administrator and PhD student at the Institute for Artificial Intelligence