Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

TheVentureCity and Google Consolidate Miami as a Tech Powerhouse

Google and TheVentureCity logo
  • Google Developers Launchpad Accelerator holds its graduation week for the startups in the second Latin America class of the program
  • 19 startups from 10 countries will enjoy access to the best that Miami and TheVentureCity have to offer

Miami, December 9, 2019. Once again, Miami’s status as an international tech hub is elevated as TheVentureCity plays host to the final week of Google’s startup acceleration program, Launchpad. The event marks the end of a 10-week immersion that began in Mexico, continued in Argentina, and concludes in South Florida, connecting startups with dedicated support to take their businesses to the next level.

This week, the 9 startups in the Launchpad program will meet with startups in TheVentureCity’s own Growth program. This is an opportunity for them to share experiences, engage with each other and grow. TheVentureCity has invited top venture capitalists (VCs) from around the U.S., Europe, and Latin America as an added benefit for startups on both programs.

“Being able to host this program for startups from across Latin America in Miami feels to me like a dream come true. It is an unparalleled occasion to showcase the amazing work that developers and entrepreneurs are leading in the region,” said Paco Solsona, manager of Google Developers.

For Laura González-Estéfani, CEO and founder of TheVentureCity: “This initiative consolidates Miami as an epicenter of innovation and entrepreneurship by bringing together companies that think beyond borders. We believe that talent is evenly distributed around the world, but opportunities are unequal.”

Startups in the Launchpad Accelerator come from a wide span of Latin American countries, including Argentina, Mexico, Colombia, Chile, and El Salvador. The companies graduating this week are: 123Seguro, Al Turing, Apli, DevF, Hugo, Jetty, Jüsto, Odd Industries, and TransparentBusiness. The participating companies from TheVentureCity are: Qempo, Cajero, ComigoSaude, Digital Innovation One, TheFastMind, eMasters, Alba, 1Doc3, Stayfilm and Erudit. Though an international group, these startups represent the talent, diversity, and richness of the continent.

The main benefit of hosting this program in Miami is to diversify thinking in the American tech ecosystem and to keep relevant stakeholders informed about the challenges faced by startups across the region. This is the second time that Google and TheVentureCity have partnered to support startups; last March, we successfully hosted the first-ever Google Developers Launchpad Start, a one-week version of the accelerator program, in Miami.

***

About TheVentureCity

TheVentureCity is a new growth and accelerator model that helps diverse founders achieve global impact. Our mission is to make the global entrepreneurial ecosystem more diverse, international and accessible to fair capital. TheVentureCity supports founders with a global mindset to achieve their next big success.

About Google Developers Launchpad

Google Developers Launchpad is a startup accelerator program that empowers the global startup ecosystems to solve the world’s biggest challenges with the best of Google - its people, research, and advanced technologies. Started by you. Accelerated with Google.

Flutter: the first UI platform designed for ambient computing

Posted by the Flutter team

We’re writing to you from Flutter Interact, our biggest Flutter event to date, where we’re making a number of announcements about Flutter, including a new updated release and a series of partnerships that demonstrate our commitment to supporting the ever-growing ecosystem around Flutter.

From Device-centric to App-centric development

Our original release for Flutter was focused on helping you build apps that run on iOS and Android from a single codebase. But we want to go further.

We live in a world where internet-connected devices are pervading every area of our lives. Many of us transition throughout the day between multiple devices: phones, watches and other wearable devices, tablets, desktop or laptop computers, televisions and increasingly, smart displays such as the Google Nest Hub.

In this emerging world, the focus starts to move away from any individual device towards an environment where your services and software are available wherever you need them. We call this ambient computing, and it’s core to our vision for Flutter: a portable toolkit for building beautiful experiences wherever you might want to paint pixels on the screen.

Flutter for ambient computing hero image

With Flutter, instead of being forced to start your app development by asking “which device am I targeting?”, we want you to be able to begin by focusing on what you want to build. In this multi-device, multi-platform world, Flutter aims to provide a framework and tooling for creating user experiences without compromise on any device or form factor. The Dart-powered Flutter engine supports fast development with stateful hot reload, and fast performance in production with native compilation, whether it is running on mobile, desktop, web, or embedded devices.

If you’re a startup, Flutter lets you test your idea on your total addressable market, rather than being forced to target just one user base due to a lack of resources. If you’re a larger company, Flutter lets you consolidate your team’s resources onto shipping a single experience, reusing code across mobile, web and desktop as you see fit. Flutter is unique in supporting such a diversity of natively-compiled experiences from a single codebase.

It’s been great to see how Flutter has flourished in the short time since its initial release. Well over a million developers are already using Flutter for apps both large and small. In GitHub’s 2019 State of the Octoverse report, Dart and Flutter ranked #1 and #2 for fastest-growing language and open source project respectively over the last twelve months, and Flutter is now one of the ten most starred software repos on their site. And in a recent analysis by LinkedIn, Flutter is described as “the fastest-growing skill among software engineers”.

The rest of this article talks about our progress towards this ambient computing vision, and specifically focuses on the announcements we made today to help designers and developers collaborate on stunning visual experiences built with Flutter.

Flutter on mobile, desktop, and web

Today at Flutter Interact, we are announcing Flutter 1.12, our latest stable release of the Flutter framework. This latest quarterly release represents the work of hundreds of contributors from inside and outside Google, and brings new performance improvements, more control over adding Flutter content to existing apps, and updates to the Material and Cupertino libraries. We also have a new Google Fonts package that provides direct access to almost 1,000 open sourced font families, putting beautiful typography within reach in just a line of code. More information about what’s new in Flutter 1.12 can be found in our dedicated blog post on the Flutter Medium channel.
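
To give a sense of how little code this takes (a minimal sketch, assuming the google_fonts package has been added to your pubspec.yaml), applying one of those families to a text widget looks roughly like this:

    import 'package:flutter/material.dart';
    // Assumes the google_fonts package is listed as a dependency in pubspec.yaml.
    import 'package:google_fonts/google_fonts.dart';

    void main() => runApp(MaterialApp(
          home: Scaffold(
            body: Center(
              // A single style argument swaps in the Lato font family.
              child: Text(
                'Hello, Flutter 1.12!',
                style: GoogleFonts.lato(fontSize: 24),
              ),
            ),
          ),
        ));

The package can fetch and cache font files at runtime, so you don’t need to bundle font assets by hand; switching GoogleFonts.lato for another family is a one-word change.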

Google is increasingly using Flutter for mobile app development, thanks to the productivity benefits it offers for multiplatform development. At Interact, the Stadia team showcased their app, running on both iOS and Android from the same codebase and built with Flutter. Talking about their experiences, the team had this to say:

“As Stadia was initially investigating mobile, Flutter enabled us to prototype quickly, demonstrate gameplay on Android, and then staff one team to build our cross-platform experience without compromise. We’re delighted with the results and are continuing to build new features in Flutter.”

Stadia

Of course, many companies outside Google are also using Flutter for their app development. Splice provides a library of millions of sounds, loops, and presets that help musicians bring their ideas to life. When they decided to add a mobile app to supplement their existing desktop experience, they chose Flutter because, as they put it: “Speed to validate our product hypothesis was critical. We are a small team, so we needed a single solution that could deliver an equally great experience to all our users on iOS and Android.”

Within six weeks, they had built a prototype that validated their choice, and their new mobile experience is live in both the App Store and the Google Play Store:

mobile experience

Adding a mobile experience is already showing results, with a significant percentage of purchases now coming through their mobile app. And with Flutter providing consistency across multiple platforms, they’re now experimenting with bringing some of the same experiences into their desktop app.

On the subject of desktop, we’ve made much progress with macOS support. For the first time, you can use the release mode to build a fully-optimized macOS application using Flutter, and we’ve been working to expand the Material design system in Flutter to support apps that are designed for desktop-class form factors. More information on building for desktop can be found at flutter.dev/desktop.

Flutter gallery window

Lastly, we’re delighted to announce the beta release of Flutter’s web support, adding stability and maturity over the previews that we shipped earlier this year. It’s now possible to build and consume web plug-ins, enabling Flutter apps to take advantage of a growing ecosystem of Dart components for everything from Firebase to the latest web APIs. Since we announced our early adopter program a couple of months ago, we’ve been working with customers like Journey to test web-based experiences with Flutter, and now we’re ready for a broader set of developers to start building web content with Flutter. More information on Flutter’s web support can be found at flutter.dev/web and at the companion blog article that was published today.

explore page

All this is possible thanks to Dart, the programming language and platform that powers Flutter across an array of ambient computing experiences. Dart is somewhat unique in offering develop-mode and release-mode toolchains for ARM, Intel, and JavaScript, enabling native compilation to almost any platform you could want to target. Today we’re releasing Dart 2.7, which adds new capabilities to the Dart language including extension methods. You can find more information about these features on the Dart blog. We’ve also released an update to DartPad that allows users to not only edit Flutter code, but also to run it and view the rendered UI.
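
To give a flavor of what extension methods enable (a small sketch of the language feature, not code taken from the release notes), you can add members to an existing type such as String and call them as if they were built in:

    // Extension methods, new in Dart 2.7, let you add members to existing types.
    extension NumberParsing on String {
      int parseInt() => int.parse(this);
      double parseDouble() => double.parse(this);
    }

    void main() {
      // The extension members read as if they were declared on String itself.
      print('42'.parseInt() + 1);      // 43
      print('2.7'.parseDouble() * 10); // 27.0
    }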

Flutter as a canvas for creative exploration

We focused our event this year primarily on the creative technologists, prototypers, interactive designers, and visual coders. A core motivation for building Flutter was that we don’t think multi-platform development should require compromise on visual quality. We see Flutter as a canvas for creative expression and exploration, since it removes many of the restrictions that visually-oriented developers have often faced. Flutter’s stateful hot reload feature makes it easy to make changes and see the results in real-time; and with every pixel drawn by Flutter, you can blend UI, graphical content, text and video with custom animations and transformations.

In preparing for this event, we’ve been particularly inspired by the work of Robert Felker, a digital artist who has created a series of generative art explorations with Flutter that combine geometry, texture and light in stunning ways. We never envisioned Flutter being used to create images like this, but it speaks to the expressive power of Flutter, combined with artistic creativity, that the image on the right below is generated with less than 60 lines of Dart code:

Flutter art by digital artist Robert Felker
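
We can’t reproduce Robert Felker’s code here, but as a rough sketch of the kind of pixel-level control Flutter gives you, a CustomPainter can generate a geometric pattern in a few dozen lines of Dart. The SpiralPainter below is purely illustrative and is not taken from his work:

    import 'dart:math';
    import 'package:flutter/material.dart';

    void main() => runApp(MaterialApp(home: CustomPaint(painter: SpiralPainter())));

    // Paints a spiral of translucent circles; every pixel here is drawn by Flutter.
    class SpiralPainter extends CustomPainter {
      @override
      void paint(Canvas canvas, Size size) {
        canvas.drawRect(Offset.zero & size, Paint()..color = Colors.black);
        final center = Offset(size.width / 2, size.height / 2);
        final paint = Paint()
          ..color = Colors.tealAccent.withOpacity(0.15)
          ..style = PaintingStyle.stroke
          ..strokeWidth = 1.5;
        for (var i = 0; i < 200; i++) {
          final angle = i * 0.25;
          final radius = i * 1.5;
          canvas.drawCircle(
            center + Offset(cos(angle), sin(angle)) * radius,
            8.0 + i / 10,
            paint,
          );
        }
      }

      @override
      bool shouldRepaint(CustomPainter oldDelegate) => false;
    }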

Today we’re honored to be joined by several partners who are launching tools for designers to more fully participate in the creative process of building Flutter applications.

Supernova has integrated Flutter into their design and prototyping tool, with animation support, Material Design integration, and an updated interface designed for Flutter. They also announced a new browser-based tool, Supernova Cloud, which is built entirely with Flutter’s web support.

Supernova app

Rive (previously 2Dimensions, who published the Flare graphics tool) announced that they’re consolidating their company name and product into one brand. They unveiled their new company and product name, Rive, as well as a number of new product features. Perhaps the most notable feature in Rive is support for importing Lottie files created with Adobe After Effects, enabling deeper integration of Flutter into existing workflows for animated content. Rive now supports real-time dynamic layering effects like drop shadows, inner shadows, glows, blurs, and masking.

Rive eliminates the need to recreate designs and animations in code, greatly simplifying the designer-to-developer handoff. This means that designers are free to iterate and make changes at any time. And because it outputs real assets that integrate directly with Flutter, not just MP4 videos or GIF images, Rive allows you to create sophisticated and dynamic interactions, game characters, animated icons, and onboarding screens.

Lastly, another big endorsement of Flutter as a canvas for creatives comes from Adobe, who are announcing Flutter support in Creative Cloud with a plugin that exports designs from Adobe XD into Flutter. Adobe XD, Adobe’s user experience design platform, allows product design teams to design and prototype user experiences for mobile, web, desktop, and beyond. Instead of simply handing off design specs and leaving development teams to understand and interpret a designer’s vision, the new XD-to-Flutter plugin automatically converts XD designs into code that is immediately usable as part of your Flutter application development.

Flutter support in Creative Cloud

The XD to Flutter plugin will be available as open source early next year. You can find out more about XD to Flutter and sign up for early access on Adobe’s website. We’re thrilled to partner with Adobe; their pedigree in extensible design tooling will give product designers a huge head start in creating amazing Flutter experiences.

Conclusion

Flutter is at its heart an open source project. The value we derive for Google comes in part from the productivity gains realized by other product teams inside the company who use Flutter, but we build Flutter with you and for you, knowing that a larger ecosystem and community benefits us all. Our journey thus far has broadened from our original mobile-centric release to incorporate a wider range of form factors, and we continue to invest in designer and developer tools that increase both the productivity and the beauty of your finished application.

But what perhaps brings us the greatest satisfaction is when Flutter helps an individual turn their idea into a completed work that they can share with the world. The story of one family in the video below is a touching tribute to all those who have made Flutter possible, whether by contributing code, bug reports or fixes, or sharing knowledge with others in the community. We’re so grateful to be on this journey with you!

Object Detection and Tracking using MediaPipe

Posted by Ming Guang Yong, Product Manager for MediaPipe

MediaPipe in 2019

MediaPipe is a framework for building cross-platform, multimodal applied ML pipelines that consist of fast ML inference, classic computer vision, and media processing (e.g. video decoding). MediaPipe was open sourced at CVPR in June 2019 as v0.5.0. Since that first open source version, we have released a variety of ML pipeline examples, such as object detection, face detection, and hand tracking.

In this blog, we will introduce another MediaPipe example: Object Detection and Tracking. We first describe our newly released box tracking solution, then we explain how it can be connected with Object Detection to provide an Object Detection and Tracking system.

Box Tracking in MediaPipe

In MediaPipe v0.6.7.1, we are excited to release a box tracking solution that has been powering real-time tracking in Motion Stills, YouTube’s privacy blur, and Google Lens for several years, and that leverages classic computer vision approaches. Pairing tracking with ML inference results in valuable and efficient pipelines. In this blog, we pair box tracking with object detection to create an object detection and tracking pipeline. With tracking, this pipeline offers several advantages over running detection per frame:

  • It provides instance based tracking, i.e. the object ID is maintained across frames.
  • Detection does not have to run every frame. This enables running heavier detection models that are more accurate while keeping the pipeline lightweight and real-time on mobile devices.
  • Object localization is temporally consistent with the help of tracking, meaning less jitter is observable across frames.

Our general box tracking solution consumes image frames from a video or camera stream, together with starting box positions and timestamps that indicate the 2D regions of interest to track, and computes the tracked box positions for each frame. In this specific use case, the starting box positions come from object detection, but they can also be provided manually by the user or by another system. Our solution consists of three main components: a motion analysis component, a flow packager component, and a box tracking component. Each component is encapsulated as a MediaPipe calculator, and the box tracking solution as a whole is represented as the MediaPipe subgraph shown below.

MediaPipe Box Tracking Subgraph

The MotionAnalysis calculator extracts features (e.g. high-gradient corners) across the image, tracks those features over time, classifies them into foreground and background features, and estimates both local motion vectors and the global motion model. The FlowPackager calculator packs the estimated motion metadata into an efficient format. The BoxTracker calculator takes this motion metadata from the FlowPackager calculator, along with the positions of the starting boxes, and tracks the boxes over time. Using solely the motion data produced by the MotionAnalysis calculator, without the need for the RGB frames, the BoxTracker calculator tracks individual objects or regions while discriminating them from others.

To track an input region, we first take the motion data corresponding to that region and employ iteratively reweighted least squares (IRLS) to fit a parametric model to the region’s weighted motion vectors. Each region has a tracking state that includes its prior, mean velocity, set of inlier and outlier feature IDs, and the region centroid. See the figure below for a visualization of the tracking state, with green arrows indicating motion vectors of inliers and red arrows indicating motion vectors of outliers. Note that by relying only on feature IDs we implicitly capture the region’s appearance, since each feature’s patch intensity stays roughly constant over time. Additionally, by decomposing a region’s motion into camera motion and individual object motion, we can even track featureless regions.

Visualization of Tracking State for Each Box

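For intuition about the IRLS step described above, here is a minimal Dart sketch under a simplifying assumption: the parametric model is reduced to a pure 2D translation, so each iteration computes a weighted mean of the motion vectors and then re-weights them so that outliers lose influence. The fitTranslationIrls function is illustrative only and is not the production BoxTracker code:

    import 'dart:math';

    // IRLS sketch: fit a translation-only motion model to a region's motion
    // vectors, down-weighting vectors that disagree with the current estimate
    // so that the result is dominated by inlier features.
    Point<double> fitTranslationIrls(List<Point<double>> flows,
        {int iterations = 10, double epsilon = 1e-3}) {
      var dx = 0.0, dy = 0.0;
      final weights = List<double>.filled(flows.length, 1.0);
      for (var iter = 0; iter < iterations; iter++) {
        // For a pure translation, weighted least squares is a weighted mean.
        var wSum = 0.0, wx = 0.0, wy = 0.0;
        for (var i = 0; i < flows.length; i++) {
          wSum += weights[i];
          wx += weights[i] * flows[i].x;
          wy += weights[i] * flows[i].y;
        }
        dx = wx / wSum;
        dy = wy / wSum;
        // Re-weight: the larger a vector's residual, the smaller its influence.
        for (var i = 0; i < flows.length; i++) {
          final rx = flows[i].x - dx;
          final ry = flows[i].y - dy;
          weights[i] = 1.0 / (sqrt(rx * rx + ry * ry) + epsilon);
        }
      }
      return Point(dx, dy);
    }

    void main() {
      final flows = [
        Point(2.0, 0.5),  // inlier feature motion
        Point(2.1, 0.4),  // inlier
        Point(1.9, 0.6),  // inlier
        Point(-8.0, 5.0), // outlier, e.g. a background feature
      ];
      print(fitTranslationIrls(flows)); // converges near Point(2.0, 0.5)
    }
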
An advantage of our architecture is that by separating motion analysis into a dedicated MediaPipe calculator and tracking features over the whole image, we enable great flexibility and constant computation independent of the number of regions tracked! Because tracking does not rely on the RGB frames, our solution can cache the motion metadata across a batch of frames. Caching enables tracking of regions both backwards and forwards in time, or even syncing directly to a specified timestamp for tracking with random access.

Object Detection and Tracking

A MediaPipe example graph for object detection and tracking is shown below. It consists of 4 compute nodes: a PacketResampler calculator, an ObjectDetection subgraph released previously in the MediaPipe object detection example, an ObjectTracking subgraph that wraps around the BoxTracking subgraph discussed above, and a Renderer subgraph that draws the visualization.

MediaPipe Example Graph for Object Detection and Tracking. Boxes in purple are subgraphs.

In general, the ObjectDetection subgraph (which performs ML model inference internally) runs only upon request, e.g. at an arbitrary frame rate or triggered by specific signals. More specifically, in this example PacketResampler temporally subsamples the incoming video frames to 0.5 fps before they are passed into ObjectDetection. This frame rate can be configured differently as an option in PacketResampler.

The ObjectTracking subgraph runs in real-time on every incoming frame to track the detected objects. It expands the BoxTracking subgraph described above with additional functionality: when new detections arrive it uses IoU (Intersection over Union) to associate the current tracked objects/boxes with new detections to remove obsolete or duplicated boxes.
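
As a rough illustration of that association step (a sketch, not the actual ObjectTracking subgraph logic; matchDetection and its threshold are hypothetical names), matching a fresh detection against the currently tracked boxes by IoU could look like this:

    import 'dart:math';

    // Axis-aligned box in normalized image coordinates.
    class Box {
      final double xMin, yMin, xMax, yMax;
      const Box(this.xMin, this.yMin, this.xMax, this.yMax);
      double get area => (xMax - xMin) * (yMax - yMin);
    }

    // Intersection over Union of two boxes; 0.0 when they do not overlap.
    double iou(Box a, Box b) {
      final ix = max(0.0, min(a.xMax, b.xMax) - max(a.xMin, b.xMin));
      final iy = max(0.0, min(a.yMax, b.yMax) - max(a.yMin, b.yMin));
      final intersection = ix * iy;
      final union = a.area + b.area - intersection;
      return union <= 0 ? 0.0 : intersection / union;
    }

    // Greedy association: a detection is matched to the tracked box it overlaps
    // most, above a threshold; -1 means it should start a new track instead.
    int matchDetection(Box detection, Map<int, Box> trackedBoxes,
        {double threshold = 0.5}) {
      var bestId = -1;
      var bestIou = threshold;
      trackedBoxes.forEach((id, tracked) {
        final overlap = iou(detection, tracked);
        if (overlap >= bestIou) {
          bestIou = overlap;
          bestId = id;
        }
      });
      return bestId;
    }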

A sample result of this object detection and tracking example can be found below. The left image is the result of running object detection per frame. The right image is the result of running object detection and tracking. Note that the result with tracking is much more stable with less temporal jitter. It also maintains object IDs across frames.

Comparison Between Object Detection Per Frame and Object Detection and Tracking

Follow MediaPipe

This is our first Google Developers blog post for MediaPipe. We look forward to publishing new posts about upcoming MediaPipe ML pipeline examples and features. Please follow the MediaPipe tag on the Google Developers blog and the Google Developers Twitter account (@googledevs).

Acknowledgements

We would like to thank Fan Zhang, Genzhi Ye, Jiuqiang Tang, Jianing Wei, Chuo-Ling Chang, Ming Guang Yong, and Matthias Grundmann for building the object detection and tracking solution in MediaPipe and contributing to this blog post.

Blending Realities with the ARCore Depth API

Posted by Shahram Izadi, Director of Research and Engineering

ARCore, our developer platform for building augmented reality (AR) experiences, allows your devices to display content immersively in the context of the world around us, making that content instantly accessible and useful.
Earlier this year, we introduced Environmental HDR, which brings real world lighting to AR objects and scenes, enhancing immersion with more realistic reflections, shadows, and lighting. Today, we're opening a call for collaborators to try another tool that helps improve immersion with the new Depth API in ARCore, enabling experiences that are vastly more natural, interactive, and helpful.
The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera. The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.
Example depth map, with red indicating areas that are close by, and blue representing areas that are farther away.

One important application for depth is occlusion: the ability for digital objects to accurately appear in front of or behind real world objects. Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today.

A virtual cat with occlusion off and with occlusion on.

We’ve also been working with Houzz, a company that focuses on home renovation and design, to bring the Depth API to the “View in My Room” experience in their app. “Using the ARCore Depth API, people can see a more realistic preview of the products they’re about to buy, visualizing our 3D models right next to the existing furniture in a room,” says Sally Huang, Visual Technologies Lead at Houzz. “Doing this gives our users much more confidence in their purchasing decisions.”
The Houzz app with occlusion is available today.
In addition to enabling occlusion, having a 3D understanding of the world on your device unlocks a myriad of other possibilities. Our team has been exploring some of these, playing with realistic physics, path planning, surface interaction, and more.

Physics, path planning, and surface interaction examples.

When applications of the Depth API are combined together, you can also create experiences in which objects accurately bounce and splash across surfaces and textures, as well as new interactive game mechanics that enable players to duck and hide behind real-world objects.
A demo experience we created where you have to dodge and throw food at a robot chef.
The Depth API is not dependent on specialized cameras and sensors, and it will only get better as hardware improves. For example, the addition of depth sensors, like time-of-flight (ToF) sensors, to new devices will help create more detailed depth maps to improve existing capabilities like occlusion, and unlock new capabilities such as dynamic occlusion—the ability to occlude behind moving objects.
We’ve only begun to scratch the surface of what’s possible with the Depth API and we want to see how you will innovate with this feature. If you are interested in trying the new Depth API, please fill out our call for collaborators form.

Flutter Interact – December 11 – create beautiful apps

Posted by Martin Aguinis, Flutter Marketing Lead
Flutter Interact banner

Summary: Flutter Interact is happening on December 11th. Sign up here for our global livestream and watch it at g.co/FlutterInteract.
Google’s conference focusing on beautiful designs and apps, Flutter Interact, is streaming worldwide on December 11. Flutter Interact is a day dedicated to creation and collaboration. Whether you are a web developer, mobile developer, front-end engineer, UX designer, or designer, this is a good opportunity to hear the latest from Google.
This one-day event has several talks focused on different topics regarding development and design. Speakers include Matias Duarte, VP of Google Design; Tim Sneath, Group PM for Flutter and Dart; and Grant Skinner, CEO, GSkinner, Inc.

What to expect at Flutter Interact


Flutter Interact will focus on creating eye-catching experiences across devices. We’ll showcase the latest from Google Design and from Flutter, Google’s free and open source UI toolkit to build beautiful, natively compiled applications for mobile, web, and desktop from a single codebase. This event is tailored to a global audience, with a worldwide livestream, hundreds of viewing parties, and the opportunity to ask questions that are answered at the event.

It will include content and announcements from the Material Design and Flutter teams, partners, and other companies.

Tune in to the livestream


Go to g.co/FlutterInteract and sign up for livestream updates. The event will be broadcast on the website on Dec 11, with a keynote starting at 10:00 a.m. EST (GMT-5).
You can also add this event directly to your Google Calendar.

Join a local viewing party


People and organizations all over the world are hosting over 450 free viewing parties to watch and discuss Flutter Interact. Find one of the hundreds of viewing parties happening near you.

Get Involved with #AskFlutter and #FlutterInteract


Flutter Interact is geared toward our online audience. There are two main ways to get involved.
  • #FlutterInteract 
    • The official event hashtag. We will have a social wall that is constantly showing tweets coming in with #FlutterInteract, both on site and on our livestream. Make sure to tweet your pictures, comments, videos, and thoughts while you experience Flutter Interact. 
  • #AskFlutter 
    • Our team will be on site, live, answering questions in real time. Tweet your questions and comments with the #AskFlutter hashtag to connect with the Flutter team (and the open source community), and get your questions answered. Your tweet may also appear on the global livestream during the event.

We look forward to experiencing Flutter Interact with you on December 11th. In the meantime, follow us on Twitter at @FlutterDev and get started with Flutter at flutter.dev.

DevKids: An inside look at the kids of DevFest

DevFest Banner
After Aaron Ma, an 11-year-old DevFest speaker, recently gave his tips on coding, people kept asking us, “so what are the other kids of DevFest working on?” In response, we want to show you how these incredible kids, or DevKids as we call them, are spreading their ideas at DevFest events around the world.


Below you will find the stories of DevKids from Morocco to Toronto, who have spoken on topics ranging from robotics to augmented reality. We hope you enjoy!
Ider, an 11-year-old developer from Morocco

Ider, an 11-year-old developer from Morocco, has a passion for Python and is not afraid to use it. With an incredibly advanced understanding of machine learning and augmented reality, he was asked to speak at DevFest Agadir on what the future holds for kids interested in programming.

Ider’s presentation, titled The Talk of The Next Generation, focused on how kids can find their passion for computer science and start building the future they one day hope to call reality.


Selin, a 13-year-old developer from Istanbul

Selin, a 13-year-old developer from Istanbul who was named the European Digital Girl of the Year by AdaAwards, joined the DevFest family last season. Recently, at a DevFest event in Istanbul, she told the story of how she became fascinated with robotics through a presentation titled, My Journey of Building Robots. With a passion for Python, Java, and Ruby, she explained how she is using her skills to build a robotic guide dog for the blind. She hopes that with the help of technology, she can open up a new, more accessible world for those with disabilities.

Radostin, a 13-year-old programmer from Bulgaria

Radostin, a 13-year-old programmer from Bulgaria, joined the DevFest family last season as a speaker and is now a core member of the DevFest Organizing Team. Recently, he created an app for the team that gathers feedback on different DevFest events. 

Previously, he gave a talk at DevFest Sofia on how he built an app that teaches people to play chess on Google Assistant. The young developer also spoke of how his aunt introduced him to coding and how watching her program inspired him to learn Java, Kotlin, C#, Python, Dart, and C++. He ended his presentation by recounting long nights spent watching YouTube videos in search of anything he could get his hands on to learn. Radostin has inspired his DevFest family to believe they can learn anything, anywhere, at any time.

Artash (12 years old) and Arushi (9 years old), a brother-sister programming team from Canada

Artash (12 years old) and Arushi (9 years old) are a brother-sister programming team from Canada. At DevFest Toronto, they showcased their very own facial recognition robot that uses machine learning to detect facial emotions. Their presentation was complete with live demonstrations in which their robot analyzed fellow DevFest speakers and gave physical responses to their emotions. The two up-and-coming programmers also described how they went about creating their own ML algorithm to build the robot.

What inspired them to start such a project? Space travel. Artash and Arushi believe that as astronauts embark on longer space missions, it’s important to build tools that can monitor their mental health. One day, they hope their robot will accompany astronauts on the first trip to Mars.

Inspired by these awesome kids? Want to share your own ideas with a welcoming community? Then find a DevFest near you, at devfest.withgoogle.com.

Updates from Coral: Mendel Linux 4.0 and much more!

Posted by Carlos Mendonça (Product Manager), Coral Team

Illustration of the Coral Dev Board placed next to Fall foliage

Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we're announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.

We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.

We’ve also made it possible to use the Dev Board's GPU to convert YUV to RGB pixel data at up to 130 frames per second on 1080p resolution, which is one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.

To upgrade your Dev Board or SoM, follow our guide to flash a new system image.

MediaPipe on Coral

MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.

Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.

As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.

New Teachable Sorter project tutorial

A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator's ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.

The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.

Coral is now on TensorFlow Hub

Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are also now available from the TensorFlow Hub. On the site, you can find models featuring an Overlay interface, allowing you to test the model's performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features offered on Coral.ai as a resource and encourage you to keep sending us feedback at [email protected].

Time is Ticking: Clock Contest live with over $10,000 in prizes

Posted by Martin Aguinis, Flutter Marketing Lead

Take the Flutter Clock challenge banner

Flutter Clock is a contest offered by Google, with participation from the Flutter, Google Assistant, and Lenovo teams, that challenges you to build a Flutter clock face application for the Lenovo Smart Clock that is beautiful and innovative. Whether you’re a Flutter expert or novice, we invite you to join us and see what you can create. Over $10,000 in prizes will be awarded to the winners! Visit flutter.dev/clock to enter.

Flutter clock content partnership with Google Assistant and Lenovo

High Level Details

Date: All entries must be submitted by January 20, 2020 11:59 PM PST (GMT-8).

How to Submit: Entries will be collected on the form linked at flutter.dev/clock, but see the Official Rules for full details.

Winners: Submissions will be rated by Google and Flutter expert judges against the following rubric: visual beauty, code quality, novelty of idea, and overall execution.

Prizes: Potential prizes include a fully loaded iMac Pro, Lenovo Smart Display, and Lenovo Smart Clock. Also, all complete and valid submissions will receive a digital certificate of completion. In addition, some of the clock contest submissions might be integrated into the Lenovo Smart Clock's lineup of clock faces, or used as inspiration for future clock faces!

Results will be announced at our Mobile World Congress 2020 Keynote.

Good luck and have fun! Time is ticking…

Accelerating Japan’s AI startups in our new Tokyo Campus

Posted by Takuo Suzuki

Japan is well known as an epicenter of innovation and technology, and its startup ecosystem is no different. We’ve seen this firsthand in our work with startups such as Cinnamon, which uses artificial intelligence to remove repetitive tasks from office workers’ daily routines, allowing more work to get done by fewer people, faster.

This is why we are pleased to announce our second accelerator program, housed at the new Google for Startups Campus in the heart of Tokyo.

Accelerated with Google in Japan

The Google for Startups Accelerator (previously Launchpad Accelerator) is an intensive three-month program for high potential, AI-focused startups, utilizing the proven Launchpad foundational components and content.

Founders who successfully apply for the accelerator will have the opportunity to work on the technical problems facing their startup alongside relevant experts from Google and the industry. They will receive mentorship on these challenges, support on machine learning best practices, as well as connections to relevant teams from across Google to help grow their business.

In addition to mentorship and technical project support, the accelerator also includes deep dives and workshops focused on product design, customer acquisition, and leadership development for founders.

“We hope that by providing these founders with the tools, mentorship, and connections to prepare for the next step in their journey, it will, in turn, contribute to a stronger Japanese economy,” says Takuo Suzuki, Google Developers Regional Lead for Japan. “We are excited to work with such passionate startups in a new Google for Startups Campus, an environment built to foster startup growth, and to meet our next cohort in 2020.”

The program will run from February to May 2020, and applications are open until December 13, 2019.

Let the Kids Play: A young DevFest speaker and a DevFest organizer talk tech

DevFest banner
As over 400 community-led DevFest events continue to take place around the world, something is becoming clear: kids are taking over. We’re not kidding. Many young students are taking the stage this season to speak on topics ranging from machine learning to robotics, and people are loving it.

At the same time, these kids and the GDG (Google Developers Groups) community organizers of local DevFests are becoming great friends. We saw this recently at a DevFest in San Francisco, where Vikram Tiwari, a GDG Lead, and 11-year-old Aaron Ma, the youngest speaker at the event, had a great conversation on programming. 

We wanted to let you in on their conversation, so we asked Vikram to answer a few questions on coding, and then asked Aaron to respond to his answers. Check out their conversation below! 

What is your favorite language to code in? 



Vikram: I would have to say JavaScript - it used to be the language no one cared about, and then suddenly node.js changed the whole landscape. Nowadays, you can’t escape js, it’s everywhere from browsers to IoT and now even Machine Learning. The best part about using js is the flexibility it gives you. For example, it’s easy to make mistakes in js, but then if you want to prototype quickly, it doesn’t hold you back. And of course, you can’t forget about the vibrant node.js ecosystem, which is always striving for ease of use and speed. 


11-year-old Aaron Ma

Aaron: Open source is definitely the move! Especially open source competitions because they’re super exciting, let me see where I need to improve, and let me test if I’ve mastered a field of study. I also like to contribute or create my own open-source projects so I can grow as an open-source minded developer. Right now, I am the youngest contributor to Google’s TensorFlow, so to all the other kids out there reading this...come join me!

Do you like jumping right into coding or thinking through every line before you write?  


Vikram Tiwari, GDG lead
Vikram: I do like to think about the problem beforehand. However, if the problem has already been distilled down, then I like to get right to execution. In this case, I generally start with writing a bunch of pseudo functions, mocking the inputs and outputs of those functions, connecting them together, and then finally writing the actual logic. This approach generally helps me with context switching in a sense that I can stop working on that specific problem at any point and pick it back up from the same position when I get back to it.



11-year-old Aaron Ma

Aaron: I like how you think! If someone has already implemented the problem and packaged it, I would try to get right to the deployment process. But if no one has implemented the problem, I would first start with writing some pseudocode, and then slowly convert the pseudocode into actual code that works.

What is your favorite part of the DevFest community?


Vikram Tiwari, GDG lead

Vikram: That DevFest is a home for all developers, from all walks of life, with all kinds of ideas. Yes, this family loves building your tech skills, but it also loves helping you break through any social barriers you may face. From feeling more comfortable around people to feeling more confident with your code, this community wants to help you do it all.

11-year-old Aaron Ma
Aaron: We are a DevFamily! ❤️ I couldn’t agree more. My favorite part about DevFest is how this community can inspire. We, as DevFest developers, have the chance to change how we all think about CS every time we get together. From students like myself to long time experts, there is such an open and positive exchange of ideas taking place here - it’s so exciting and always makes me smile.

Want to join a conversation like this one? Respond to the questions yourself with #DevFest, or find a DevFest near you at devfest.withgoogle.com.