Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Flutter Interact – December 11 – create beautiful apps

Posted by Martin Aguinis, Flutter Marketing Lead

Summary: Flutter Interact is happening on December 11th. Sign up here for our global livestream and watch it at g.co/FlutterInteract.
Google’s conference focusing on beautiful design and apps, Flutter Interact, is streaming worldwide on December 11. Flutter Interact is a day dedicated to creation and collaboration. Whether you are a web developer, mobile developer, front-end engineer, or designer, this is a great opportunity to hear the latest from Google.
This one-day event has several talks focused on different topics regarding development and design. Speakers include Matias Duarte, VP of Google Design; Tim Sneath, Group PM for Flutter and Dart; and Grant Skinner, CEO, GSkinner, Inc.

What to expect at Flutter Interact


Flutter Interact will focus on creating eye-catching experiences across devices. We’ll showcase the latest from Google Design and from Flutter, Google’s free and open source UI toolkit for building beautiful, natively compiled applications for mobile, web, and desktop from a single codebase. This event is tailored to a global audience, with a worldwide livestream, hundreds of viewing parties, and the opportunity to ask questions that will be answered live at the event.




It will include content and announcements from the Material Design and Flutter teams, partners, and other companies.

Tune in to the livestream


Go to g.co/FlutterInteract and sign up for livestream updates. The event will be broadcast on the website on Dec 11, with a keynote starting at 10:00 a.m. EST (GMT-5).
You can also add this event directly to your Google Calendar.

Join a local viewing party


People and organizations all over the world are hosting over 450 free viewing parties to watch and discuss Flutter Interact. Find one happening near you.

Get Involved with #AskFlutter and #FlutterInteract


Flutter Interact is geared toward our online audience. There are two main ways to get involved.
  • #FlutterInteract: the official event hashtag. We will have a social wall that constantly shows tweets tagged #FlutterInteract, both on site and on our livestream. Make sure to tweet your pictures, comments, videos, and thoughts while you experience Flutter Interact.
  • #AskFlutter: our team will be on site, live, answering questions in real time. Tweet your questions and comments with the #AskFlutter hashtag to connect with the Flutter team (and the open source community) and get your questions answered. Your tweet may also appear on the global livestream during the event.





We look forward to experiencing Flutter Interact with you on December 11th. In the meantime, follow us on Twitter at @FlutterDev and get started with Flutter at flutter.dev.

DevKids: An inside look at the kids of DevFest

After Aaron Ma, an 11-year-old DevFest speaker, recently gave his tips on coding, people kept asking us, “so what are the other kids of DevFest working on?” In response, we want to show you how these incredible kids, or DevKids as we call them, are spreading their ideas at DevFest events around the world.


Below you will find the stories of DevKids from Morocco to Toronto, who have spoken on topics ranging from robotics to augmented reality. We hope you enjoy!
Ider, an 11-year-old developer from Morocco

Ider, an 11-year-old developer from Morocco, has a passion for Python and is not afraid to use it. With an incredibly advanced understanding of machine learning and augmented reality, he was asked to speak at DevFest Agadir on what the future holds for kids interested in programming.

Ider’s presentation, titled The Talk of The Next Generation, focused on how kids can find their passion for computer science and start building the future they one day hope to call reality.


Selin, a 13-year-old developer from Istanbul

Selin, a 13-year-old developer from Istanbul who was named the European Digital Girl of the Year by AdaAwards, joined the DevFest family last season. Recently, at a DevFest event in Istanbul, she told the story of how she became fascinated with robotics through a presentation titled, My Journey of Building Robots. With a passion for Python, Java, and Ruby, she explained how she is using her skills to build a robotic guide dog for the blind. She hopes that with the help of technology, she can open up a new, more accessible world for those with disabilities.






Radostin, a 13-year-old programmer from Bulgaria

Radostin, a 13-year-old programmer from Bulgaria, joined the DevFest family last season as a speaker and is now a core member of the DevFest Organizing Team. Recently, he created an app for the team that gathers feedback on different DevFest events. 

Previously, he gave a talk at DevFest Sofia on how he built an app that teaches people to play chess on Google Assistant. The young developer also spoke of how his aunt introduced him to coding and how watching her program inspired him to learn Java, Kotlin, C#, Python, Dart, and C++. He ended his presentation by recounting long nights spent watching YouTube videos, in search of anything he could get his hands on to learn. Radostin has inspired his DevFest family to believe they can learn anything, anywhere, at any time.




Artash (12 years old) and Arushi (9 years old), a brother-sister programming team from Canada

Artash (12 years old) and Arushi (9 years old) are a brother-sister programming team from Canada. At DevFest Toronto, they showcased their very own facial-recognition robot, which uses machine learning to detect facial emotions. Their presentation was complete with live demonstrations in which their robot analyzed fellow DevFest speakers and gave physical responses to their emotions. The two up-and-coming programmers also described how they went about creating their own ML algorithm to build the robot.

What inspired them to start such a project? Space travel. Artash and Arushi believe that as astronauts embark on longer space missions, it’s important to build tools that can monitor their mental health. One day, they hope their robot will accompany astronauts on the first trip to Mars.





Inspired by these awesome kids? Want to share your own ideas with a welcoming community? Then find a DevFest near you, at devfest.withgoogle.com.

Updates from Coral: Mendel Linux 4.0 and much more!

Posted by Carlos Mendonça (Product Manager), Coral Team

Last month, we announced that Coral graduated out of beta, into a wider, global release. Today, we're announcing the next version of Mendel Linux (4.0 release Day) for the Coral Dev Board and SoM, as well as a number of other exciting updates.

We have made significant updates to improve performance and stability. Mendel Linux 4.0 release Day is based on Debian 10 Buster and includes upgraded GStreamer pipelines and support for Python 3.7, OpenCV, and OpenCL. The Linux kernel has also been updated to version 4.14 and U-Boot to version 2017.03.3.

We’ve also made it possible to use the Dev Board's GPU to convert YUV to RGB pixel data at up to 130 frames per second on 1080p resolution, which is one to two orders of magnitude faster than on Mendel Linux 3.0 release Chef. These changes make it possible to run inferences with YUV-producing sources such as cameras and hardware video decoders.

To upgrade your Dev Board or SoM, follow our guide to flash a new system image.

MediaPipe on Coral

MediaPipe is an open-source, cross-platform framework for building multi-modal machine learning perception pipelines that can process streaming data like video and audio. For example, you can use MediaPipe to run on-device machine learning models and process video from a camera to detect, track and visualize hand landmarks in real-time.

Developers and researchers can prototype their real-time perception use cases starting with the creation of the MediaPipe graph on desktop. Then they can quickly convert and deploy that same graph to the Coral Dev Board, where the quantized TensorFlow Lite model will be accelerated by the Edge TPU.
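To get a feel for what desktop prototyping looks like, here is a minimal sketch, assuming MediaPipe's Python package and OpenCV are installed. It uses MediaPipe's prebuilt hand-tracking solution rather than a custom graph; the Coral samples themselves are built and run as C++ graphs, not through Python.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("MediaPipe hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

Once a pipeline behaves as expected on desktop, the corresponding graph can be rebuilt for the Coral Dev Board, as in the samples described below.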

As part of this first release, MediaPipe is making available new experimental samples for both object and face detection, with support for the Coral Dev Board and SoM. The source code and instructions for compiling and running each sample are available on GitHub and on the MediaPipe documentation site.

New Teachable Sorter project tutorial


A new Teachable Sorter tutorial is now available. The Teachable Sorter is a physical sorting machine that combines the Coral USB Accelerator's ability to perform very low latency inference with an ML model that can be trained to rapidly recognize and sort different objects as they fall through the air. It leverages Google’s new Teachable Machine 2.0, a web application that makes it easy for anyone to quickly train a model in a fun, hands-on way.

The tutorial walks through how to build the free-fall sorter, which separates marshmallows from cereal and can be trained using Teachable Machine.

Coral is now on TensorFlow Hub

Earlier this month, the TensorFlow team announced a new version of TensorFlow Hub, a central repository of pre-trained models. With this update, the interface has been improved with a fresh landing page and search experience. Pre-trained Coral models compiled for the Edge TPU continue to be available on our Coral site, but a select few are now also available on TensorFlow Hub. There, you can find models featuring an Overlay interface, allowing you to test the model's performance against a custom set of images right from the browser. Check out the experience for MobileNet v1 and MobileNet v2.
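As an illustration of how a hub-hosted MobileNet can be pulled into your own code, here is a minimal sketch using TensorFlow and the tensorflow_hub Keras layer. The module handle and the random test image are placeholders; substitute the model you find on tfhub.dev. The Edge TPU-compiled variants on coral.ai are deployed through the Edge TPU runtime rather than this path.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative module handle; browse tfhub.dev for the exact MobileNet variant you want.
MODULE_HANDLE = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"

model = tf.keras.Sequential([
    hub.KerasLayer(MODULE_HANDLE, input_shape=(224, 224, 3)),
])

# Placeholder input: one 224x224 RGB image with values scaled to [0, 1].
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
logits = model(image)
print("Predicted ImageNet class index:", int(np.argmax(logits, axis=-1)[0]))
```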

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, visit the new Coral partnerships page. We hope you’ll use the new features offered on Coral.ai as a resource and encourage you to keep sending us feedback at coral-support@google.com.

Time is Ticking: Clock Contest live with over $10,000 in prizes

Posted by Martin Aguinis, Flutter Marketing Lead

Flutter Clock is a contest offered by Google, with participation from the Flutter, Google Assistant, and Lenovo teams, that challenges you to build a beautiful and innovative Flutter clock face application for the Lenovo Smart Clock. Whether you’re a Flutter expert or novice, we invite you to join us and see what you can create. Over $10,000 in prizes will be awarded to the winners! Visit flutter.dev/clock to enter.


High Level Details

Date: All entries must be submitted by January 20, 2020 11:59 PM PST (GMT-8).

How to Submit: Entries will be collected on the form linked at flutter.dev/clock, but see the Official Rules for full details.

Winners: Submissions will be rated by Google and Flutter expert judges against the following rubric: visual beauty, code quality, novelty of idea, and overall execution.

Prizes: Potential prizes include a fully loaded iMac Pro, Lenovo Smart Display, and Lenovo Smart Clock. Also, all complete and valid submissions will receive a digital certificate of completion. In addition, some of the clock contest submissions might be integrated into the Lenovo Smart Clock's lineup of clock faces, or used as inspiration for future clock faces!

Results will be announced at our Mobile World Congress 2020 Keynote.

Good luck and have fun! Time is ticking…

Accelerating Japan’s AI startups in our new Tokyo Campus

Posted by Takuo Suzuki

Japan is well known as an epicenter of innovation and technology, and its startup ecosystem is no different. We’ve seen this first hand from our work with startups such as Cinnamon, which uses artificial intelligence to remove repetitive tasks from office workers’ daily routines, allowing more work to get done by fewer people, faster.

This is why we are pleased to announce our second accelerator program, housed at the new Google for Startups Campus in the heart of Tokyo. The Google for Startups Accelerator (previously Launchpad Accelerator) is an intensive three-month program for high-potential, AI-focused startups, utilizing the proven Launchpad foundational components and content.

Founders who successfully apply for the accelerator will have the opportunity to work on the technical problems facing their startup alongside relevant experts from Google and the industry. They will receive mentorship on these challenges, support on machine learning best practices, as well as connections to relevant teams from across Google to help grow their business.

In addition to mentorship and technical project support, the accelerator also includes deep dives and workshops focused on product design, customer acquisition, and leadership development for founders.

“We hope that by providing these founders with the tools, mentorship, and connections to prepare for the next step in their journey, it will, in turn, contribute to a stronger Japanese economy,” says Takuo Suzuki, Google Developers Regional Lead for Japan. “We are excited to work with such passionate startups in a new Google for Startups Campus, an environment built to foster startup growth, and to meet our next cohort in 2020.”

The program will run from February to May 2020, and applications are open until December 13, 2019.

Let the Kids Play: A young DevFest speaker and a DevFest organizer talk tech

As over 400 community-led DevFest events continue to take place around the world, something is becoming clear: kids are taking over. We’re not kidding. Many young students are taking the stage this season to speak on topics ranging from machine learning to robotics, and people are loving it.

At the same time, these kids and the GDG (Google Developers Groups) community organizers of local DevFests are becoming great friends. We saw this recently at a DevFest in San Francisco, where Vikram Tiwari, a GDG Lead, and 11-year-old Aaron Ma, the youngest speaker at the event, had a great conversation on programming. 

We wanted to let you in on their conversation, so we asked Vikram to answer a few questions on coding, and then asked Aaron to respond to his answers. Check out their conversation below! 

What is your favorite language to code in? 



Vikram: I would have to say JavaScript. It used to be the language no one cared about, and then suddenly node.js changed the whole landscape. Nowadays you can’t escape js; it’s everywhere, from browsers to IoT and now even machine learning. The best part about using js is the flexibility it gives you. For example, it’s easy to make mistakes in js, but if you want to prototype quickly, it doesn’t hold you back. And of course, you can’t forget about the vibrant node.js ecosystem, which is always striving for ease of use and speed.



Aaron: Open source is definitely the move! Especially open source competitions because they’re super exciting, let me see where I need to improve, and let me test if I’ve mastered a field of study. I also like to contribute or create my own open-source projects so I can grow as an open-source minded developer. Right now, I am the youngest contributor to Google’s TensorFlow, so to all the other kids out there reading this...come join me!




Do you like jumping right into coding or thinking through every line before you write?  


Vikram: I do like to think about the problem beforehand. However, if the problem has already been distilled down, then I like to get right to execution. In this case, I generally start with writing a bunch of pseudo functions, mocking the inputs and outputs of those functions, connecting them together, and then finally writing the actual logic. This approach generally helps me with context switching in a sense that I can stop working on that specific problem at any point and pick it back up from the same position when I get back to it.




Aaron: I like how you think! 😝If someone has already implemented the problem and packaged it, I would try to get right to the deployment process. But if no one has implemented the problem, I would first start with writing some pseudocode, and then slowly convert the pseudocode into actual code that works.








What is your favorite part of the DevFest community?



Vikram: That DevFest is a home for all developers, from all walks of life, with all kinds of ideas. Yes, this family loves building your tech skills, but it also loves helping you break through any social barriers you may face. From feeling more comfortable around people to feeling more confident with your code, this community wants to help you do it all.





Aaron: We are a DevFamily! ❤️ I couldn’t agree more. My favorite part about DevFest is how this community can inspire. We, as DevFest developers, have the chance to change how we all think about CS every time we get together. From students like myself to long-time experts, there is such an open and positive exchange of ideas taking place here. It’s so exciting and always makes me smile. 😊





Want to join a conversation like this one? Respond to the questions yourself with #DevFest, or find a DevFest near you at devfest.withgoogle.com.

Using machine learning to tackle Fall Armyworm

Guest post by Nsubuga Hassan, CEO at Hansu Mobile and Intelligent Innovations, Android and Machine Learning Developer

In 2016 Fall armyworm (FAW) was first reported in Africa. It has devastated maize crops across the continent.

Research shows the potential impact of FAW on continent-wide maize yield lies between 8.3 and 20.6 million tonnes per year (of a total expected production of 39m tonnes per year), with losses lying between US$2,48m and US$6,19m per year (of a US$11,59m annual expected value). The impact of FAW is far-reaching, and it is now reported in many countries around the world.

Agriculture is the backbone of Uganda’s economy, employing 70% of the population. It contributes half of Uganda’s export earnings and a quarter of the country’s gross domestic product (GDP). Fall armyworm poses a great threat to our livelihoods.

We are a small group of like-minded developers living and working in Uganda. Most of our relatives grow maize, so the impact of the worm was very close to home. We really felt like we needed to do something about it.

The vast damage and yield losses in maize production due to FAW got the attention of global organizations, who are calling for innovators to help. It is the perfect time to apply machine learning. Our goal is to build an intelligent agent to help local farmers fight this pest in order to increase our food security.

Based on a Machine Learning Crash Course, our Google Developer Group (GDG) in Mbale hosted some study jams in May 2018, alongside several other code labs. This is where we first got hands-on experience using TensorFlow, from which the foundations were laid for the Farmers Companion app. Finally, we felt as though an intelligent solution to help farmers had been conceived.

Equipped with this knowledge and belief, the team embarked on collecting training data from nearby fields. This was done using a smartphone to take images, with the help of some GDG Mbale members. With farmers miles from town, and many fields inaccessible by road (not to mention the floods), this was not as simple as we had first hoped. To hinder us further, our smartphones were (and still are) the only hard drives we had, limiting the number of images and the amount of data we could capture in a day.

But we persisted! Once gathered, the images were sorted, one at a time, and categorized. With TensorFlow we re-trained a MobileNet, a technique known as transfer learning. We then used the TensorFlow Lite converter to generate a TensorFlow Lite FlatBuffer file, which we deployed in an Android app.

We started with about 3956 images, but our dataset is growing exponentially. We are actively collecting more and more data to improve our model’s accuracy. The improvements in TensorFlow, with the Keras high-level APIs, have really made our approach to deep learning easy and enjoyable, and we are now experimenting with TensorFlow 2.0.
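For readers who want to try the same approach, here is a minimal sketch of that retraining-and-conversion step using recent TensorFlow 2.x Keras APIs. The directory layout, class names, and hyperparameters are illustrative placeholders rather than the exact pipeline described above.

```python
import tensorflow as tf

# Placeholder dataset layout: images sorted into per-class folders,
# e.g. data/healthy/ and data/fall_armyworm/.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32)
# Apply MobileNetV2's expected preprocessing (scales pixels to [-1, 1]).
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.mobilenet_v2.preprocess_input(x), y))

# Transfer learning: reuse MobileNetV2 ImageNet features, train only a small head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # healthy vs. fall armyworm
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert to a TensorFlow Lite FlatBuffer for the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("faw_model.tflite", "wb") as f:
    f.write(converter.convert())
```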

The app is simple for the user. Once installed, the user focuses the camera through the app, on a maize crop. Then an image frame is picked and, using TensorFlow Lite, the image frame is analysed to look for Fall armyworm damage. Depending on the results from this phase, a suggestion of a possible solution is given.
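On the device, that flow amounts to feeding each captured frame through a TensorFlow Lite interpreter. The app itself uses the TensorFlow Lite Android API; the Python sketch below shows the equivalent logic, with the model path and label names as placeholders. Whatever preprocessing was used at training time must be applied to the frame before invoking the interpreter.

```python
import numpy as np
import tensorflow as tf

# Placeholder model and labels; the real app bundles its own .tflite file.
interpreter = tf.lite.Interpreter(model_path="faw_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
LABELS = ["healthy", "fall_armyworm"]

def classify_frame(frame):
    """Classify one HxWx3 uint8 RGB camera frame."""
    height, width = input_details["shape"][1:3]
    x = tf.image.resize(frame[np.newaxis, ...], (height, width))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)  # scale to [-1, 1]
    interpreter.set_tensor(input_details["index"], x.numpy())
    interpreter.invoke()
    probs = interpreter.get_tensor(output_details["index"])[0]
    top = int(np.argmax(probs))
    return LABELS[top], float(probs[top])
```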

The app is available for download and is constantly undergoing updates, as we push for local farmers to adopt and use it. We strive to ensure a world with #ZeroHunger and believe technology can do a lot to help us achieve this.

We have so far been featured on a national TV station in Uganda, participated in #hackAgainstHunger, and presented at ‘The International Symposium on Agricultural Innovations’ for family farmers, organized by the Food and Agriculture Organization of the United Nations, where our solution was highlighted.

More recently, Google highlighted our work with this film:

We have embarked on scaling the solution to coffee and cassava diseases and will slowly be moving on to more. We have also introduced virtual reality to showcase good farming practices and training to farmers.

Our plan is to collect more data and to scale the solution to handle more pests and diseases. We are also shifting to cloud services and Firebase to improve and serve our model better despite the lack of resources. With improved hardware and greater localised understanding, there's huge scope for Machine Learning to make a difference in the fight against hunger.

The Go language turns 10: A Look at Go’s Growth in the Enterprise

Posted by Steve Francia, Go Team

The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.

November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.

The Gopher community will be celebrating Go’s 10th anniversary at conferences such as Gopherpalooza in Mountain View and KubeCon in San Diego, and dozens of meetups around the world.

In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go everyday for critical workloads.

New to Go?

Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.

Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.

Millions of Gophers!

Today, Go has more than a million users worldwide, ranging across industries, experience, and engineering disciplines. Go’s simple and expressive syntax, ease-of-use, formatting, and speed have helped it become one of the fastest growing languages—with a thriving open source community.

As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built on Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.

Go was first adopted to support large amounts of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and Mercado Libre. For many enterprises, Go has become their language of choice for building on the cloud.

An Example of Go In the Enterprise

One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem, improve cost efficiency, and speed up system response times.

MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this big framework with multiple layers soon ran into scalability issues.

Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.

“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”

Using Go allowed MercadoLibre to cut the number of servers it uses for this service to one-eighth the original number (from 32 servers down to four), and each server can now operate with less power (originally four CPU cores, now down to two CPU cores). With Go, the company eliminated 88 percent of its servers and cut CPU on the remaining ones in half, producing tremendous cost savings.

With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.

Today, roughly half of MercadoLibre's traffic is handled by Go applications.

"We really see eye-to-eye with the larger philosophy of the language," Kohan explains. "We love Go's simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production."

Visit go.dev to Learn More

We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.

Building off of that growth, we’re excited to announce go.dev, a new hub for Go developers.

There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.

MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.

You can read more about MercadoLibre’s success with Go in the full case study.

Google Pay Now Available on Stripe Checkout

Posted by Soc Sieng, Developer Advocate

Google Pay is now available on Stripe Checkout. Businesses with Stripe Checkout on their websites can now provide an optimized checkout experience to Google Pay users.

Google Pay is available directly from Stripe Checkout

Refer to Stripe’s Checkout documentation for more information.

Stripe merchants that aren’t using Stripe Checkout can integrate directly with Google Pay using the Google Pay Setup Guide.

About Google Pay

Google Pay is the fast, simple and secure way to pay on sites, in apps, and in stores using the payment options saved to your Google Account.

See Google Pay Developer documentation for information on additional integration options.

Open sourcing Google Cardboard

Posted by Jeffrey Chen, Product Manager, AR & VR

Five years ago, we launched Google Cardboard—a simple cardboard viewer that anyone can use to experience virtual reality (VR). From a giveaway at Google I/O to more than 15 million units worldwide, Cardboard has played an important role in introducing people to VR through experiences like YouTube and Expeditions. In many cases, it provided access to VR to people who otherwise couldn’t have afforded it.

With Cardboard and the Google VR software development kit (SDK), developers have created and distributed VR experiences across both Android and iOS devices, giving them the ability to reach millions of users. While we’ve seen overall usage of Cardboard decline over time and we’re no longer actively developing the Google VR SDK, we still see consistent usage around entertainment and education experiences, like YouTube and Expeditions, and want to ensure that Cardboard’s no-frills, accessible-to-everyone approach to VR remains available.

Today, we’re releasing the Cardboard open source project to let the developer community continue to build Cardboard experiences and add support to their apps for an ever increasing diversity of smartphone screen resolutions and configurations. We think that an open source model—with additional contributions from us—is the best way for developers to continue to build experiences for Cardboard. We’ve already seen success with this approach with our Cardboard Manufacturer Kit—an open source project to enable third-party manufacturers to design and build their own unique compatible VR viewers—and we’re excited to see where the developer community takes Cardboard in the future.

What's Included in the open source project

We're releasing libraries for developers to build their Cardboard apps for iOS and Android and render VR experiences on Cardboard viewers. The open source project provides APIs for head tracking, lens distortion rendering and input handling. We’ve also included an Android QR code library, so that apps can pair any Cardboard viewer without depending on the Cardboard app.

An open source model will enable the community to continue to improve Cardboard support and expand its capabilities, for example adding support for new smartphone display configurations and Cardboard viewers as they become available. We’ll continue to contribute to the Cardboard open source project by releasing new features, including an SDK package for Unity.

If you’re interested in learning how to develop with the Cardboard open source project, please see our developer documentation, or visit the Cardboard GitHub repo to access source code, build the project and download the latest release.