Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

3 ways local developer communities are staying connected virtually

Posted by JP Loughran, Google Developers

As the world continues to embrace remote opportunities, Google Developer Group (GDG) and Developer Student Club (DSC) communities have been working hard to support each other virtually – complete with online technical education and remote spaces to build local community connections. In particular, community groups in Sweden, Singapore, and throughout the Middle East and North Africa (MENA) have been creating resources to help developers find online employment, education, and engagement opportunities. Curious to find out more? Keep reading below.

1. Employment - Sweden

Community members from Google Developer Group West Sweden recently rolled up their sleeves to do what they do best: hack. From April 6th – 8th, the community worked with the Swedish Government to create “Hack the Crisis,” a virtual community event focused on designing, testing, and executing ideas in response to recent challenges.

The event included several project pitches and a group of judges to select winning proposals. One of the finalists, Remote + Gigs on Platsbanken, developed a plan to modernize the Swedish Government’s employment website in an effort to save businesses.

Specifically, the idea suggests updating the website’s interface so that job seekers can be easily matched with compatible remote work based on their preferences. A creative way to safely bring work to both people and businesses in need.

2. Education - Singapore

(On the left is the Singapore University of Technology and Design. On the right is a virtual model of the campus.)

Recently, a Google Developer Student Club at the Singapore University of Technology and Design (SUTD) built a virtual model of their college to host tours that were canceled in person. Specifically, the team of 40+ student developers came together to construct the model in Minecraft, creating an experience that allows visitors to freely and interactively explore the campus – just like they would in person.

The team also used their knowledge of Dialogflow to build a tour guide chatbot that answers questions from the virtual visitors.

The university has loved the virtual campus. On the first day it opened, roughly 200 visitors joined tours and over 1,000 users came to interact with the site.

3. Engagement - Middle East & North Africa

Recently, 200+ developer communities, 100 local experts, and 10+ Google Developer Experts have come together to host MENA Digital Days – a 32-week series of live video workshops providing training on everything from working from home to leadership to coding.

The series, which started at the end of March and will continue until the end of October, engages viewers with its live workshop style format and is published on a daily and weekly basis. With a unique theme each week, the series aims to provide learning opportunities for developers, women, students, and startups.

With such a broad range of participants from all over the world, MENA Digital Days is currently taking place in three languages: Arabic, French, and English. Catch up any time on past videos on the YouTube playlist or join a live workshop to get involved.

Community matters more than ever, so it’s impressive to see so many groups adapt so quickly to being digital-first. If you’re inspired by these stories, learn more about local developer communities hosting virtual events near you here.

Sip a cup of Java 11 for your Cloud Functions

Posted by Guillaume Laforge, Developer Advocate for Google Cloud

With the beta of the new Java 11 runtime for Google Cloud Functions, Java developers can now write their functions using the Java programming language (a language often used in enterprises) in addition to Node.js, Go, or Python. Cloud Functions allow you to run bits of code locally or in the cloud, without provisioning or managing servers: Deploy your code, and let the platform handle scaling up and down for you. Just focus on your code: handle incoming HTTP requests or respond to some cloud events, like messages coming from Cloud Pub/Sub or new files uploaded in Cloud Storage buckets.

In this article, we’ll focus on what functions look like, how you can write portable functions, and how to run and debug them locally or deploy them in the cloud or on-premises, thanks to the Functions Framework, an open source library that runs your functions. You will also learn about third-party frameworks you might already be familiar with that also let you create functions using common programming paradigms.

The shape of your functions

There are two types of functions: HTTP functions, and background functions. HTTP functions respond to incoming HTTP requests, whereas background functions react to cloud-related events.

The Java Functions Framework provides an API that you can use to author your functions, as well as an invoker which can be called to run your functions locally on your machine, or anywhere with a Java 11 environment.

To get started with this API, you will need to add a dependency in your build files. If you use Maven, add the following dependency tag in pom.xml:

<dependency>
  <groupId>com.google.cloud.functions</groupId>
  <artifactId>functions-framework-api</artifactId>
  <version>1.0.1</version>
  <scope>provided</scope>
</dependency>

If you are using Gradle, add this dependency declaration in build.gradle:

compileOnly("com.google.cloud.functions:functions-framework-api:1.0.1")

Responding to HTTP requests

A Java function that receives an incoming HTTP request implements the HttpFunction interface:

import com.google.cloud.functions.*;
import java.io.*;

public class Example implements HttpFunction {
  @Override
  public void service(HttpRequest request, HttpResponse response)
      throws IOException {
    var writer = response.getWriter();
    writer.write("Hello developers!");
  }
}

The service() method provides an HttpRequest and an HttpResponse object. From the request, you can get information about the HTTP headers, the payload body, or the request parameters. It’s also possible to handle multipart requests. With the response, you can set a status code or headers, define a body payload and a content-type.
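
As a quick illustration, here is a minimal sketch (not from the original post, so treat the details as an example) of a function that reads a query parameter and sets the status code and content type before writing the body; the "name" parameter is just a made-up example:

import com.google.cloud.functions.*;
import java.io.IOException;

public class GreetingFunction implements HttpFunction {
  @Override
  public void service(HttpRequest request, HttpResponse response)
      throws IOException {
    // Read an optional "name" query parameter, with a default fallback.
    var name = request.getFirstQueryParameter("name").orElse("developers");
    // Set the status code and content type before writing the body payload.
    response.setStatusCode(200);
    response.setContentType("text/plain");
    response.getWriter().write("Hello " + name + "!");
  }
}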

Responding to cloud events

Background functions respond to events coming from the cloud, like new Pub/Sub messages, Cloud Storage file updates, or new or updated data in Cloud Firestore. There are actually two ways to implement such functions, either by dealing with the JSON payloads representing those events, or by taking advantage of object marshalling thanks to the Gson library, which takes care of the parsing transparently for the developer.

With a RawBackgroundFunction, the responsibility is on you to handle the incoming cloud event’s JSON-encoded payload. You receive a JSON string, so you are free to parse it however you like, with the JSON parser of your choice:

import com.google.cloud.functions.Context;
import com.google.cloud.functions.RawBackgroundFunction;

public class RawFunction implements RawBackgroundFunction {
  @Override
  public void accept(String json, Context context) {
    ...
  }
}
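
For instance, a minimal sketch of such a raw function, assuming you choose Gson as your parser (added as a dependency) and that the event payload carries a "data" field, could look like this:

import com.google.cloud.functions.Context;
import com.google.cloud.functions.RawBackgroundFunction;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public class RawPubSubFunction implements RawBackgroundFunction {
  @Override
  public void accept(String json, Context context) {
    // Parse the raw JSON payload and log one of its fields.
    JsonObject payload = JsonParser.parseString(json).getAsJsonObject();
    System.out.println("Event data: " + payload.get("data"));
  }
}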

But you also have the option to write a BackgroundFunction which uses Gson for unmarshalling a JSON representation into a Java class (a POJO, Plain-Old-Java-Object) representing that payload. To that end, you have to provide the POJO as a generic argument:

import com.google.cloud.functions.Context;
import com.google.cloud.functions.BackgroundFunction;

public class PubSubFunction implements BackgroundFunction<PubSubMsg> {
  @Override
  public void accept(PubSubMsg msg, Context context) {
    System.out.println("Received message ID: " + msg.messageId);
  }
}

public class PubSubMsg {
  String data;
  Map<String, String> attributes;
  String messageId;
  String publishTime;
}

The Context parameter contains various metadata fields like timestamps, the type of events, and other attributes.
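
As an illustration, here is a hedged sketch (reusing the PubSubMsg POJO above) that logs some of that metadata; the accessor names follow the Functions Framework’s Context interface:

import com.google.cloud.functions.BackgroundFunction;
import com.google.cloud.functions.Context;

public class LoggingPubSubFunction implements BackgroundFunction<PubSubMsg> {
  @Override
  public void accept(PubSubMsg msg, Context context) {
    // Log a few metadata fields delivered alongside the event payload.
    System.out.println("Event ID:   " + context.eventId());
    System.out.println("Event type: " + context.eventType());
    System.out.println("Timestamp:  " + context.timestamp());
  }
}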

Which type of background function should you use? It depends on how much control you need over the incoming payload, and whether Gson’s unmarshalling fits your needs. But having the unmarshalling handled by the framework definitely streamlines the writing of your function.

Running your function locally

Coding is always great, but seeing your code actually running is even more rewarding. The Functions Framework comes with the API we used above, but also with an invoker tool that you can use to run functions locally. A direct, local feedback loop on your own computer is much more comfortable, and much better for developer productivity, than deploying to the cloud for every change you make to your code.

With Maven

If you’re building your functions with Maven, you can install the Function Maven plugin in your pom.xml:

<plugin>
  <groupId>com.google.cloud.functions</groupId>
  <artifactId>function-maven-plugin</artifactId>
  <version>0.9.2</version>
  <configuration>
    <functionTarget>com.example.Example</functionTarget>
  </configuration>
</plugin>

On the command-line, you can then run:

$ mvn function:run

You can pass extra parameters like --target to define a different function to run (in case your project contains several functions), --port to specify the port to listen on, or --classpath to explicitly set the classpath needed to run the function. These are the parameters of the underlying Invoker class. However, to set these parameters via the Maven plugin, you’ll have to pass them as properties, such as -Drun.functionTarget=com.example.Example and -Drun.port=8080.
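
For example, assuming your project also contains a com.example.HelloWorld function, a run that overrides both properties might look like this:

$ mvn function:run -Drun.functionTarget=com.example.HelloWorld -Drun.port=8080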

With Gradle

With Gradle, there is no dedicated plugin, but it’s easy to configure build.gradle to let you run functions.

First, define a dedicated configuration for the invoker:

configurations {
  invoker
}

In the dependencies, add the Invoker library:

dependencies {
  invoker 'com.google.cloud.functions.invoker:java-function-invoker:1.0.0-beta1'
}

And then, create a new task to run the Invoker:

tasks.register("runFunction", JavaExec) {
  main = 'com.google.cloud.functions.invoker.runner.Invoker'
  classpath(configurations.invoker)
  inputs.files(configurations.runtimeClasspath,
               sourceSets.main.output)
  args('--target',
       project.findProperty('runFunction.target') ?: 'com.example.Example',
       '--port',
       project.findProperty('runFunction.port') ?: 8080
  )
  doFirst {
    args('--classpath', files(configurations.runtimeClasspath,
                              sourceSets.main.output).asPath)
  }
}

By default, the above launches the function com.example.Example on port 8080, but you can override those on the command-line, when running gradle or the gradle wrapper:

$ gradle runFunction -PrunFunction.target=com.example.HelloWorld \
-PrunFunction.port=8080

Running elsewhere, making your functions portable

What’s interesting about the Functions Framework is that you are not tied to the Cloud Functions platform for deploying your functions. As long as you can run the Invoker class in your target environment, you can run your functions on Cloud Run, on Google Kubernetes Engine, in Knative environments, on other clouds where Java is available, or more generally on any on-premises server. This makes your functions highly portable between environments. But let’s have a closer look at deployment now.

Deploying your functions

You can deploy functions with the Maven plugin as well, with various parameters to tweak for defining regions, memory size, etc. But here, we’ll focus on using the Cloud SDK, with its gcloud command-line tool, to deploy our functions.

For example, to deploy an HTTP function, you would type:

$ gcloud functions deploy exampleFn \
    --region europe-west1 \
    --trigger-http \
    --allow-unauthenticated \
    --runtime java11 \
    --entry-point com.example.Example \
    --memory 512MB

For a background function that would be notified of new messages on a Pub/Sub topic, you would launch:

$ gcloud functions deploy exampleFn \
    --region europe-west1 \
    --trigger-topic msg-topic \
    --runtime java11 \
    --entry-point com.example.PubSubFunction \
    --memory 512MB

Note that deployments come in two flavors as well, although the above commands are the same: functions are deployed from source with a pom.xml and built in Google Cloud, but when using a build tool other than Maven, you can also use the same command to deploy a pre-compiled JAR that contains your function implementation. Of course, you’ll have to create that JAR first.
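
As a rough sketch (the deploy/ directory name is just an example), assuming your build produced a self-contained JAR and that JAR is the only file in that directory, the deployment could look like:

$ gcloud functions deploy exampleFn \
    --region europe-west1 \
    --trigger-http \
    --allow-unauthenticated \
    --runtime java11 \
    --entry-point com.example.Example \
    --source deploy/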

What about other languages and frameworks?

So far, we looked at Java and the plain Functions Framework, but you can definitely use alternative JVM languages such as Apache Groovy, Kotlin, or Scala, and third-party frameworks that integrate with Cloud Functions like Micronaut and Spring Boot!

Pretty Groovy functions

Without covering all those combinations, let’s have a look at two examples. What would an HTTP function look like in Groovy?

The first step will be to add Apache Groovy as a dependency in your pom.xml:

<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy-all</artifactId>
  <version>3.0.4</version>
  <type>pom</type>
</dependency>

You will also need the GMaven compiler plugin to compile the Groovy code:

<plugin>
  <groupId>org.codehaus.gmavenplus</groupId>
  <artifactId>gmavenplus-plugin</artifactId>
  <version>1.9.0</version>
  <executions>
    <execution>
      <goals>
        <goal>addSources</goal>
        <goal>addTestSources</goal>
        <goal>compile</goal>
        <goal>compileTests</goal>
      </goals>
    </execution>
  </executions>
</plugin>

When writing the function code, just use Groovy instead of Java:

import com.google.cloud.functions.*

class HelloWorldFunction implements HttpFunction {
  void service(HttpRequest request, HttpResponse response) {
    response.writer.write "Hello Groovy World!"
  }
}

The same explanations regarding running your function locally or deploying it still apply: the Java platform is pretty open to alternative languages too! And the Cloud Functions builder will happily build your Groovy code in the cloud, since Maven lets you compile this code thanks to the Groovy library.

Micronaut functions

Third-party frameworks also offer a dedicated Cloud Functions integration. Let’s have a look at Micronaut.

Micronaut is a “modern, JVM-based, full-stack framework for building modular, easily testable microservice and serverless applications”, as explained on its website. It supports the notion of serverless functions, web apps and microservices, and has a dedicated integration for Google Cloud Functions.

In addition to being a very efficient framework with super fast startup times (which is important, to avoid long cold starts on serverless services), what’s interesting about using Micronaut is that you can use Micronaut’s own programming model, including Dependency Injection, annotation-driven bean declaration, etc.

For HTTP functions, you can use the framework’s own @Controller / @Get annotations instead of the Functions Framework interfaces. So for example, a Micronaut HTTP function would look like:

import io.micronaut.http.annotation.*;

@Controller("/hello")
public class HelloController {

@Get(uri="/", produces="text/plain")
public String index() {
return "Example Response";
}
}

This is the standard way in Micronaut to define a Web microservice, but it transparently builds upon the Functions Framework to run this service as a Cloud Function. Furthermore, this programming model offered by Micronaut is portable across other environments, since Micronaut runs in many different contexts.

Last but not least, if you are using the Micronaut Launch project (hosted on Cloud Run), which allows you to scaffold new projects easily (from the command-line or from a nice UI), you can opt for adding the google-cloud-function support module, and even choose your favorite language, build tool, or testing framework:

Micronaut Launch

Be sure to check out the documentation for the Micronaut Cloud Functions support, and Spring Cloud Function support.

What’s next?

Now it’s your turn to try Cloud Functions for Java 11 today, with your favorite JVM language or third-party frameworks. Read the getting started guide, and try this for free with the Google Cloud Platform free trial. Explore Cloud Functions’ features and use cases, take a look at the quickstarts, and perhaps even contribute to the open source Functions Framework. We’re looking forward to seeing what functions you’re going to build on this platform!

Strengthen your cloud skills with Google Cloud training

Posted by Yuri Grinshteyn, Site Reliability Engineer

We know many of you are looking for ways to keep learning and connecting with other developers virtually right now, and we want to help. Below you can check out our top on-demand Google Cloud training webinars and resources, where you can take hands-on labs and learn more, at no charge, about everything from the basics of Google Cloud to more advanced topics like building robust cloud architecture.

Starting with the basics

You can tune in from May 19-20 to watch instructors in Cloud OnBoard break down what it takes to migrate to Google Cloud and explain the basics of the Google Kubernetes Engine, a managed, production-ready environment for running containerized applications. After the sessions, you’ll have a chance to test what you’ve learned by participating in hands-on labs and challenges with the Cloud Hero Online Challenge. Missed the live recording on May 19-20? No worries! You can view it on-demand starting May 21 and still participate in hands-on labs.

Gaining more hands-on experience and a deeper understanding of Google Cloud products

Ready to gain more hands-on cloud experience and deeper product knowledge? We have webinars where Googlers will walk you through more hands-on labs on Qwiklabs and share product tips and tricks.

If you’re interested in big data and machine learning, you can do a lab I recorded in the Baseline: Data, ML, AI webinar to get more experience using tools like BigQuery, Cloud Speech API, and Cloud ML Engine. You can also learn how to use BigQuery and other Google tools to draw insights and visualize data from the public health datasets Google released to support COVID-19 research in our Data science for public health: Working with public COVID-19 datasets webinar.

Getting role-based training and preparing for certification

For those of you who are already cloud professionals, our top webinars this year so far are Professional Cloud DevOps and Professional Cloud Architect.

You can learn how to improve the way you build software delivery pipelines, deploy and monitor services, and manage incidents in the DevOps webinar. The Cloud Architect webinar will discuss how to ensure you’re designing, developing, and managing effective solutions.

Both webinars will also help prepare you to earn Google Cloud certifications. If you’d like to learn more about the certification program, you can attend our on-demand webinar Why Certify? Everything to know about Google Cloud Certification.

More no-cost resources to check out

We’re also offering our extensive catalog of Google Cloud on-demand training courses on Pluralsight and Qwiklabs at no cost when you sign up by May 31, 2020¹. You can learn how to prototype an app, build prediction models, and more—at your own pace by registering here.

We hope these webinars and resources help you continue learning new skills and stay connected with the broader Google developer community.

1. Your 30 days of access to these Google Cloud training courses at no cost starts when you enroll in your courses. These offers are valid until May 31, 2020. After your 30 days, you will incur charges on Pluralsight; for Qwiklabs, you will need to purchase credits to continue taking labs.

Building a more resilient world together

Posted by Billy Rutledge, Director of the Coral team

UNDP Hackster.io COVID19 Detect Protect Poster

Recently, we’ve seen communities respond to the challenges of the coronavirus pandemic by using technology in new ways to effect positive change. It’s increasingly important that our systems are able to adapt to new contexts, handle disruptions, and remain efficient.

At Coral, we believe intelligence at the edge is a key ingredient towards building a more resilient future. By making the latest machine learning tools easy-to-use and accessible, innovators can collaborate to create solutions that are most needed in their communities. Developers are already using Coral to build solutions that can understand and react in real-time, while maintaining privacy for everyone present.

Helping our communities stay safe, together

As mandatory isolation measures begin to relax, compliance with safe social distancing protocol has become a topic of primary concern for experts across the globe. Businesses and individuals have been stepping up to find ways to use technology to help reduce the risk and spread. Many efforts are employing the benefits of edge AI—here are a few early stage examples that have inspired us.

woman and child crossing the street

In Belgium, engineers at Edgise recently used Coral to develop an occupancy monitor to aid businesses in managing capacity. With the privacy preserving properties of edge AI, businesses can anonymously count how many customers enter and exit a space, signaling when the area is too full.

A research group at the Sathyabama Institute of Science and Technology in India are using Coral to develop a wearable device to serve as a COVID-19 cough counter and health monitor, allowing medical professionals to better care for low risk patients in an outpatient capacity. Coral's Edge TPU enables biometric data to be processed efficiently, without draining the limited power resources available in wearable devices.

All across the US, hospitals are seeking solutions to ensure adherence to hygiene policy amongst hospital staff. In one example, a device incorporates the compact, affordable and offline benefits of the Coral modules to aid in handwashing practices at numerous stations throughout a facility.

And around the world, members of the PyImageSearch community are exploring how to train a COVID-19: Face Mask Detector model using TensorFlow that can be used to identify whether people are wearing a mask. Open source frameworks can empower anyone to develop solutions, and with Coral components we can help bring those benefits to everyone.

Eliciting a global response

In an effort to rally greater community involvement, Coral has joined The United Nations Development Programme and Hackster.io, as a sponsor of the COVID-19 Detect and Protect Challenge. The initiative calls on developers to build affordable and reproducible solutions that support response efforts in developing countries. All ideas are welcome—whether they use ML or not—and we encourage you to participate.

To make edge ML capabilities even easier to integrate, we’re also announcing a price reduction for the Coral products widely used for experimentation and prototyping. Our Dev Board will now be offered at $129.99, the USB Accelerator at $59.99, the Camera Module at $19.99, and the Enviro Board at $14.99. Additionally, we are introducing the USB Accelerator into 10 new markets: Ghana, Thailand, Singapore, Oman, Philippines, Indonesia, Kenya, Malaysia, Israel, and Vietnam. For more details, visit Coral.ai/products.

We’re excited to see the solutions developers will bring forward with Coral. And as always, please keep sending us feedback at [email protected]

Android 11: Beta Plans

Posted by Dave Burke, VP of Engineering

Android 11 Dial logo

When we started planning Android 11, we didn’t expect the kinds of changes that would find their way to all of us, across nearly every region in the world. These have challenged us to stay flexible and find new ways to work together, especially with our developer community.

To help us meet those challenges we’re announcing an update to our release timeline. We’re bringing you a fourth Developer Preview today and moving Beta 1 to June 3. And to tell you all about the release and give you the technical resources you need, we’re hosting an online developer event that we’re calling #Android11: the Beta Launch Show.

Join us for #Android11: The Beta Launch Show

While the circumstances prevent us from joining together with you in person at Shoreline Amphitheatre for Google I/O, our annual developer conference, we’re organizing an online event where we can share with you all the best of what’s new in Android. We hope you’ll join us for #Android11: The Beta Launch Show, your opportunity to find out what’s new in Android from the people who build Android. Hosted by me, Dave Burke, the show kicks off at 11AM ET on June 3, and we’ll be wrapping it up with a post-show live Q&A. Tweet your #AskAndroid questions to get them answered live!

Later that day, we’ll be sharing a number of talks on a range of topics from Jetpack Compose to Android Studio and Google Play–talks that we had originally planned for Google I/O–to help you take advantage of the latest in Android development. You can sign-up to receive updates on this digital event at developer.android.com/android11.

Android 11 schedule update

Our industry moves really fast, and we know that many of our device-maker partners are counting on us to help them bring Android 11 to new consumer devices later this year. We also know that many of you have been working to prioritize early app and game testing on Android 11, based in part on our Platform Stability and other milestones. At the same time, all of us are collaborating remotely and prioritizing the well-being of our families, friends and colleagues.

So to help us meet the needs of the ecosystem while being mindful of the impacts on our developers and partners, we’ve decided to add a bit of extra time to the Android 11 release schedule. We’re moving Beta 1 and all subsequent milestones out by about a month, which gives everyone a bit more room but keeps us on track for the final release later in Q3.

Here are some of the key changes in the new schedule:

  • We’re releasing a fourth Developer Preview today for testing and feedback.
  • Beta 1 release moves to June 3. We’ll include the final SDK and NDK APIs with this release and open up Google Play publishing for apps targeting Android 11.
  • Beta 2 moves to July. We’ll reach Platform Stability with this release.
  • Beta 3 moves to August and will include release candidate builds for final testing.

By bringing you the final APIs on the original timeline while shifting the other dates, we’re giving you an extra month to compile and test with the final APIs, while also ensuring that you have the same amount of time between Platform Stability and the final release, planned for later in Q3. Here’s a look at the timeline.

Android 11 timeline

You can read more about what the new timeline means to app developers in the preview program overview.

App compatibility

The schedule change adds some extra time for you to test your app for compatibility and identify any work you’ll need to do. We recommend releasing a compatible app update by Android 11 Beta on June 3rd to get feedback from the larger group of Android Beta users who will be getting the update.

With Beta 1 the SDK and NDK APIs will be final, and as we reach Platform Stability in July, the system behaviors and non-SDK greylists will also be finalized. At that time, plan on doing your final compatibility testing and releasing your fully compatible app, SDK, or library as soon as possible so that it is ready for the final Android 11 release. You can read more in the timeline for developers.

You can start compatibility testing today on a Pixel 2, 3, 3a, or 4 device, or you can use the Android Emulator. Just flash the latest build, install your current production app, and test the user flows. Make sure to review the behavior changes for areas where your app might be affected. There’s no need to change the app’s targetSdkVersion at this time, although we recommend evaluating the work since many changes apply once your app is targeting the new API level.

Get started with Android 11

Today we're pushing a Developer Preview 4 with the latest bug fixes, API tweaks, and features to try in your apps. It’s available by manual download and flash for Pixel 2, 3, 3a, or 4 devices, and if you’re already running a Developer Preview build, you’ll get an over-the-air (OTA) update to today’s release.

For complete information on Android 11, visit the Android 11 developer site, and please continue to let us know what you think!

Google for Startups Accelerator: Meet the first (and fully-remote) Brazilian class of 2020

Posted by Rodrigo Carraresi, Developer Relations Regional Lead, Brazil

Since 2018, the Google for Startups Accelerator Brazil (previously Google Developers Launchpad Accelerator) has contributed to the growth of more than 30 Brazilian startups, such as EasyCrédito, Liv Up, and SmarttBot. With the help of renowned mentors and experts from Google and other leading organizations across the globe, we’re helping companies overcome technical challenges in areas such as cloud, AI, and machine learning.

Today, we’re proud to announce the ten startups selected for the first cohort of 2020, which will be held entirely on Google Hangouts due to the COVID-19 crisis:

  • Bothub: creates chatbots in multiple languages using data from neuro-linguistic programming
  • Caju: provides a benefit tracking platform for companies
  • DeÔnibus: web platform for purchasing public transport tickets across Brazil
  • GoFind: organizing store and product information to improve the supply chain, making the consumer experience more practical and convenient
  • Isportistics: video interpretation and tagging for sports content, powered by AI.
  • Jobecam: employment platform focused on helping with efficiency and more diversity in selection processes
  • Loft: website for buying and selling luxury real estate
  • Neomed: a marketplace simplifying the relationship between clinics, laboratories and hospitals that require high-quality medical reports
  • Promobit: promotions and discounts mapping service, built in a community format.
  • Real Valor: investment portfolio management platform

The three-month Google for Startups Accelerator offers assistance and tools to help startups that already have a funded product, but still face particular technical obstacles. This version of the program, which kicked off on April 13, was purposefully designed as an online version of the traditional Google for Startups Accelerator model and the selected companies will take advantage of the following:

  • Tailored, one-on-one mentoring to work on practical aspects of a startup’s technical capabilities
  • Support from Google people and product experts, as well as subject matter leaders and partner organizations around the world
  • Google Cloud Platform credits
  • Access to the Google for Startups network of like-minded founders & alumni around the world

Google for Startups Accelerator is just one of many Google for Startups initiatives in Brazil, which also include Campus São Paulo, support programs such as Residency and Startup Zone, open events such as Presents, and ongoing training workshops by the Startup School. Brazil has a strong startup ecosystem and is a thriving hub of technology and innovation, and we are proud to help these founders grow and scale businesses that will have an impact on a global scale.

Stay tuned throughout the course of the program on Google for Startups social channels to learn key takeaways, advice, and learnings from the latest Brazilian Accelerator program.

MediaPipe KNIFT: Template-based Feature Matching

Posted by Zhicheng Wang and Genzhi Ye, MediaPipe team

Image Feature Correspondence with KNIFT

In many computer vision applications, a crucial building block is to establish reliable correspondences between different views of an object or scene, forming the foundation for approaches like template matching, image retrieval and structure from motion. Correspondences are usually computed by extracting distinctive view-invariant features such as SIFT or ORB from images. The ability to reliably establish such correspondences enables applications like image stitching to create panoramas or template matching for object recognition in videos (see Figure 1).

Today, we are announcing KNIFT (Keypoint Neural Invariant Feature Transform), a general purpose local feature descriptor similar to SIFT or ORB. Likewise, KNIFT is also a compact vector representation of local image patches that is invariant to uniform scaling, orientation, and illumination changes. However, unlike SIFT or ORB, which were engineered with heuristics, KNIFT is an embedding learned directly from a large number of corresponding local patches extracted from nearby video frames. This data-driven approach implicitly encodes complex, real-world spatial transformations and lighting changes in the embedding. As a result, the KNIFT feature descriptor appears to be more robust, not only to affine distortions, but to some degree of perspective distortions as well. We are releasing an implementation of KNIFT in MediaPipe and a KNIFT-based template matching demo in the next section to get you started.

Figure 1: Matching a real Stop Sign with a Stop Sign template using KNIFT.

Training Method

In machine learning, loosely speaking, training an embedding means finding a mapping that can translate a high dimensional vector, such as an image patch, to a relatively lower dimensional vector, such as a feature descriptor. Ideally, this mapping should have the following property: image patches around a real-world point should have the same or very similar descriptors across different views or illumination changes. We have found real-world videos to be a good source of such corresponding image patches as training data (see Figures 3 and 4), and we use the well-established triplet loss (see Figure 2) to train such an embedding. Each triplet consists of an anchor (denoted by a), a positive (p), and a negative (n) feature vector extracted from the corresponding image patches, and d() denotes the Euclidean distance in the feature space.
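
For reference, the standard triplet loss illustrated in Figure 2 can be written as follows (a reconstruction using the notation above, with margin denoting the enforced separation):

\[ L = \max\bigl(d(a, p) - d(a, n) + \mathrm{margin},\ 0\bigr) \]

Minimizing this loss pulls the anchor and positive descriptors together while pushing the negative at least a margin further away.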

Figure 2: Triplet Loss Function.

Training Data

The training triplets are extracted from all ~1500 video clips in the publicly available YouTube UGC Dataset. We first use an existing heuristically-engineered local feature detector to detect keypoints and compute the affine transform between two frames with a high accuracy (see Figure 4). Then we use this correspondence to find keypoint pairs and extract the patches around these keypoints. Note that the newly identified keypoints may include those that were detected but rejected by geometric verification in the first step. For each pair of matched patches, we randomly apply some form of data augmentation (e.g. random rotation or brightness adjustment) to construct the anchor-positive pair. Finally, we randomly pick an arbitrary patch from another video as the negative to finish the construction of this triplet (see Figure 5).

Figure 3: An example video clip from which we extract training triplets.

Figure 4: Finding frame correspondence using existing local features.

Figure 5: (Top to bottom) Anchor, positive and negative patches.

Hard-negative Triplet Mining

To improve model quality, we use the same hard-negative triplet mining method used by FaceNet training. We first train a base model with randomly selected triplets. Then we implement a pipeline that uses the base model to find semi-hard-negative samples (d(a,p) < d(a,n) < d(a,p)+margin) for each anchor-positive pair (Figure 6). After mixing the randomly selected triplets and hard-negative triplets, we re-train the model with this improved data.

Figure 6: (Top to bottom) Anchor, positive and semi-hard negative patches.

Model Architecture

From model architecture exploration, we have found that a relatively small architecture is sufficient to achieve decent quality, so we use a lightweight version of the Inception architecture as the KNIFT model backbone. The resulting KNIFT descriptor is a 40-dimensional float vector. For more model details, please refer to the KNIFT model card.

Benchmark

We benchmark the KNIFT model inference speed on various devices (computing 200 features) and list them in Table 1.

Table 1: KNIFT performance benchmark.

Quality-wise, we compare the average number of keypoints matched by KNIFT and by ORB (OpenCV implementation) respectively on an in-house benchmark (Table 2). There are many publicly available image matching benchmarks, e.g. the 2020 Image Matching Benchmark, but most of them focus on matching landmarks across large perspective changes in relatively high resolution images, and the tasks often require computing thousands of keypoints. In contrast, since we designed KNIFT for matching objects in large-scale (i.e. billions of images) online image retrieval tasks, we devised our benchmark to focus on low-cost, high-precision use cases, i.e. 100-200 keypoints computed per image and only ~10 matching keypoints needed for reliably determining a match. In addition, to illustrate the fine-grained performance characteristics of a feature descriptor, we divide and categorize the benchmark set by object type (e.g. 2D planar surface) and image pair relation (e.g. large size difference). In Table 2, we compare the average number of keypoints matched by KNIFT and by ORB respectively in each category, based on the same 200 keypoint locations detected in each image by the oFast detector that comes with the ORB implementation in OpenCV.

Table 2: KNIFT vs ORB average number of matched keypoints.

From Table 2, we can see that KNIFT consistently matches more keypoints than ORB by a large margin in every category. Here we acknowledge the fact that KNIFT (40-d float) is considerably larger than ORB (32-d char) and this can have an effect on matching quality. Nevertheless, most local feature benchmarks do not take descriptor size into account, so we will follow the convention here.

To make it easy for developers to try KNIFT in MediaPipe, we have built a local-feature-based template matching solution (see implementation details using MediaPipe in the next section). As a side effect, we can demonstrate the matching quality between KNIFT and ORB visually in side-by-side comparisons like Figures 7 and 9.

Figure 7: Example of “matching 2D planar surface”. (Left) KNIFT 183/240, (Right) ORB 133/240.

In Figure 7, we choose a typical U.S. Stop Sign image from Google Image Search as the template and attempt to match it with the Stop Sign in this video. This example falls into the “matching 2D planar surface” category in Table 2. Using the same 200 keypoint locations detected by oFast and the same RANSAC setting, we show that KNIFT is successful at matching the Stop Sign in 183 frames out of a total of 240 frames. In comparison, ORB matches 133 frames.

Figure 8: Example of “matching 3D untextured object”. Two template images from different views.

Figure 9: Example of “matching 3D untextured object”. (Left) KNIFT 89/150, (Right) ORB 37/150.

Figure 9 shows another matching performance comparison on an example from the “matching 3D untextured object” category in Table 2. Since this example involves large perspective changes of untextured surfaces, which is known to be challenging for local feature descriptors, we use template images from two different views (shown in Figure 8) to improve the matching performance. Again, using the same keypoint locations and RANSAC setting, we show that KNIFT is successful at matching 89 frames out of a total of 150 frames while ORB matches 37 frames.

KNIFT-based Template Matching in MediaPipe

We are releasing the aforementioned template matching solution based on KNIFT in MediaPipe, which is capable of identifying pre-defined image templates and precisely localizing recognized templates on the camera image. There are 3 major components in the template-matching MediaPipe graph shown below:

  • FeatureDetectorCalculator: a calculator that consumes image frames, runs the OpenCV oFast detector on the input image, and outputs keypoint locations. Moreover, this calculator is also responsible for cropping patches around each keypoint with rotation and scale info and stacking them into a vector for the downstream calculator to process.
  • TfLiteInferenceCalculator with KNIFT model: a calculator that loads the KNIFT tflite model and performs model inference. The input tensor shape is (200, 32, 32, 1), indicating 200 32x32 local patches. The output tensor shape is (200, 40), indicating 200 40-dimensional feature descriptors. By default, the calculator runs the TFLite XNNPACK delegate, but users have the option to select the regular CPU delegate to run at a reduced speed.
  • BoxDetectorCalculator: a calculator that takes pre-computed keypoint locations and KNIFT descriptors and performs feature matching between the current frame and multiple template images. The output of this calculator is a list of TimedBoxProto, which contains the unique id and location of each box as a quadrilateral on the image. Aside from the classic homography RANSAC algorithm, we also apply a perspective transform verification step to ensure that the output quadrilateral does not result in too much skew or a weird shape.

Figure 10: MediaPipe graph of the demo

Demo

In this demo, we chose three different denominations ($1, $5, $20) of U.S. dollar bills as templates and attempted to match them to various real world dollar bills in videos. We resized each input frame to 640x480 pixels, ran the oFast detector to detect 200 keypoints, and used KNIFT to extract feature descriptors from each 32x32 local image patch surrounding these keypoints. We then performed template matching between these video frames and the KNIFT features extracted from the dollar bill templates. This demo runs at 20 FPS on a Pixel 2 Phone CPU with XNNPACK.

Figure 11: Matching different U.S. dollar bills using KNIFT.

Build Your Own Templates

We have provided a set of built-in planar templates in our demo. To make it easy for users to try their own templates, we also provide a tool to build such an index with user generated templates. index_building.pbtxt is a MediaPipe graph that accepts as its input a directory path containing a set of template images. Users can use this graph to compute KNIFT descriptors for all template images (which will be stored in a single file) by 1) replacing the index_proto_filename field in the main graph and the BUILD file and 2) rebuilding the APK file. For step-by-step instructions on how we created the dollar bill demo shown above, please refer to this documentation.

Acknowledgements

We would like to thank Jiuqiang Tang, Chuo-Ling Chang, Dan Gnanapragasam‎, Howard Zhou, Jianing Wei and Ming Guang Yong for contributing to this blog post.

Automate & Extend with Apps Script (Google Cloud for Student Developers)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


In the previous episode of our new Google Cloud for Student Developers video series, we introduced G Suite REST APIs, showing how to enhance your applications by integrating with Gmail, Drive, Calendar, Docs, Sheets, and Slides. However, not all developers prefer this lower-level style of programming, which requires working with HTTP, OAuth2, and the request-response cycle of API usage. Building apps that access Google technologies is open to everyone at any level, not just advanced software engineers.

Enhancing career readiness of non-engineering majors helps make our services more inclusive and helps democratize API functionality to a broader audience. For the budding data scientist, business analyst, DevOps staff, or other technical professionals who don't code every day as part of their profession, Google Apps Script was made just for you. Rather than thinking about development stacks, HTTP, or authorization, you access Google APIs with objects.

This video blends a standard "Hello World" example with various use cases where Apps Script shines, including cases of automation, add-ons that extend the functionality of G Suite editors like Docs, Sheets, and Slides, accessing other Google or online services, and custom functions for Google Sheets—the ability to add new spreadsheet functions.

One featured example demonstrates the power to reach multiple Google technologies in an expressive way: lots of work, not much code. What may surprise readers is that this entire app, written by a colleague years ago, consists of just 4 lines of code:

function sendMap() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var address = sheet.getRange('A1').getValue();
  var map = Maps.newStaticMap().addMarker(address);
  GmailApp.sendEmail('[email protected]', 'Map', 'See below.',
                     {attachments:[map]});
}

Apps Script shields its users from the complexities of authorization and "API service endpoints." Developers only need an object to interface with a service; in this case, SpreadsheetApp to access Google Sheets, and similarly, Maps for Google Maps plus GmailApp for Gmail. Viewers can build this sample line-by-line with its corresponding codelab (a self-paced, hands-on tutorial). This example helps student (and professional) developers...

  1. Build something useful that can be extended into much more
  2. Learn how to accomplish several tasks without a lot of code
  3. Imagine what else is possible with G Suite developer tools

For further exploration, check out this video as well as this one which introduces Apps Script and presents the same code sample with more details. (Note the second video emails the map's link, but the app has been updated to attach it instead; the code has been updated everywhere else.) You may also access the code at its open source repository. If that's not enough, learn about other ways you can use Apps Script from its video library. Finally, stay tuned for the next pair of episodes which will cover full sample apps, one with G Suite REST APIs, and another with Apps Script.

We look forward to seeing what you build with Google Cloud.