Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Strengthen your cloud skills with Google Cloud training

Posted by Yuri Grinshteyn, Site Reliability Engineer

We know many of you are looking for ways to keep learning and connecting with other developers virtually right now, and we want to help. Below you can check out our top on-demand Google Cloud training webinars and resources where you can take hands-on labs and learn, at no charge, more about everything from the basics of Google Cloud to more advanced topics like building robust cloud architecture.

Starting with the basics

You can tune in on May 19-20 to watch instructors in Cloud OnBoard break down what it takes to migrate to Google Cloud and explain the basics of Google Kubernetes Engine, a managed, production-ready environment for running containerized applications. After the sessions, you’ll have a chance to test what you’ve learned by participating in hands-on labs and challenges with the Cloud Hero Online Challenge. Missed the live sessions on May 19-20? No worries! You can view them on-demand starting May 21 and still participate in hands-on labs.

Gaining more hands-on experience and a deeper understanding of Google Cloud products

Ready to gain more hands-on cloud experience and deeper product knowledge? We have webinars where Googlers will walk you through more hands-on labs on Qwiklabs and share product tips and tricks.

If you’re interested in big data and machine learning, you can do a lab I recorded in the Baseline: Data, ML, AI webinar to get more experience using tools like BigQuery, Cloud Speech API, and Cloud ML Engine. You can also learn how to use BigQuery and other Google tools to draw insights and visualize data from the public health datasets Google released to support COVID-19 research in our Data science for public health: Working with public COVID-19 datasets webinar.

Getting role-based training and preparing for certification

For those of you who are already cloud professionals, our top webinars this year so far are Professional Cloud DevOps and Professional Cloud Architect.

You can learn how to improve the way you build software delivery pipelines, deploy and monitor services, and manage incidents in the DevOps webinar. The Cloud Architect webinar will discuss how to ensure you’re designing, developing, and managing effective solutions.

Both webinars will also help prepare you to earn Google Cloud certifications. If you’d like to learn more about the certification program, you can attend our on-demand webinar Why Certify? Everything to know about Google Cloud Certification.

More no-cost resources to check out

We’re also offering our extensive catalog of Google Cloud on-demand training courses on Pluralsight and Qwiklabs at no cost when you sign up by May 31, 2020.¹ You can learn how to prototype an app, build prediction models, and more—at your own pace by registering here.

We hope these webinars and resources help you continue learning new skills and stay connected with the broader Google developer community.

1. Your 30-day access to these Google Cloud training courses at no cost starts when you enroll for your courses. These offers are valid until May 31, 2020. After your 30 days, you will incur charges on Pluralsight; for Qwiklabs, you will need to purchase credits to continue taking labs.

Building a more resilient world together

Posted by Billy Rutledge, Director of the Coral team

UNDP Hackster.io COVID19 Detect Protect Poster

Recently, we’ve seen communities respond to the challenges of the coronavirus pandemic by using technology in new ways to effect positive change. It’s increasingly important that our systems are able to adapt to new contexts, handle disruptions, and remain efficient.

At Coral, we believe intelligence at the edge is a key ingredient towards building a more resilient future. By making the latest machine learning tools easy-to-use and accessible, innovators can collaborate to create solutions that are most needed in their communities. Developers are already using Coral to build solutions that can understand and react in real-time, while maintaining privacy for everyone present.

Helping our communities stay safe, together

As mandatory isolation measures begin to relax, compliance with safe social distancing protocol has become a topic of primary concern for experts across the globe. Businesses and individuals have been stepping up to find ways to use technology to help reduce the risk and spread. Many efforts are employing the benefits of edge AI—here are a few early stage examples that have inspired us.

woman and child crossing the street

In Belgium, engineers at Edgise recently used Coral to develop an occupancy monitor to aid businesses in managing capacity. With the privacy preserving properties of edge AI, businesses can anonymously count how many customers enter and exit a space, signaling when the area is too full.

A research group at the Sathyabama Institute of Science and Technology in India is using Coral to develop a wearable device to serve as a COVID-19 cough counter and health monitor, allowing medical professionals to better care for low-risk patients in an outpatient capacity. Coral's Edge TPU enables biometric data to be processed efficiently, without draining the limited power resources available in wearable devices.

All across the US, hospitals are seeking solutions to ensure adherence to hygiene policy amongst hospital staff. In one example, a device incorporates the compact, affordable and offline benefits of the Coral modules to aid in handwashing practices at numerous stations throughout a facility.

And around the world, members of the PyImageSearch community are exploring how to train a COVID-19: Face Mask Detector model using TensorFlow that can be used to identify whether people are wearing a mask. Open source frameworks can empower anyone to develop solutions, and with Coral components we can help bring those benefits to everyone.

Eliciting a global response

In an effort to rally greater community involvement, Coral has joined the United Nations Development Programme and Hackster.io as a sponsor of the COVID-19 Detect and Protect Challenge. The initiative calls on developers to build affordable and reproducible solutions that support response efforts in developing countries. All ideas are welcome—whether they use ML or not—and we encourage you to participate.

To make edge ML capabilities even easier to integrate, we’re also announcing a price reduction for the Coral products widely used for experimentation and prototyping. Our Dev Board will now be offered at $129.99, the USB Accelerator at $59.99, the Camera Module at $19.99, and the Enviro Board at $14.99. Additionally, we are introducing the USB Accelerator into 10 new markets: Ghana, Thailand, Singapore, Oman, Philippines, Indonesia, Kenya, Malaysia, Israel, and Vietnam. For more details, visit Coral.ai/products.

We’re excited to see the solutions developers will bring forward with Coral. And as always, please keep sending us feedback at [email protected].

Android 11: Beta Plans

Posted by Dave Burke, VP of Engineering

Android 11 Dial logo

When we started planning Android 11, we didn’t expect the kinds of changes that would find their way to all of us, across nearly every region in the world. These have challenged us to stay flexible and find new ways to work together, especially with our developer community.

To help us meet those challenges we’re announcing an update to our release timeline. We’re bringing you a fourth Developer Preview today and moving Beta 1 to June 3. And to tell you all about the release and give you the technical resources you need, we’re hosting an online developer event that we’re calling #Android11: the Beta Launch Show.

Join us for #Android11: The Beta Launch Show

While the circumstances prevent us from joining together with you in person at Shoreline Amphitheatre for Google I/O, our annual developer conference, we’re organizing an online event where we can share with you all the best of what’s new in Android. We hope you’ll join us for #Android11: The Beta Launch Show, your opportunity to find out what’s new in Android from the people who build Android. I’ll be hosting, and we’ll be kicking off at 11AM ET on June 3. And we’ll be wrapping it up with a post-show live Q&A; tweet your #AskAndroid questions to get them answered live!

Later that day, we’ll be sharing a number of talks on a range of topics from Jetpack Compose to Android Studio and Google Play (talks that we had originally planned for Google I/O) to help you take advantage of the latest in Android development. You can sign up to receive updates on this digital event at developer.android.com/android11.

Android 11 schedule update

Our industry moves really fast, and we know that many of our device-maker partners are counting on us to help them bring Android 11 to new consumer devices later this year. We also know that many of you have been working to prioritize early app and game testing on Android 11, based in part on our Platform Stability and other milestones. At the same time, all of us are collaborating remotely and prioritizing the well-being of our families, friends and colleagues.

So to help us meet the needs of the ecosystem while being mindful of the impacts on our developers and partners, we’ve decided to add a bit of extra time in the Android 11 release schedule. We’re moving out Beta 1 and all subsequent milestones by about a month, which gives everyone a bit more room but keeps us on track for final release later in Q3.

Here are some of the key changes in the new schedule:

  • We’re releasing a fourth Developer Preview today for testing and feedback.
  • Beta 1 release moves to June 3. We’ll include the final SDK and NDK APIs with this release and open up Google Play publishing for apps targeting Android 11.
  • Beta 2 moves to July. We’ll reach Platform Stability with this release.
  • Beta 3 moves to August and will include release candidate builds for final testing.

By bringing you the final APIs on the original timeline while shifting the other dates, we’re giving you an extra month to compile and test with the final APIs, while also ensuring that you have the same amount of time between Platform Stability and the final release, planned for later in Q3. Here’s a look at the timeline.

Android 11 timeline

You can read more about what the new timeline means to app developers in the preview program overview.

App compatibility

The schedule change adds some extra time for you to test your app for compatibility and identify any work you’ll need to do. We recommend releasing a compatible app update by Android 11 Beta on June 3rd to get feedback from the larger group of Android Beta users who will be getting the update.

With Beta 1 the SDK and NDK APIs will be final, and as we reach Platform Stability in July, the system behaviors and non-SDK greylists will also be finalized. At that time, plan on doing your final compatibility testing and releasing your fully compatible app, SDK, or library as soon as possible so that it is ready for the final Android 11 release. You can read more in the timeline for developers.

You can start compatibility testing today on a Pixel 2, 3, 3a, or 4 device, or you can use the Android Emulator. Just flash the latest build, install your current production app, and test the user flows. Make sure to review the behavior changes for areas where your app might be affected. There’s no need to change the app’s targetSdkVersion at this time, although we recommend evaluating the work since many changes apply once your app is targeting the new API level.

Get started with Android 11

Today we're pushing a Developer Preview 4 with the latest bug fixes, API tweaks, and features to try in your apps. It’s available by manual download and flash for Pixel 2, 3, 3a, or 4 devices, and if you’re already running a Developer Preview build, you’ll get an over-the-air (OTA) update to today’s release.

For complete information on Android 11, visit the Android 11 developer site, and please continue to let us know what you think!

Google for Startups Accelerator: Meet the first (and fully-remote) Brazilian class of 2020

Posted by Rodrigo Carraresi, Developer Relations Regional Lead, Brazil

Since 2018, the Google for Startups Accelerator Brazil (previously Google Developers Launchpad Accelerator) has contributed to the growth of more than 30 Brazilian startups, such as EasyCrédito, Liv Up, and SmarttBot. With the help of renowned mentors and experts from Google and other leading organizations across the globe, we’re helping companies overcome technical challenges in areas such as cloud, AI, and machine learning.

Today, we’re proud to announce the ten startups selected for the first cohort of 2020, which will be held entirely on Google Hangouts due to the COVID-19 crisis:

  • Bothub: creates chatbots in multiple languages using natural language processing
  • Caju: provides a benefit tracking platform for companies
  • DeÔnibus: web platform for purchasing public transport tickets across Brazil
  • GoFind: organizing store and product information to improve the supply chain, making the consumer experience more practical and convenient
  • Isportistics: video interpretation and tagging for sports content, powered by AI
  • Jobecam: employment platform focused on bringing efficiency and more diversity to selection processes
  • Loft: website for buying and selling luxury real estate
  • Neomed: a marketplace simplifying the relationship between clinics, laboratories and hospitals that require high-quality medical reports
  • Promobit: promotions and discounts mapping service, built in a community format
  • Real Valor: investment portfolio management platform

The three-month Google for Startups Accelerator offers assistance and tools to help startups that already have a funded product but still face particular technical obstacles. This version of the program, which kicked off on April 13, was purposefully designed as an online version of the traditional Google for Startups Accelerator model, and the selected companies will take advantage of the following:

  • Tailored, one-on-one mentoring to work on practical aspects of a startup’s technical capabilities
  • Support from Google people and product experts, as well as subject matter leaders and partner organizations around the world
  • Google Cloud Platform credits
  • Access to the Google for Startups network of like-minded founders & alumni around the world

Google for Startups Accelerator is just one of many Google for Startups initiatives in Brazil, which also include Campus São Paulo, support programs such as Residency and Startup Zone, open events such as Presents, and ongoing training workshops by the Startup School. Brazil has a strong startup ecosystem and is a thriving hub of technology and innovation, and we are proud to help these founders grow and scale businesses that will make an impact on a global scale.

Stay tuned throughout the course of the program on Google for Startups social channels to learn key takeaways and advice from the latest Brazilian Accelerator program.

MediaPipe KNIFT: Template-based Feature Matching

Posted by Zhicheng Wang and Genzhi Ye, MediaPipe team

Image Feature Correspondence with KNIFT

In many computer vision applications, a crucial building block is to establish reliable correspondences between different views of an object or scene, forming the foundation for approaches like template matching, image retrieval and structure from motion. Correspondences are usually computed by extracting distinctive view-invariant features such as SIFT or ORB from images. The ability to reliably establish such correspondences enables applications like image stitching to create panoramas or template matching for object recognition in videos (see Figure 1).

Today, we are announcing KNIFT (Keypoint Neural Invariant Feature Transform), a general-purpose local feature descriptor similar to SIFT or ORB. Likewise, KNIFT is also a compact vector representation of local image patches that is invariant to uniform scaling, orientation, and illumination changes. However, unlike SIFT or ORB, which were engineered with heuristics, KNIFT is an embedding learned directly from a large number of corresponding local patches extracted from nearby video frames. This data-driven approach implicitly encodes complex, real-world spatial transformations and lighting changes in the embedding. As a result, the KNIFT feature descriptor appears to be more robust, not only to affine distortions, but to some degree of perspective distortions as well. We are releasing an implementation of KNIFT in MediaPipe and a KNIFT-based template matching demo in the next section to get you started.

Figure 1: Matching a real Stop Sign with a Stop Sign template using KNIFT.

Training Method

In machine learning, loosely speaking, training an embedding means finding a mapping that can translate a high-dimensional vector, such as an image patch, to a relatively lower-dimensional vector, such as a feature descriptor. Ideally, this mapping should have the following property: image patches around a real-world point should have the same or very similar descriptors across different views or illumination changes. We have found real-world videos to be a good source of such corresponding image patches as training data (see Figures 3 and 4), and we use the well-established triplet loss (see Figure 2) to train such an embedding. Each triplet consists of an anchor (denoted by a), a positive (p), and a negative (n) feature vector extracted from the corresponding image patches, and d() denotes the Euclidean distance in the feature space.
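In its standard form, the triplet loss for a single triplet can be written as

L(a, p, n) = max( d(a, p) - d(a, n) + margin, 0 )

so minimizing it pushes the anchor-positive distance to be smaller than the anchor-negative distance by at least the margin.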

Figure 2: Triplet Loss Function.

Training Data

The training triplets are extracted from all ~1500 video clips in the publicly available YouTube UGC Dataset. We first use an existing heuristically-engineered local feature detector to detect keypoints and compute the affine transform between two frames with high accuracy (see Figure 4). Then we use this correspondence to find keypoint pairs and extract the patches around these keypoints. Note that the newly identified keypoints may include those that were detected but rejected by geometric verification in the first step. For each pair of matched patches, we randomly apply some form of data augmentation (e.g. random rotation or brightness adjustment) to construct the anchor-positive pair. Finally, we randomly pick an arbitrary patch from another video as the negative to finish the construction of this triplet (see Figure 5).

Figure 3: An example video clip from which we extract training triplets.

Figure 4: Finding frame correspondence using existing local features.

Figure 5: (Top to bottom) Anchor, positive and negative patches.
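To make the recipe above concrete, here is a minimal Python/OpenCV sketch of extracting anchor-positive patch pairs from two nearby frames. It uses ORB as a stand-in for the heuristically-engineered detector and skips the augmentation step; the detector choice, thresholds, and function names are illustrative assumptions rather than the exact pipeline used for KNIFT.

import cv2
import numpy as np

def extract_patch_pairs(frame_a, frame_b, patch_size=32, max_keypoints=500):
    # ORB stands in for the "existing heuristically-engineered local feature
    # detector" mentioned above; parameters here are illustrative.
    orb = cv2.ORB_create(nfeatures=max_keypoints)
    kps_a, desc_a = orb.detectAndCompute(frame_a, None)
    kps_b, desc_b = orb.detectAndCompute(frame_b, None)
    if desc_a is None or desc_b is None:
        return []

    # Match descriptors and estimate the frame-to-frame affine transform.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    if len(matches) < 3:
        return []
    pts_a = np.float32([kps_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kps_b[m.trainIdx].pt for m in matches])
    affine, _ = cv2.estimateAffinePartial2D(pts_a, pts_b, method=cv2.RANSAC)
    if affine is None:
        return []

    # Pair keypoints via the affine transform and crop patches around each
    # pair to form (anchor, positive) training examples.
    pairs, half = [], patch_size // 2
    for kp in kps_a:
        x_a, y_a = kp.pt
        x_b, y_b = affine @ np.array([x_a, y_a, 1.0])
        patch_a = frame_a[int(y_a) - half:int(y_a) + half,
                          int(x_a) - half:int(x_a) + half]
        patch_b = frame_b[int(y_b) - half:int(y_b) + half,
                          int(x_b) - half:int(x_b) + half]
        if patch_a.shape[:2] == (patch_size, patch_size) and \
           patch_b.shape[:2] == (patch_size, patch_size):
            pairs.append((patch_a, patch_b))
    return pairs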

Hard-negative Triplet Mining

To improve model quality, we use the same hard-negative triplet mining method used in FaceNet training. We first train a base model with randomly selected triplets. Then we implement a pipeline that uses the base model to find semi-hard-negative samples (d(a,p) < d(a,n) < d(a,p)+margin) for each anchor-positive pair (Figure 6). After mixing the randomly selected triplets and hard-negative triplets, we re-train the model with this improved data.

Figure 6: (Top to bottom) Anchor, positive and semi-hard negative patches.
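The semi-hard selection rule above can be expressed in a few lines. This sketch assumes descriptors are NumPy arrays produced by the base model; the margin value shown is an illustrative assumption, not the one used for KNIFT.

import numpy as np

def pick_semi_hard_negative(anchor, positive, candidate_negatives, margin=0.2):
    # Keep negatives n with d(a, p) < d(a, n) < d(a, p) + margin.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(candidate_negatives - anchor, axis=1)
    in_band = np.where((d_an > d_ap) & (d_an < d_ap + margin))[0]
    if in_band.size == 0:
        return None  # no semi-hard candidate; fall back to a random negative
    # Pick the hardest candidate inside the band (smallest anchor-negative distance).
    return candidate_negatives[in_band[np.argmin(d_an[in_band])]]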

Model Architecture

From model architecture exploration, we have found that a relatively small architecture is sufficient to achieve decent quality, so we use a lightweight version of the Inception architecture as the KNIFT model backbone. The resulting KNIFT descriptor is a 40-dimensional float vector. For more model details, please refer to the KNIFT model card.

Benchmark

We benchmark the KNIFT model inference speed on various devices (computing 200 features) and list them in Table 1.

Table 1: KNIFT performance benchmark.

Quality-wise, we compare the average number of keypoints matched by KNIFT and by ORB (OpenCV implementation) respectively on an in-house benchmark (Table 2). There are many publicly available image matching benchmarks, e.g. the 2020 Image Matching Benchmark, but most of them focus on matching landmarks across large perspective changes in relatively high-resolution images, and the tasks often require computing thousands of keypoints. In contrast, since we designed KNIFT for matching objects in large-scale (i.e. billions of images) online image retrieval tasks, we devised our benchmark to focus on low-cost, high-precision use cases, i.e. 100-200 keypoints computed per image and only ~10 matching keypoints needed to reliably determine a match. In addition, to illustrate the fine-grained performance characteristics of a feature descriptor, we divide and categorize the benchmark set by object types (e.g. 2D planar surface) and image pair relations (e.g. large size difference). In Table 2, we compare the average number of keypoints matched by KNIFT and by ORB respectively in each category, based on the same 200 keypoint locations detected in each image by the oFast detector that comes with the ORB implementation in OpenCV.

Table 2: KNIFT vs ORB average number of matched keypoints.

From Table 2, we can see that KNIFT consistently matches more keypoints than ORB by a large margin in every category. Here we acknowledge the fact that KNIFT (40-d float) is considerably larger than ORB (32-d char) and this can have an effect on matching quality. Nevertheless, most local feature benchmarks do not take descriptor size into account, so we will follow the convention here.
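For readers who want to reproduce a comparable measurement on the ORB side, here is a rough Python sketch: detect up to 200 keypoints, match descriptors, and count the matches that survive homography RANSAC. The matcher settings and RANSAC threshold are illustrative assumptions; the KNIFT side of the comparison swaps in KNIFT descriptors computed at the same keypoint locations.

import cv2
import numpy as np

def count_matched_keypoints(img_a, img_b, num_keypoints=200):
    # Detect up to 200 oFast/ORB keypoints per image and compute ORB descriptors.
    orb = cv2.ORB_create(nfeatures=num_keypoints)
    kps_a, desc_a = orb.detectAndCompute(img_a, None)
    kps_b, desc_b = orb.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return 0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    if len(matches) < 4:  # findHomography needs at least 4 correspondences
        return 0

    # Count only the matches that are inliers of a homography fit with RANSAC.
    src = np.float32([kps_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kps_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return 0 if inlier_mask is None else int(inlier_mask.sum())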

To make it easy for developers to try KNIFT in MediaPipe, we have built a local-feature-based template matching solution (see implementation details using MediaPipe in the next section). As a side effect, we can demonstrate the matching quality between KNIFT and ORB visually in side-by-side comparisons like Figures 7 and 9.

Figure 7: Example of “matching 2D planar surface”. (Left) KNIFT 183/240, (Right) ORB 133/240.

In Figure 7, we choose a typical U.S. Stop Sign image from Google Image Search as the template and attempt to match it with the Stop Sign in this video. This example falls into the “matching 2D planar surface” category in Table 2. Using the same 200 keypoint locations detected by oFast and the same RANSAC setting, we show that KNIFT is successful at matching the Stop Sign in 183 frames out of a total of 240 frames. In comparison, ORB matches 133 frames.

Figure 8: Example of “matching 3D untextured object”. Two template images from different views.

Figure 9: Example of “matching 3D untextured object”. (Left) KNIFT 89/150, (Right) ORB 37/150.

Figure 9 shows another matching performance comparison on an example from the “matching 3D untextured object” category in Table 2. Since this example involves large perspective changes of untextured surfaces, which is known to be challenging for local feature descriptors, we use template images from two different views (shown in Figure 8) to improve the matching performance. Again, using the same keypoint locations and RANSAC setting, we show that KNIFT is successful at matching 89 frames out of a total of 150 frames while ORB matches 37 frames.

KNIFT-based Template Matching in MediaPipe

We are releasing the aforementioned template matching solution based on KNIFT in MediaPipe, which is capable of identifying pre-defined image templates and precisely localizing recognized templates on the camera image. There are 3 major components in the template-matching MediaPipe graph shown below:

  • FeatureDetectorCalculator: a calculator that consumes image frames, runs the OpenCV oFast detector on the input image, and outputs keypoint locations. Moreover, this calculator is also responsible for cropping patches around each keypoint with rotation and scale info and stacking them into a vector for the downstream calculator to process.
  • TfLiteInferenceCalculator with KNIFT model: a calculator that loads the KNIFT tflite model and performs model inference. The input tensor shape is (200, 32, 32, 1), indicating 200 32x32 local patches. The output tensor shape is (200, 40), indicating 200 40-dimensional feature descriptors. By default, the calculator runs the TFLite XNNPACK delegate, but users have the option to select the regular CPU delegate to run at a reduced speed.
  • BoxDetectorCalculator: a calculator that takes pre-computed keypoint locations and KNIFT descriptors and performs feature matching between the current frame and multiple template images. The output of this calculator is a list of TimedBoxProto, which contains the unique id and location of each box as a quadrilateral on the image. Aside from the classic homography RANSAC algorithm, we also apply a perspective transform verification step to ensure that the output quadrilateral does not result in too much skew or a weird shape.

Figure 10: MediaPipe graph of the demo

Demo

In this demo, we chose three different denominations ($1, $5, $20) of U.S. dollar bills as templates and attempted to match them to various real world dollar bills in videos. We resized each input frame to 640x480 pixels, ran the oFast detector to detect 200 keypoints, and used KNIFT to extract feature descriptors from each 32x32 local image patch surrounding these keypoints. We then performed template matching between these video frames and the KNIFT features extracted from the dollar bill templates. This demo runs at 20 FPS on a Pixel 2 Phone CPU with XNNPACK.

Figure 11: Matching different U.S. dollar bills using KNIFT.
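If you want to poke at the model outside of MediaPipe, the following Python sketch runs a KNIFT TFLite model directly with the tensor shapes described above. The model filename is an assumption (see the MediaPipe repository for the released models), and this sketch skips the rotation and scale normalization that the MediaPipe graph applies to each patch before inference.

import numpy as np
import tensorflow as tf

# Load the TFLite model; the filename below is an assumption.
interpreter = tf.lite.Interpreter(model_path="knift_float.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

# 200 grayscale 32x32 patches, one per detected keypoint (placeholder data here).
patches = np.zeros((200, 32, 32, 1), dtype=np.float32)
interpreter.set_tensor(input_index, patches)
interpreter.invoke()
descriptors = interpreter.get_tensor(output_index)  # expected shape: (200, 40)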

Build Your Own Templates

We have provided a set of built-in planar templates in our demo. To make it easy for users to try their own templates, we also provide a tool to build such an index with user generated templates. index_building.pbtxt is a MediaPipe graph that accepts as its input a directory path containing a set of template images. Users can use this graph to compute KNIFT descriptors for all template images (which will be stored in a single file) by 1) replacing the index_proto_filename field in the main graph and the BUILD file and 2) rebuilding the APK file. For step-by-step instructions on how we created the dollar bill demo shown above, please refer to this documentation.

Acknowledgements

We would like to thank Jiuqiang Tang, Chuo-Ling Chang, Dan Gnanapragasam‎, Howard Zhou, Jianing Wei and Ming Guang Yong for contributing to this blog post.

Automate & Extend with Apps Script (Google Cloud for Student Developers)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


In the previous episode of our new Google Cloud for Student Developers video series, we introduced G Suite REST APIs, showing how to enhance your applications by integrating with Gmail, Drive, Calendar, Docs, Sheets, and Slides. However, not all developers prefer the lower-level style of programming requiring the use of HTTP, OAuth2, and processing the request-response cycle of API usage. Building apps that access Google technologies is open to everyone at any level, not just advanced software engineers.

Enhancing career readiness of non-engineering majors helps make our services more inclusive and helps democratize API functionality for a broader audience. For the budding data scientist, business analyst, DevOps staff, or other technical professionals who don't code every day as part of their profession, Google Apps Script was made just for you. Rather than thinking about development stacks, HTTP, or authorization, you access Google APIs with objects.

This video blends a standard "Hello World" example with various use cases where Apps Script shines, including cases of automation, add-ons that extend the functionality of G Suite editors like Docs, Sheets, and Slides, accessing other Google or online services, and custom functions for Google Sheets—the ability to add new spreadsheet functions.

One featured example demonstrates the power to reach multiple Google technologies in an expressive way: lots of work, not much code. What may surprise readers is that this entire app, written by a colleague years ago, consists of just 4 lines of code:

function sendMap() {
  // Read the address from cell A1 of the active sheet.
  var sheet = SpreadsheetApp.getActiveSheet();
  var address = sheet.getRange('A1').getValue();
  // Build a static map with a marker at that address and email it as an attachment.
  var map = Maps.newStaticMap().addMarker(address);
  GmailApp.sendEmail('[email protected]', 'Map', 'See below.',
      {attachments: [map]});
}

Apps Script shields its users from the complexities of authorization and "API service endpoints." Developers only need an object to interface with a service; in this case, SpreadsheetApp to access Google Sheets, and similarly, Maps for Google Maps plus GmailApp for Gmail. Viewers can build this sample line-by-line with its corresponding codelab (a self-paced, hands-on tutorial). This example helps student (and professional) developers...

  1. Build something useful that can be extended into much more
  2. Learn how to accomplish several tasks without a lot of code
  3. Imagine what else is possible with G Suite developer tools

For further exploration, check out this video as well as this one which introduces Apps Script and presents the same code sample with more details. (Note the second video emails the map's link, but the app has been updated to attach it instead; the code has been updated everywhere else.) You may also access the code at its open source repository. If that's not enough, learn about other ways you can use Apps Script from its video library. Finally, stay tuned for the next pair of episodes which will cover full sample apps, one with G Suite REST APIs, and another with Apps Script.

We look forward to seeing what you build with Google Cloud.

Local Home SDK Ready for Actions

Posted by Dave Smith, Developer Advocate

Last year we introduced the developer preview of the Local Home SDK, a suite of local technologies to enhance your smart home integration with Google Assistant by adding local fulfillment. Since then, we've been hard at work incorporating your feedback and getting the experience ready for production. Starting today, we're exiting developer preview and allowing you to submit local fulfillment apps along with your smart home Action through the Actions console using Local Home SDK v1.0.

Adding local fulfillment for your smart home Action.

As part of the Smart Home platform, local fulfillment extends your smart home Action and routes commands to devices through the local network, benefitting users with reduced latency and higher reliability. If a local path cannot be successfully established, commands fall back to your cloud fulfillment.

The Local Home SDK v1.0 supports discovery of local devices over Wi-Fi using the mDNS, UDP, or UPnP protocols. Once a local path is established, apps can send commands to devices using TCP, UDP, or HTTP. For more details on the API changes in SDK v1.0, check out the changelog.

Multi-scan configurations

Along with this release, we've also improved the scan configurations in the Actions console based on your feedback. You can now enter multiple scan configurations for a given project, enabling your local fulfillment app to handle multiple device families that may be using different discovery protocols.

New multi-scan configuration UI.

The new interface groups scan attributes by protocol and highlights required fields, making it clearer how to properly configure your project.

Submit your app

The Local Home SDK configuration page in the Actions console now accepts JavaScript bundles for your local fulfillment app. When you are ready to publish your app, upload your JavaScript files to the console and submit your Action. For more details on submitting your smart home Action for review, see the smart home launch guide.

Upload your local fulfillment app.

We've updated the test suite for smart home to support local fulfillment as well. Be sure to self-test your local fulfillment before submitting your updated smart home Action for review. You must provide updated test suite results with your certification request when you submit.

Get started

To learn more about enhancing your smart home Actions with local fulfillment, check out the Introduction to Local Home SDK and the developer guide. Build your first local fulfillment app with the codelab, and go deeper with the samples and API reference.

We want to hear from you, so continue sharing your feedback with us through the issue tracker, and engage with other smart home developers in the /r/GoogleAssistantDev community. Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. We can’t wait to see what you build!

Become a Developer Student Club Lead

Posted by Erica Hanson, Global Program Lead, Developer Student Clubs

Calling all student developers: If you’re someone who wants to lead, is passionate about technology, loves problem-solving, and is driven to give back to your community, then Developer Student Clubs has a home for you. Interest forms for the upcoming 2020-2021 academic year are now available. Ready to dive in? Get started at goo.gle/dsc-leads.

Want to know more? Check out these details below.

Image description: People holding up a Developer Student Clubs sign

What are Developer Student Clubs?

Developer Student Clubs (DSC) are university-based community groups for students interested in Google developer technologies. With programs that meet in person and online, students from all undergraduate and graduate programs with an interest in growing as a developer are welcome. By joining a DSC, students grow their knowledge in a peer-to-peer learning environment and build solutions for local businesses and their community.

Why should I join?

- Grow your skills as a developer with training content from Google.

- Think of your own project, then lead a team of your peers to scale it.

- Build prototypes and solutions for local problems.

- Participate in a global developer competition.

- Receive access to select Google events and conferences.

- Gain valuable experience.

Is there a Developer Student Club near me?

Developer Student Clubs are now in 68+ countries with 860+ groups. Find a club near you or learn how to start your own here.

When do I need to submit the interest form?

You may express interest through the form until May 15th, 11:59pm PST. Get started here.

Make sure to learn more about our program criteria.

Our DSC Leads are working on meaningful projects around the world. Watch this video of how one lead worked to protect her community from dangerous floods in Indonesia. Similarly, read this story of how another lead helped modernize healthcare in Uganda.

We’re looking forward to welcoming a new group of leads to Developer Student Clubs. Have a friend who you think is a good fit? Pass this article along. Wishing all developer students the best on the path towards building great products and community.

Submit interest form here.



*Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.