The Google Calendar API has changed how it manages API usage. Previously, queries were monitored and limited on a daily basis. As of May 2021, queries are monitored and limited on a per-minute basis. This introduces better behavior when your quota is exceeded: requests are rate-limited until quota becomes available, rather than all requests failing for the rest of the day. It also helps developers recognize quota-enforcement issues faster and shouldn't affect the performance of existing projects.
To view your usage and quota limits, take a look at the Google API Console.
To help you manage your quotas, we’ve put together a few helpful tips:
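One common tip, sketched below, is to catch rate-limit errors and retry with exponential backoff, so short bursts above the per-minute quota degrade gracefully instead of failing outright. This is a minimal illustration, assuming the google-api-python-client library and an already-authorized Calendar service object; the helper function name is ours, not part of the API.

```python
import random
import time

from googleapiclient.errors import HttpError


def list_events_with_backoff(service, calendar_id='primary', max_retries=5):
    """Calls events().list(), retrying on per-minute quota errors."""
    for attempt in range(max_retries):
        try:
            return service.events().list(calendarId=calendar_id).execute()
        except HttpError as err:
            # 403/429 responses signal that quota is temporarily exhausted;
            # wait 2^attempt seconds plus jitter before retrying.
            if err.resp.status in (403, 429):
                time.sleep((2 ** attempt) + random.random())
            else:
                raise
    raise RuntimeError('Calendar API quota still exhausted after retries')
```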
Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud
Migrating web frameworks
The Google Cloud team recently introduced a series of codelabs (free, self-paced, hands-on tutorials) and corresponding videos designed to help users on one of our serverless compute platforms modernize their apps, with an initial focus on our earliest users running their apps on Google App Engine. We kick off this content by showing users how to migrate from App Engine's webapp2 web framework to Flask, a popular framework in the Python community.
While users have always been able to use other frameworks with App Engine, webapp2 comes bundled with App Engine, making it the default choice for many developers. One new requirement in App Engine's next-generation platform (which launched in 2018) is that web frameworks must do their own routing, which, unfortunately, means that webapp2 is no longer supported, so here we are. The good news is that as a result, modern App Engine is more flexible, lets users develop in a more idiomatic fashion, and makes their apps more portable.
For example, while webapp2 apps can run on App Engine, Flask apps can run on App Engine, your servers, your data centers, or even on other clouds! Furthermore, Flask has more users, more published resources, and is better supported. If Flask isn't right for you, you can select from other WSGI-compliant frameworks such as Django and Pyramid.
In the previous video, we introduced developers to the baseline Python 2 App Engine NDB webapp2 sample app that we're taking through each of the migrations. In the video above, users see that the majority of the changes are in the main application handler, MainHandler:
The "diffs" between the webapp2 and Flask versions of the sample app
Upon (re)deploying the app, users should see no visible changes to the output from the original version:
VisitMe application sample output
Next steps
Today's video picks up from where we left off: the Python 2 baseline app in its Module 0 repo folder. We call this the "START". By the time the migration has completed, the resulting source code, called "FINISH", can be found in the Module 1 repo folder. If you mess up partway through, you can rewind back to the START, or compare your solution with ours, FINISH. We also hope to one day provide a Python 3 version as well as cover other legacy runtimes like Java 8, PHP 5, and Go 1.11 and earlier, so stay tuned!
All of the migration learning modules, corresponding videos (when published), codelab tutorials, and START and FINISH code can be found in the migration repo. The next video (Module 2) will cover migrating from App Engine's ndb library for Datastore to Cloud NDB. We hope you find all these resources helpful in your quest to modernize your serverless apps!
Now that we’ve packed up all of the virtual stages from Google I/O 2021, let's take a look at some of the highlights and new product announcements for App Actions, Conversational Actions, and Smart Home Actions. We also held a number of amazing live events and meetups during I/O, which we’ll summarize as well.
App Actions
App Actions allows developers to extend their Android apps to Google Assistant. For our Android developers, we are happy to announce that App Actions is now part of the Android framework. With the introduction of the beta shortcuts.xml configuration resource and our latest Google Assistant plugin, App Actions is moving closer to the Android platform.
Capabilities
Capabilities is a new Android framework API that lets you declare the types of actions users can take to launch your app and jump directly to performing a specific task. Assistant provides the first available concrete implementation of the capabilities API. You can use capabilities by creating shortcuts.xml resources and defining your capabilities there. Each capability specifies two things: how it's triggered and what to do when it's triggered. To add a capability, use built-in intents (BIIs), which are pre-built intents that provide all of the natural language understanding needed to map the user's input to individual fields. When a BII is matched by the user's speech, your capability triggers an Android intent that delivers the understood BII fields to your app, so you can determine what to show in response.
This framework integration is in the Beta release stage, and will eventually replace the original implementation of App Actions that uses actions.xml. If your app provides both the new shortcuts.xml and old actions.xml, the latter will be disregarded.
Voice shortcuts for Discovery
Google Assistant suggests relevant shortcuts to users and has made it easier for users to discover and add shortcuts by saying “Hey Google, shortcuts.”
You can use the Google Shortcuts Integration library, currently in beta, to push an unlimited number of dynamic shortcuts to Google, making your shortcuts visible to users as voice shortcuts. Assistant can then suggest relevant shortcuts, making it more convenient for users to interact with your Android app.
In-App Promo SDK
Not only can Assistant suggest shortcuts; with the In-App Promo SDK, also in beta, you can proactively suggest shortcuts in your app for actions the user can repeat with a voice command to Assistant. The SDK allows you to check whether the shortcut you want to suggest already exists for that user and, if not, prompt the user to create it.
Google Assistant plugin for Android Studio
To support testing capabilities, we launched the Google Assistant plugin for Android Studio. It contains an updated App Action Test Tool that creates a preview of your App Action, so you can test an integration before publishing it to the Play Store.
Conversational Actions
During the What's New in Google Assistant keynote, Rebecca Nathenson, Director of Product for the Google Assistant Developer Platform, mentioned several upcoming updates and changes for Conversational Actions.
Updates to Interactive Canvas
Over the coming weeks, we’ll introduce new functionality to Interactive Canvas. Canvas developers will be able to manage intent fulfillment client-side, removing the need for intermediary webhooks in some cases. For use cases which require server-side fulfillment, like transactions and account linking, developers will be able to opt-in to server-side fulfillment as needed.
We’re also introducing a new function, outputTts(), which allows you to trigger text-to-speech client-side. This should help reduce latency for end users.
Additionally, there will be updates to the APIs available to get and set storage for both the home and individual users, allowing for client-side storage of user information. You’ll be able to persist user information within your web app, which was previously only available for access by webhook.
These new features for Interactive Canvas will be made available soon as part of a developer preview for Conversational Actions Developers. For more details on these new features, check out the preview page.
Updates to Transaction UX for Smart Displays
Also coming soon to Conversational Actions: we’re updating the workflow for completing transactions, allowing users to complete transactions from their smart displays by confirming the CVC code from their chosen payment method. Watch our demo video showing the new transaction features on smart devices to get a feel for these changes.
Tips on Launching your Conversational Action
Make sure to catch our technical session Driving a successful launch for Conversational Actions to learn about some strategies for putting together a marketing team and go-to-market plan for releasing your Conversational Action.
AMA: Games on Google Assistant
If you’re interested in building Games for Google Assistant with Conversational Actions, you should check out the recording of our AMA, where Googlers answered questions from I/O attendees about designing, building, and launching games.
Smart Home Actions
The What's new in Smart Home keynote covered several updates for Smart Home Actions. Following our continued emphasis on quality smart home integrations with the updated policy launch, we added new features to help you build engaging, reliable Actions for your users.
Test Suite and Analytics
The updated Test Suite for Smart Home now supports automatic testing, without the use of TTS. Additionally, the Analytics dashboards have been expanded with more detailed logs and in-depth error reporting to help you more quickly identify any potential issues with your Action. For a deeper dive into these enhancements, try out the Debugging the Smart Home workshop. There are also two new debugging codelabs to help you get more familiar with using these tools to improve the quality of your Action.
Notifications
We expanded support for proactive notifications to include the device traits RunCycle and SensorState, so users can now be proactively notified of many more device events. We also announced the release of follow-up responses, which enable your smart devices to notify users asynchronously when a requested device change succeeds or fails.
WebRTC
We added support for WebRTC to the CameraStream trait. Smart camera users can now benefit from lower latency and half-duplex talk between devices. As mentioned in the keynote, we will also be making updates to the other currently supported protocols for smart cameras.
Bluetooth Seamless Setup
To improve the onboarding experience, developers can now enable BLE (Bluetooth Low Energy) for device onboarding with Bluetooth Seamless Setup. Google Home and Nest devices can act as local hubs to provision and register nearby devices for any Action configured with local fulfillment.
Matter
Project CHIP has officially rebranded as Matter. Once the IP-based connectivity protocol officially launches, we will support devices running the protocol. Watch the Getting started with Project CHIP tech session to learn more.
Ecosystem and Community
The women building voice AI and their role in the voice revolution
Voice AI is fundamentally changing how we interact with technology, and its future will be a product of the people who build it. Watch this session to hear about the talented women shaping the Voice AI field, including an interview with Lilian Rincon, Sr. Director of Product Management at Google. The session also covers strategies for achieving equal gender representation in Voice AI, an ambitious but essential goal.
AMA: How the Assistant Investment Program can help fund your startup
This "Ask Me Anything" session was hosted by the all-star team who runs the Google for Startups Accelerator: Voice AI. The team fielded questions from startups and investors around the world who are interested in building businesses based on voice technology. Check out the recording of this event here. The day after the AMA session, the 2021 cohort for the Voice AI accelerator had their demo day - you can catch the recording of their presentations here.
One of the perks of I/O being virtual this year was the ability to connect with students, hobbyists, and developers around the globe to discuss the current state of Smart Home, as well as some of the upcoming features. We hosted three meetups for the APAC, Americas, and EMEA regions and gathered some great feedback from the community.
Assistant Google Developers Experts Meetup
Every year we host an Assistant Google Developer Experts meetup to connect and share knowledge. This year we were able to invite everyone who is interested in building for Google Assistant to network and connect with one another. At the end, several attendees came together at the Assistant Sandbox for a virtual photo!
Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.
Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!
Posted by Richard Adem, UX Engineer at Google Arts & Culture
What is Art Filter?
One of the best ways to learn about global culture is by trying on famous art pieces using Google’s Augmented Reality technology on your mobile device. What does it feel like to wear a three thousand year old necklace, put on a sixteenth century Japanese helmet or don pearl earrings and pose in a Vermeer?
Google Arts & Culture have created a new feature called Art Filter allowing everyone to learn about culturally significant art pieces from around the world and put themselves inside famous paintings, normally safely displayed in a museum.
We teamed up with the MediaPipe team, which offers cross-platform, customizable ML solutions to combine ML with rendering to generate stunning visuals.
Working closely with the MediaPipe team to utilize their face mesh and 3D face transform allowed us to create custom effects for each of the artifacts we had chosen, and to easily display them as part of the Google Arts & Culture iOS and Android app.
Figure 1. The Art Filter feature.
The Challenges
We selected five iconic cultural treasures from around the world, which presented several challenges:
Creating 3D objects that can be viewed from all sides, using 2D references.
Some of the artworks we selected are 2D paintings, and we wanted everyone to be able to immerse themselves in them. Our team of 3D artists and designers took high-resolution gigapixel images from Google Arts & Culture and projected them onto 3D meshes to texture them. We also extended the 2D textures all the way around the 3D meshes while maintaining the style of the original artist. This means that when you turn your head, the previously hidden parts of the piece are viewable from every angle, mimicking how the object would look in real life.
Figure 3. The Van Gogh Self-Portrait filter - Musée d’Orsay, Paris.
Our cultural partners were immensely helpful during the creation of Art Filter. They sourced a huge number of reference images, allowing us to reproduce the pieces accurately from photographs taken at different angles and to make them appear to fit into the “real world” in AR (using size comparisons).
Layering elements of the effect along with the image of the user.
Art Filter takes an image of the user from their device’s camera and uses that to generate a 3D mesh of the user’s face. All processing of user images or video feeds is run entirely on device. We do not use this feature to identify or collect any personal biometric data; the feature cannot be used to identify an individual.
The image is then reused to texture the face mesh, generated in real-time on-device with MediaPipe Face Mesh, representing it in the virtual 3D world within the device. We then add virtual 2D and 3D layers around the face to complete the effect. The Tengu Helmet, for example, sits on top of the face mesh in 3D and is “attached” to the face mesh so it moves around when the user moves their head around. The Vermeer earrings with a headscarf and Frida Kahlo’s necklace are attached to the user’s image in a similar way. The Van Gogh effect works slightly differently since we still use a mesh of the user’s face but this time we apply a texture from the painting.
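As an aside, the face mesh building block described here is available in the open source MediaPipe Python package, so a rough sketch of the first step, getting per-face landmarks that effects can be anchored to, looks like the following. This is an illustration, not the Art Filter production code, and the input file name is a placeholder.

```python
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# Run Face Mesh on a single image; all processing happens on device.
with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    image = cv2.imread('portrait.jpg')  # placeholder input
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    mesh = results.multi_face_landmarks[0]
    # Each landmark is a normalized (x, y, z) point; a 3D effect such as
    # a helmet or earrings can be attached to these points so it follows
    # the user's head movement.
    print(f'{len(mesh.landmark)} landmarks detected')
```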
We use 2D elements to complete the scene as well, such as the backgrounds in the Kahlo and Van Gogh paintings. These were created by carefully separating the painting subjects from the background then placing them behind the user in 3D. You may notice that Van Gogh’s body is also 2D, shown as a “billboard” so that it always faces the camera.
Figure 4. Creating the 3D mesh showing layers and masks.
Using shaders for different materials such as the metal helmet.
To create realistic-looking materials we used physically based rendering shaders. You can see this on the Tengu helmet: it has a bumpy surface that is affected by the real-life light captured by the device. This requires creating extra textures, called texture maps, which use colors to represent how bumpy or shiny the 3D object should appear. Texture maps look like bright pink and blue images, but they tell the renderer about tiny details on the surface of the object without creating any extra polygons, which could slow down the frame rate of the feature.
Figure 5. User wearing Helmet with Tengu Mask and Crows - The Metropolitan Museum of Art.
Conclusion
We hope you enjoy the collection we have created in Art Filter. Please visit and try for yourself! You can also explore more amazing ML features with Google Arts & Culture such as Art Selfie and Art Transfer.
We hope to bring many more filters to the feature and are looking forward to new features from MediaPipe.
Google has updated its Passes API to enable a simple and secure way to store and access COVID vaccination and test cards on Android devices. Starting today, developers from healthcare organizations, government agencies and organizations authorized by public health authorities to distribute COVID vaccines and/or tests will have access to these APIs to create a digital version of COVID vaccination or test information. This will roll out initially in the United States followed by other countries.
Example COVID Cards from Healthvana, a company serving Los Angeles County
Once a user stores the digital version of the COVID Card to their device, they will be able to access it via a shortcut on their device home screen, even when they are offline or in areas that have weak internet service. To use this feature, the device needs to run Android 5 or later and be Play Protect certified. Installing the Google Pay app is not a requirement to access COVID Cards.
The COVID Card has been designed with privacy and security at its core.
Storing information: The user’s COVID vaccination and test information is stored on their Android device. If a user wants to access this information on multiple devices, the user will need to manually store it on each device. Google does not retain a copy of the user’s COVID vaccination or test information.
Sharing information: Users can choose to show their COVID Card to others. The information in the user’s COVID Card is not shared by Google with its various services or third parties and it is not used for targeting ads.
Securing information: A lock screen is required in order to store a COVID Card on a device. This is for added security and to protect the user’s personal information. When a user wants to access their COVID Card, they will be asked for the password, PIN, or biometric method set up for their Android device.
If you are a qualified provider, please sign up to share your interest here. And, for more information about COVID cards and their privacy and security features, please see the help center.
What do you think?
Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDevs and follow us @GooglePayDevs.
Posted by Badi Azad, Group Product Manager (@badiazad)
The Google Identity team is continually working to improve Google Account security and create a safer and more secure experience for our users. As part of that work, we recently introduced a new secure browser policy prohibiting Google OAuth requests in embedded browser libraries commonly referred to as embedded webviews. All embedded webviews will be blocked starting on September 30, 2021.
Embedded webview libraries are problematic because they allow a nefarious developer to intercept and alter communications between Google and its users by acting as a "man in the middle." An application embedding a webview can modify or intercept network requests, insert custom scripts that can potentially record every keystroke entered in a login form, access session cookies, or alter the content of the webpage. These libraries also allow the removal of key elements of a browser that hold user trust, such as the guarantee that the response originates from Google's servers, display of the website domain, and the ability to inspect the security of a connection. Additionally, the OAuth 2.0 for Native Apps guidelines from the IETF require that native apps must not use embedded user-agents such as webviews to perform authorization requests.
Embedded webviews not only affect account security, they could affect usability of your application. The sandboxed storage environment of an embedded webview disconnects a user from the single sign-on features they expect from Google. A full-featured web browser supports multiple tools to help a logged-out user quickly sign-in to their account including password managers and Web Authentication libraries. Google's users also expect multiple-step login processes, including two-step verification and child account authorizations, to function seamlessly when a login flow involves multiple devices, when switching to another app on the device, or when communicating with peripherals such as a security key.
Developers must register an appropriate OAuth client for each platform (desktop, Android, iOS, etc.) on which your app will run, in compliance with Google's OAuth 2.0 Policies. You can verify that the OAuth client ID used by your installed application is the most appropriate choice for your platform by visiting the Google API Console's Credentials page. A "Web application" client type in use by an Android application is an example of mismatched use. Reference our OAuth 2.0 for Mobile & Desktop Apps guide to properly integrate the appropriate client for your app's platform.
Applications opening all links and URLs inside an embedded webview should follow these instructions for Android, iOS, macOS, and captive portals:
Embedded webviews implementing or extending Android WebView do not comply with Google's secure browser policy for its OAuth 2.0 Authorization Endpoint. Apps should allow general, third-party links to be handled by the default behaviors of the operating system, enabling a user's preferred routing to their chosen default web browser or another developer's preferred routing to its installed app through Android App Links. Apps may alternatively open general links to third-party sites in Android Custom Tabs.
Embedded webviews implementing or extending WKWebView, or the deprecated UIWebView, do not comply with Google's secure browser policy for its OAuth 2.0 Authorization Endpoint. Apps should allow general, third-party links to be handled by the default behaviors of the operating system, enabling a user's preferred routing to their chosen default web browser or another developer's preferred routing to its installed app through Universal Links. Apps may alternatively open general links to third-party sites in SFSafariViewController.
If your computer network intercepts network requests, redirecting to a web portal supporting authorization with a Google Account, your web content could be displayed in an embedded webview controlled by a captive network assistant. You should provide potential viewers instructions on how to access your network using their default web browser. For more information reference the Google Account Help article Sign in to a Wi-Fi network with your Google Account.
New IETF standards adopted by Android and iOS may help users access your captive pages in a full-featured web browser. Captive networks should integrate the proposed IETF standard Captive-Portal Identification in DHCP and Router Advertisements (RAs) to inform clients that they are behind a captive portal enforcement device when joining the network, rather than relying on traffic interception. Networks should also integrate the proposed IETF standard Captive Portal API to quickly direct clients to a required portal URL to access the Internet. For more information, reference Captive portal API support for Android and Apple's How to modernize your captive network developer articles.
If you're a developer that currently uses an embedded webview for Google OAuth 2.0 authorization flows, be aware that embedded webviews will be blocked as of September 30, 2021. To verify whether the authorization flow launched by your application is affected by these changes, test your application for compatibility and compliance with the policies outlined in this post.
You can add a query parameter to your authorization request URI to test for potential impact to your application before September 30, 2021. The following steps describe how to adjust your current requests to Google's OAuth 2.0 Authorization Endpoint to include an additional query parameter for testing purposes.
Go to where you send requests to Google's OAuth 2.0 Authorization Endpoint. Example URI: https://accounts.google.com/o/oauth2/v2/auth
Add the disallow_webview parameter with a value of true to the query component of the URI. Example: disallow_webview=true
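Concretely, building the test URI might look like the following sketch, where the client ID, redirect URI, and scopes are placeholders for your app's real values:

```python
from urllib.parse import urlencode

# Placeholder OAuth parameters; substitute your app's real values.
params = {
    'client_id': 'YOUR_CLIENT_ID.apps.googleusercontent.com',
    'redirect_uri': 'https://example.com/oauth2callback',
    'response_type': 'code',
    'scope': 'openid email',
    # Test-only parameter described in this announcement:
    'disallow_webview': 'true',
}
auth_uri = 'https://accounts.google.com/o/oauth2/v2/auth?' + urlencode(params)
print(auth_uri)
```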
An implementation affected by the planned changes will see a disallowed_useragent error when loading Google's OAuth 2.0 Authorization Endpoint, with the disallow_webview=true query string, in an embedded webview instead of the authorization flows currently displayed. If you do not see an error message while testing the effect of the new embedded webview policies, your app's implementation might not be impacted by this announcement.
Note: A website's ability to request authorization from a Google Account may be impacted due to another developer's decision to use an embedded webview in their app. For example, if a messaging or news application opens links to your site in an embedded webview, the features available on your site, including Google OAuth 2.0 authorization flows, may be impacted. If your site or app is impacted by the implementation choice of another developer please contact that developer directly.
A warning message may be displayed in non-compliant authorization requests after August 30, 2021. The warning message will include the user support email defined in your project's OAuth consent screen in Google API Console and direct the user to visit our Sign in with a supported browser support article.
Developers may acknowledge the upcoming enforcement and suppress the warning message by passing a specific query parameter to the authorization request URI. The following steps explain how to adjust your authorization requests to include the acknowledgement parameter:
Go to where you send requests to Google's OAuth 2.0 Authorization Endpoint. Example URI: https://accounts.google.com/o/oauth2/v2/auth
Add an ack_webview_shutdown parameter with a value of the enforcement date: 2021-09-30. Example: ack_webview_shutdown=2021-09-30
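The change is the same shape as the test parameter above; a sketch with placeholder OAuth parameters:

```python
from urllib.parse import urlencode

params = {
    'client_id': 'YOUR_CLIENT_ID.apps.googleusercontent.com',  # placeholder
    'redirect_uri': 'https://example.com/oauth2callback',      # placeholder
    'response_type': 'code',
    'scope': 'openid email',
    # Acknowledges the documented enforcement date:
    'ack_webview_shutdown': '2021-09-30',
}
auth_uri = 'https://accounts.google.com/o/oauth2/v2/auth?' + urlencode(params)
```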
A successful request to Google's OAuth 2.0 Authorization Endpoint including the acknowledgement query parameter and enforcement date will suppress the warning message in non-compliant authorization requests. All non-compliant authorization requests will display a disallowed_useragent error when loading Google's OAuth 2.0 Authorization Endpoint after the enforcement date.
Posted by Noa Havazelet, Program Manager, Google Developer Student Clubs, UK & Ireland
With 1,600 students by his side, Jack Lee grew the largest Google Developer Student Club in the world in just six months at the London School of Economics (LSE). A lifelong athlete who loves leading teams, Jack saw that reigniting his university’s GDSC would be a great opportunity to make a large impact on the local tech scene. With a heavy focus on partnerships, Jack connected members of his club with leaders at top companies and other student groups across Scotland, France, Norway, Canada, and Nigeria. These collaborations enabled students to practice networking while gaining access to key internships.
Learn more about Jack and his club below.
Image of Jack Lee speaking at a GDSC event
Student-to-student mentorship with impact
Leaders like Jack Lee make Google Developer Student Clubs around the world special by providing a trusted and fun space for student-to-student mentorship. When students step up to help their peers, a strong camaraderie and support system forms beyond the classroom.
One of the secrets to Jack’s success was to appeal to both computer science students as well as those with a non-technical background, like business majors. To inspire more students with different backgrounds to join the club, Jack put together a team of additional student leaders. Under his leadership, this team had the freedom to independently build tech-focused events that would interest students across the university.
Image of GDSC LSE team
By the end of the first semester, Jack’s approach was working: the club had hosted over 80 events, covering a wide range of topics from front-end web development to career talks with financial firms.
The intersection of students with different backgrounds inspired club members to work together on community projects, utilizing their different skills. In fact, a few club members formed teams to tackle one of the United Nations' 17 Sustainable Development Goals. As part of the Google Developer Student Clubs 2021 Solution Challenge, students from the London School of Economics developed prototype solutions for NGOs on 1) wildfire analysis using TensorFlow, 2) raising donations and grant access, and 3) increasing voter registrations.
As more students continued to join their GDSC, Jack decided to up the tempo to keep the momentum going.
Connecting students to companies
Since the London School of Economics is not solely a tech-focused university, Jack requested support from a team at Google for Startups. Together they reached out to some of the world’s largest firms and startups to collaborate on events and specialized programs for the student club. Jack’s GDSC established relationships with six partners and three local sponsors from startups, NGOs, and financial firms. All these partners contributed to nearly 30 events throughout the academic year, which included:
Introductory Python courses
Mentorship sessions
Networking events
Talks with CEOs
Panel talks across industries
These events started catching the attention of students across Europe and Asia, with some students who could not afford to attend university reaching out for technical learning resources and opportunities.
Connecting 150 students to mentors from different startups is one of the achievements that makes Jack and the club leaders most proud.
This is yet another example of how Jack’s determination to grow a stronger community led him to build a global Google Developer Student Club that left a profound impact on his fellow students.
If you’re also a student and want to join a Google Developer Student Club community like this, find one near you here.
Posted by Rodrigo Akira Hirooka, Program Manager, Google Developer Groups Latin America
Lorena Locks is on a mission to grow the LGBTQIA+ tech community in Brazil. Her inspiration came from hosting Google Developer Group (GDG) Floripa meetups with her friend Catarina, where they were able to identify a need in their community.
“We felt there wasn't a forum to meet people in the tech industry that reflected ourselves. So we decided to think bigger.”
Image from GDG Floripa event
Pride Week at GDG Floripa, Brazil
As a Women Techmakers Ambassador and Google Developer Group lead in Floripa, Brazil, Lorena worked with the local community to create a week of special events, including over 12 talks and sessions centered on empowering the LGBTQIA+ experience in tech.
The events took place every night at 7pm from June 21 to 25 and focused on creating inclusive representation and building trust among developer communities.
Lorena’s commitment to this underrepresented group gained the attention of many local leaders in tech who identify as LGBTQIA+ and who volunteered as speakers during Pride Week.
By creating spaces to talk about important LGBTQIA+ topics in tech, Pride Week with Google Developer Groups Floripa included sessions on:
Spotting binary designs in products
How to build inclusive tech teams
Being an LGBTQIA+ manager
Developing 'Nohs Somos', an app for the LGBTQIA+ community
The best practices for D&I
General Personal Data Protection Law and inclusive gender questions on forms
Speakers in photo: Lorena Locks and Catarina Schein
With one hundred percent of the speakers at these events coming from the LGBTQIA+ community, Pride Week at GDG Floripa was a high-impact program that has gone on to inspire GDGs around the world.
If you want to learn more about how to get involved in Google Developer Group communities like this one, visit the site here.
We're always excited to share updates to our Coral platform for building edge ML applications. In this post, we have some interesting demos, interfaces, and tutorials to share, and we'll start by pointing you to an important software update for the Coral Dev Board.
Important update for the Dev Board / SoM
If you have a Coral Dev Board or Coral SoM, please install our latest Mendel update as soon as possible to receive a critical fix to part of the SoC power configuration. To get it, just log onto your board and install all available updates with Mendel's standard package tooling (typically sudo apt-get update followed by sudo apt-get dist-upgrade).
This installs a patch from NXP for the Dev Board / SoM's SoC; without it, the SoC may be overstressed and the lifetime of the device could be reduced. If you recently flashed your board with the latest system image, you might already have this fix (we also updated the flashable image today), but it never hurts to fetch all updates, as described above.
Note: This update does not apply to the Dev Board Mini.
Manufacturing demo
We recently published the Coral Manufacturing Demo, which demonstrates how to use a single Coral Edge TPU to simultaneously accomplish two common manufacturing use-cases: worker safety and visual inspection.
The demo ships with two specific videos and tasks (worker keepout detection and apple quality grading), but it is designed to be easily customized with different inputs and tasks. The demo, written in C++, requires OpenGL and primarily targets the x86 systems that are prevalent in manufacturing gateways, although ARM Cortex-A systems, like the Coral Dev Board, are also supported.
WebCoral
We've been working hard to make ML acceleration with the Coral Edge TPU available for most popular systems. So we're proud to announce support for WebUSB, allowing you to use the Coral USB Accelerator directly from Chrome. To get started, check out our WebCoral demo, which builds a webpage where you can select a model and run an inference accelerated by the Edge TPU.
New models repository
We recently released a new models repository that makes it easier to explore the various trained models available for the Coral platform, including image classification, object detection, semantic segmentation, pose estimation, and speech recognition. Each family page lists the available models, with details about training dataset, input size, latency, accuracy, model size, and other parameters, making it easier to select the best fit for the application at hand. Each page also includes links to training scripts and example code to help you get started. Or, for an overview of all our models, you can see them all on one page.
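As a rough sketch of how a classification model from the repository can be run on the Edge TPU with the pycoral library (the model and image file names below are placeholders):

```python
from PIL import Image

from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load a compiled Edge TPU model (placeholder file name).
interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to whatever size the model expects.
image = Image.open('parrot.jpg').resize(common.input_size(interpreter))
common.set_input(interpreter, image)

# Run inference on the Edge TPU and print the top results.
interpreter.invoke()
for klass in classify.get_classes(interpreter, top_k=3):
    print(klass.id, klass.score)
```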
Transfer learning tutorials
Even with our collection of pre-trained models, it can sometimes be tricky to create a task-specific model that's compatible with our Edge TPU accelerator. To make this easier, we've released some new Google Colab tutorials that allow you to perform transfer learning for object detection, using MobileDet and EfficientDet-Lite models. You can find these and other Colabs in our GitHub Tutorials repo.
We are excited to share all that Coral has to offer as we continue to evolve our platform. Keep an eye out for more software and platform related news coming this summer. To discover more about our edge ML platform, please visit Coral.ai and share your feedback at [email protected].
Last year, we launched the inaugural Google for Startups Accelerator: Women Founders program in North America to help women-led startups identify and solve technical challenges while scaling their companies. The inaugural cohort also received tailored programming to address some of the longstanding barriers that women founders face.
Women founders remain underrepresented in the tech startup ecosystem because they often lack access to the resources needed to start, build, and grow their businesses. The COVID-19 pandemic exacerbated these structural barriers by disproportionately impacting women in the workforce, and research shows women were more vulnerable to its economic effects because of existing gender disparities.
For women founders, access to capital is one of the major challenges to launching a business. A recent report showed that women-led startups received a mere 2.3% of global venture capital funding in 2020, down from 2.8% the year before.
The Google for Startups Accelerator: Women Founders program aims to help bridge the gap and create opportunities for women founders to succeed. Beyond mentorship and technical project support, the accelerator also includes deep dives and workshops focused on product design, customer acquisition, and leadership development for founders. Participants will also hear from a roster of speakers and facilitators who deliver both technical and nontechnical programming for women-led startups.
Applications for the second Google for Startups Accelerator: Women Founders program are now open until July 19 for North American applicants. Approximately 10-12 startups with at least one woman founder will be selected from across North America. The accelerator runs from September through December 2021.
To learn more about the program and to apply, visit the website.