
MediaPipe 3D Face Transform

Posted by Kanstantsin Sokal, Software Engineer, MediaPipe team

Earlier this year, the MediaPipe Team released the Face Mesh solution, which estimates the approximate 3D face shape via 468 landmarks in real-time on mobile devices. In this blog, we introduce a new face transform estimation module that establishes a researcher- and developer-friendly semantic API useful for determining the 3D face pose and attaching virtual objects (like glasses, hats or masks) to a face.

The new module establishes a metric 3D space and uses the landmark screen positions to estimate common 3D face primitives, including a face pose transformation matrix and a triangular face mesh. Under the hood, a lightweight statistical analysis method called Procrustes Analysis powers robust, performant, and portable logic. The analysis runs on the CPU and adds minimal speed and memory overhead on top of the original Face Mesh solution.

Figure 1: An example of virtual mask and glasses effects, based on the MediaPipe Face Mesh solution.

Introduction

The MediaPipe Face Landmark Model performs single-camera face landmark detection in the screen coordinate space: the X and Y coordinates are normalized screen coordinates, while the Z coordinate is relative and is scaled like the X coordinate under the weak perspective projection camera model. While this format is well-suited for some applications, it does not directly enable crucial features like aligning a virtual 3D object with a detected face.

The newly introduced module moves away from the screen coordinate space towards a metric 3D space and provides the necessary primitives to handle a detected face as a regular 3D object. By design, you'll be able to use a perspective camera to project the final 3D scene back into the screen coordinate space with a guarantee that the face landmark positions are not changed.

Metric 3D Space

The Metric 3D space established within the new module is a right-handed orthonormal metric 3D coordinate space. Within the space, there is a virtual perspective camera located at the space origin and pointed in the negative direction of the Z-axis. It is assumed that the input camera frames are observed by exactly this virtual camera, and therefore its parameters are later used to convert the screen landmark coordinates back into the Metric 3D space. The virtual camera parameters can be set freely; however, for better results it is advised to set them as close to the real physical camera parameters as possible.
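For intuition, the virtual camera follows the standard pinhole model. A minimal sketch of the projection being inverted, assuming illustrative intrinsics $f_x, f_y$ (focal lengths) and $c_x, c_y$ (principal point), with the camera at the origin looking down the negative Z-axis:

$$
x = c_x + f_x \frac{X}{-Z}, \qquad y = c_y + f_y \frac{Y}{-Z}, \qquad Z < 0,
$$

so, given an estimated metric depth $Z$ for a landmark, its metric coordinates follow as $X = (x - c_x)(-Z)/f_x$ and $Y = (y - c_y)(-Z)/f_y$. This is the generic pinhole relation, not the module's exact derivation.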

Figure 2: A visualization of multiple key elements in the metric 3D space. Created in Cinema 4D.

Canonical Face Model

The Canonical Face Model is a static 3D model of a human face, which follows the 3D face landmark topology of the MediaPipe Face Landmark Model. The model serves two important functions:

  • Defines metric units: the scale of the canonical face model defines the metric units of the Metric 3D space. The default canonical face model uses centimeters as its metric unit;
  • Bridges static and runtime spaces: the face pose transformation matrix is, in fact, a linear map from the canonical face model into the runtime face landmark set estimated on each frame. This way, virtual 3D assets modeled around the canonical face model can be aligned with a tracked face by applying the face pose transformation matrix to them, as sketched below.
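For illustration, here is a minimal NumPy sketch of that alignment step. The names `asset_vertices` and `pose_matrix` are hypothetical stand-ins for an asset's vertex array and the 4x4 face pose transformation matrix produced by the module; this is a sketch of the idea, not the module's actual code.

```python
import numpy as np

def align_asset(asset_vertices: np.ndarray, pose_matrix: np.ndarray) -> np.ndarray:
    """Apply a 4x4 face pose transformation matrix to (N, 3) asset vertices.

    asset_vertices: XYZ positions of a virtual asset modeled around the
                    canonical face model (e.g. glasses).
    pose_matrix:    rigid transform from canonical space into the runtime
                    metric landmark space, estimated on the current frame.
    """
    n = asset_vertices.shape[0]
    # Promote to homogeneous coordinates so the translation part applies too.
    homogeneous = np.hstack([asset_vertices, np.ones((n, 1))])  # (N, 4)
    return (homogeneous @ pose_matrix.T)[:, :3]
```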

Face Transform Estimation

The face transform estimation pipeline is a key component responsible for estimating face transform data within the Metric 3D space. On each frame, the following steps are executed in the given order:

  • Face landmark screen coordinates are converted into Metric 3D space coordinates;
  • The face pose transformation matrix is estimated as a rigid linear mapping from the canonical face metric landmark set onto the runtime face metric landmark set in a way that minimizes the difference between the two (a rough sketch of this step follows the list);
  • A face mesh is created using the runtime face metric landmarks as the vertex positions (XYZ), while both the vertex texture coordinates (UV) and the triangular topology are inherited from the canonical face model.
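As a rough illustration of the matrix estimation step, here is a generic rigid Procrustes (Kabsch-style) fit in NumPy. This is the textbook formulation, not the module's actual implementation:

```python
import numpy as np

def estimate_rigid_transform(canonical: np.ndarray, runtime: np.ndarray) -> np.ndarray:
    """Least-squares rigid map (rotation + translation) taking (N, 3)
    canonical landmarks onto (N, 3) runtime landmarks, as a 4x4 matrix."""
    mu_c, mu_r = canonical.mean(axis=0), runtime.mean(axis=0)
    # Cross-covariance of the centered landmark sets.
    h = (canonical - mu_c).T @ (runtime - mu_r)
    u, _, vt = np.linalg.svd(h)
    # Reflection guard keeps the result a proper rotation (det = +1).
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rotation = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    translation = mu_r - rotation @ mu_c
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose
```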

Effect Renderer

The Effect Renderer is a component that serves as a working example of a face effect renderer. It targets the OpenGL ES 2.0 API to enable real-time performance on mobile devices and supports the following rendering modes:

  • 3D object rendering mode: a virtual object is aligned with a detected face to emulate an object attached to the face (example: glasses);
  • Face mesh rendering mode: a texture is stretched on top of the face mesh surface to emulate a face painting technique.

In both rendering modes, the face mesh is first rendered as an occluder straight into the depth buffer. This step helps create a more believable effect by hiding invisible elements behind the face surface.
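To make the occlusion pass concrete, here is a minimal sketch of the classic depth-only technique using PyOpenGL. The actual renderer targets OpenGL ES 2.0 from native code; `draw_face_mesh` and `draw_effect` are hypothetical draw callbacks:

```python
from OpenGL.GL import (GL_DEPTH_TEST, GL_FALSE, GL_TRUE,
                       glColorMask, glDepthMask, glEnable)

def render_frame(draw_face_mesh, draw_effect):
    glEnable(GL_DEPTH_TEST)

    # Pass 1: render the face mesh into the depth buffer only. Color writes
    # are disabled, so the face stays invisible but still occludes.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    glDepthMask(GL_TRUE)
    draw_face_mesh()

    # Pass 2: render the virtual effect with color writes re-enabled;
    # fragments behind the face surface now fail the depth test.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    draw_effect()
```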

Figure 3: An example of face effects rendered by the Face Effect Renderer.

Using the Face Transform Module

The face transform estimation module is available as part of the MediaPipe Face Mesh solution. It comes with face effect application examples, available as graphs and mobile apps for Android and iOS. If you wish to go beyond the examples, the module contains generic calculators and subgraphs, which can be flexibly applied to solve specific use cases in any MediaPipe graph. For more information, please visit our documentation.

Follow MediaPipe

We look forward to publishing more blog posts related to new MediaPipe pipeline examples and features. Please follow the MediaPipe label on the Google Developers Blog and the Google Developers Twitter account (@googledevs).

Acknowledgements

We would like to thank Chuo-Ling Chang, Ming Guang Yong, Jiuqiang Tang, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich and Matthias Grundmann for contributing to this blog post.

Announcing DevFest 2020

Posted by Jennifer Kohl, Program Manager, Developer Community Programs


On October 16-18, thousands of developers from all over the world are coming together for DevFest 2020, the largest virtual weekend of community-led learning on Google technologies.

As people around the world continue to adapt to spending more time at home, developers yearn for community now more than ever. In years past, DevFest was a series of in-person events held over a season. For 2020, the community is coming together in a whole new way – virtually – over one weekend to keep developers connected when they may need it the most.

The speakers

The magic of DevFest comes from the people who organize and speak at the events: developers with various backgrounds and skill levels, all with their own unique perspectives. In different parts of the world, you can find DevFest sessions in many local languages. DevFest speakers are made up of various types of technologists, including kid developers, self-taught programmers from rural areas, and CEOs and CTOs of startups. DevFest also features a wide range of speakers from Google, Women Techmakers, Google Developer Experts, and more. Together, these friendly faces, with many different perspectives, create a unique and rich developer conference.

The sessions and their mission

Hosted by Google Developer Groups, this year’s sessions include technical talks and workshops from the community, and a keynote from Google Developers. Through these events, developers will learn how Google technologies help them develop, learn, and build together.

Sessions will cover multiple technologies, such as Android, Google Cloud Platform, Machine Learning with TensorFlow, Web.dev, Firebase, Google Assistant, and Flutter.

At its core, Google Developers believes that community-led developer events like these are an integral part of the advancement of technology in the world.

For this reason, Google Developers supports the community-led efforts of Google Developer Groups and their annual tentpole event, DevFest. Google provides esteemed speakers from the company and custom technical content produced by developers at Google. The impact of DevFest is really driven by the grassroots, passionate GDG community organizers who volunteer their time. Google Developers is proud to support them.

The attendees

During DevFest 2019, 138,000+ developers participated across 500+ DevFests in 100 countries. While 2020 is a very different year for events around the world, GDG chapters are galvanizing their communities to come together virtually for this global moment. The excitement for DevFest continues as more people seek new opportunities to meet and collaborate with like-minded, community-oriented developers in our local towns and regions.

Join the conversation on social media with #DevFest.

Sign up for DevFest at goo.gle/devfest.

Still curious? Check out these popular talks from DevFest 2019 events around the world...

Google Nest Device Access Console now available for partners and individuals

Posted by Gabriel Rubinsky, Senior Product Manager

Today, we’re excited to announce that the Device Access Console is now available.

The Device Access program lets individuals and qualified partners securely access and control Nest products with their apps and solutions.

At the heart of the Device Access program is the Smart Device Management API. Since we announced the program, Alarm.com, Control4, DISH, OhmConnect, NRG Energy, and Vivint Smart Home have successfully completed the Early Access Program (EAP) with Nest thermostat, camera, or doorbell traits. In the coming months, we expect additional devices to be supported and more smart home partners to launch their new integrations as well.

Enhanced privacy and security

The Device Access program is built on a foundation of privacy and security. The program requires partners to submit qualified use cases and complete a security assessment before they are allowed to use the Smart Device Management API commercially. This process gives our users confidence that commercial partners offering integrated Nest solutions have data protections and safeguards in place that meet our privacy and security standards.

Nest device access and control

The Device Access program currently allows qualified partners to integrate directly with Nest devices, enable control of thermostats, access and view camera feeds, and receive doorbell notifications with images. All qualified partner solutions and services will require end-user consent before being able to access, control, and manage Nest devices as part of their service offerings, either through a partner client app or service platform. Ultimately, this gives users more choice in how to control their home and their own generated data.
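For a flavor of what a Device Access integration looks like, here is a minimal sketch that issues a thermostat command through the Smart Device Management API using Python's requests library. The project ID, device ID, and token are placeholders, and the command names should be verified against the Device Access documentation:

```python
import requests

PROJECT_ID = "your-device-access-project-id"  # placeholder
DEVICE_ID = "your-device-id"                  # placeholder
ACCESS_TOKEN = "your-oauth2-access-token"     # token with the SDM scope

# Endpoint shape follows the public Smart Device Management API.
url = ("https://smartdevicemanagement.googleapis.com/v1/"
       f"enterprises/{PROJECT_ID}/devices/{DEVICE_ID}:executeCommand")

# Ask a Nest thermostat to heat to 22 degrees Celsius.
body = {
    "command": "sdm.devices.commands.ThermostatTemperatureSetpoint.SetHeat",
    "params": {"heatCelsius": 22.0},
}

response = requests.post(
    url, json=body,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
```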

If you’re a developer or a Nest user interested in the Device Access program or access to the sandbox development environment,* you can find more information on our Device Access site.

  • Device Access for Commercial Developers

    The Device Access program allows trusted partners to offer access, management, and control of Nest devices within the partner’s app, solution, and ecosystem. It allows developers to test all API traits in the sandbox environment, before moving forward with commercial integration. Learn more

  • Device Access for Individuals

    If you’re an individual smart home enthusiast, you can register to access the sandbox development environment, allowing you to directly control your own Nest devices through your private integrations and automations. Learn more

We’re doing the work to make Nest devices more secure and protect user privacy long into the future. This means expanding privacy and data security programs, and delivering flexibility for our customers to use thousands of products from partners to create a connected, helpful home.



* Registration requires acceptance of the Google API and Nest Device Access Sandbox Terms of Service, along with a one-time, non-refundable nominal fee per account.


Google Pay picks Flutter to drive its global product development

Posted by David Ko, Engineering Director; Jeff Lim, Software Engineer; Pankaj Gupta, Director of Engineering; Will Horn, Software Engineer

Three years ago, when we launched Google Pay India (then called Tez), our vision was to create a simple and secure payment app for everyone in India. We started with the premise of making payments simple and built a user interface that made making payments as easy as starting a conversation. The simplicity of the design resonated with users instantly and over time, we have added functionality to help users do more than just make payments. Today users can pay their bills, recharge their phones, get loans instantly through banks, buy train tickets and much more all within the app. Last year, we also launched the Spot Platform in India, which allows merchants to create branded experiences within the Google Pay app so they can connect with their customers in a more engaging way.

As we looked at scaling our learnings from India to other parts of the world, we wanted a fast and efficient development environment that was modern and engaging, with the flexibility needed to keep the UI clean. More importantly, we wanted one that enabled us to write once and deploy to both iOS and Android, reaching a wide variety of users.

It was clear that we would need to build it and ensure that it worked across a wide variety of payment rails, infrastructure, and operating systems. But with the momentum we had for Google Pay in India and the fast-evolving product features, we had limited engineering resources to put behind this effort.

After evaluating various options, Flutter was the obvious choice. The three things that made it click for us were:

  • We could write once in Dart and deploy on both iOS and Android, which led to a uniform, best-in-class experience on both platforms;
  • The just-in-time compiler with hot reload during development enabled rapid iteration on the UI, which tremendously increased developer efficiency; and
  • Ahead-of-time compilation ensured high-performance deployment.

Now the task was to get it done. We started with a small team of three software engineers from both Android and iOS. Those days were focused and intense. To start with, we created a vertical slice of the app — home page, chat, and payments (with the critical native plugins for payments in India). The team first tried a hybrid approach, then decided on a clean rewrite, as the hybrid approach was not scalable.

We ran a few small sprints for other engineers on the team to give them an opportunity to rewrite something in Flutter and provide feedback. Everyone loved Flutter — you could see the thrill on people’s faces as they talked about how fast it was to build a user interface. One of the most exciting things was that the team could get instant feedback while developing. We could also leverage the high quality widgets that Flutter provided to make development easier.

After carefully weighing the risks and our case for migration, we decided to go all in with Flutter. It was a monumental rewrite of a moving target: the existing app continued to evolve while we were rewriting features. After many months of hard work, the Google Pay Flutter implementation is now available in open beta in India and Singapore. Users in India and Singapore can visit the Google Play Store page for Google Pay to opt into the beta program and experience the latest app built on Flutter. Next, we look forward to launching Google Pay on Flutter to everyone across the world on iOS and Android.


We hope this gives you a fair idea of how to approach and launch a complete rewrite of an active app that is used by millions of users and businesses of all sizes. It would not have been possible for us to deliver this without Flutter’s continued advances on the platform. Huge thanks to the Flutter team, as today, we are standing on their shoulders!

When fully migrated, Google Pay will be one of the largest production deployments on the Flutter platform. We look forward to sharing more learnings from our transition to Flutter in the future.


Doubling down on the edge with Coral’s new accelerator

Posted by The Coral Team


Moving into the fall, the Coral platform continues to grow with the release of the M.2 Accelerator with Dual Edge TPU. Its first application is in Google’s Series One room kits, where it helps remove interruptions and makes the audio clearer for better video meetings. To help even more people build products with Coral intelligence, we’re dropping the prices on several of our products. And for those looking to level up their at-home video production, we’re sharing a demo of a pose-based AI director that makes multi-camera video easier to produce.

Coral M.2 Accelerator with Dual Edge TPU

The newest addition to our product family brings two Edge TPU co-processors to systems in an M.2 E-key form factor. While the design requires a dual bus PCIe M.2 slot, it brings enhanced ML performance (8 TOPS) to tasks such as running two models in parallel or pipelining one large model across both Edge TPUs.

The ability to scale across multiple edge accelerators isn’t limited to only two Edge TPUs. As edge computing expands to local data centers, cell towers, and gateways, multi-Edge TPU configurations will be required to help process increasingly sophisticated ML models. Coral allows the use of a single toolchain to create models for one or more Edge TPUs that can address many different future configurations.
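As a rough sketch of addressing multiple Edge TPUs from one toolchain, here is how two models might be run in parallel, one per Edge TPU, using the PyCoral library. The model paths are placeholders, and the device strings assume the two co-processors enumerate as PCIe devices 0 and 1:

```python
import threading

import numpy as np
from pycoral.utils.edgetpu import make_interpreter

def run_model(model_path, device, input_data, results):
    # Bind this interpreter to a specific Edge TPU, e.g. "pci:0" or "pci:1".
    interpreter = make_interpreter(model_path, device=device)
    interpreter.allocate_tensors()
    interpreter.set_tensor(interpreter.get_input_details()[0]["index"], input_data)
    interpreter.invoke()
    results[device] = interpreter.get_tensor(
        interpreter.get_output_details()[0]["index"])

# Placeholder Edge TPU-compiled models; one dummy uint8 image per model.
jobs = [("model_a_edgetpu.tflite", "pci:0"), ("model_b_edgetpu.tflite", "pci:1")]
frame = np.zeros((1, 224, 224, 3), dtype=np.uint8)
results = {}
threads = [threading.Thread(target=run_model, args=(path, dev, frame, results))
           for path, dev in jobs]
for t in threads:
    t.start()
for t in threads:
    t.join()
```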

A great example of how the Coral M.2 Accelerator with Dual Edge TPU is being used is in the Series One meeting room kits for Google Meet.

The new Series One room kits for Google Meet run smarter with Coral intelligence


Google’s new Series One room kits use our Coral M.2 Accelerator with Dual Edge TPU to bring enhanced audio clarity to video meetings. TrueVoice®, a multi-channel noise cancellation technology, minimizes distractions to ensure every voice is heard with up to 44 channels of echo and noise cancellation, making distracting sounds like snacking or typing on a keyboard a concern of the past.

Enabling the clearest possible communication in challenging environments was the target for the Google Meet hardware team. The consideration of what makes a challenging environment was not limited to unusually noisy environments, such as lunchrooms doubling as conference rooms. Any conference room can present challenging acoustics that make it difficult for all participants to be heard.

The secret to clarity without expensive and cumbersome equipment is to use virtual audio channels and AI driven sound isolation. Read more about how Coral was used to enhance and future-proof the innovative design.

Expanding the AI edge

Earlier this year, we reduced the prices of our prototyping devices and sensors. We are excited to share further price drops on more of our products. Our System-on-Module is now available for $99.99, and our Mini PCIe Accelerator, M.2 Accelerator A+E Key, and M.2 Accelerator B+M Key are now available at $24.99. We hope these lower prices will make our edge AI more accessible to more creative minds around the world. Later this month, our SoM offering will also expand to include 2GB and 4GB RAM options.

Multi-cam with AI


As we expand our platform and product family, we continue to keep new edge AI use cases in mind. We are continually inspired by our developer community’s experimentation and implementations. When recently faced with the challenges of multicam video production from home, Markku Lepistö, Solutions Architect at Google Cloud, created a real-time pose-based multicam tool he aptly dubbed AI Director.

We love seeing such unique implementations of on-device ML and invite you to share your own projects and feedback at [email protected].

For a list of worldwide distributors, system integrators and partners, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform.


Applications are open for Google for Startups Accelerator in Japan

Posted by Takuo Suzuki, Developer Relations Program Manager


The Google for Startups Accelerator helps founders across the globe solve for important economic and societal challenges, while helping them grow and scale their business. Due to the continued success of the program around the world, we are pleased to open up applications for our third Accelerator class in Japan, commencing January 2021. Applications will remain open until October 30, 2020.

This accelerator is designed for established startups across Japan that use technology to help solve important social and environmental issues and that contribute to the Japanese economy. This includes (but is not limited to) startups tackling:

  • Ageing society and declining workforce
  • Energy, environment, and sustainability
  • Rural revitalization
  • Medicine, health, and well-being
  • Education
  • Diversity, inclusion, and social equity

Google for Startups Accelerators provide support to later-stage companies that have already launched their product and have strong market fit and the potential to scale rapidly. Startups in the program benefit from tailored Google mentorship, product advice and credits, and technical workshops, and they get connected to other founders, VCs, and industry experts.

Each participating startup selected for the Google for Startups Accelerator program will join a 500+ company alumni network of startups from around the world, such as Selan (Class #2 Japan), whose product Omister is improving education and childcare in Japan by providing bilingual instructors for children, and mDoc (Class #1 Sustainability, Europe), a Nigerian startup helping people in West Africa with chronic diseases get treatment via their app.

In summary:

  • Suitable for startups solving for societal or environmental issues in Japan
  • Application open: September 15, 2020
  • Application close: October 30, 2020
  • Announcement of selected startups: December 2020
  • Program runs from late-January 2021 to end of April 2021 (planned)
  • Please refer to the website for further information and to apply.