
Improving shared AR experiences with Cloud Anchors in ARCore 1.20

Posted by Eric Lai, Product Manager, Augmented Reality

Augmented reality (AR) can help you explore the world around you in new, seemingly magical ways. Whether you want to venture through the Earth’s unique habitats, explore historic cultures or even just find the shortest path to your destination, there’s no shortage of ways that AR can help you interact with the world.

That’s why we’re constantly improving ARCore — so developers can build amazing AR experiences that help us reimagine what’s possible.

In 2018, we introduced the Cloud Anchors API in ARCore, which lets people across devices view and share the same AR content in real-world spaces. Since then, we’ve been working on new ways for developers to use Cloud Anchors to make AR content persist and more easily discoverable.

Create long-lasting AR experiences

Last year, we previewed persistent Cloud Anchors, which lets people return to shared AR experiences again and again. With ARCore 1.20, this feature is now widely available to Android, iOS, and Unity mobile developers.

Developers all over the world are already using this technology to help people learn, share and engage with the world around them in new ways.

MARK, which we highlighted last year, is a social platform that lets people leave AR messages in real-world locations for friends, family and their community to discover. MARK is now available globally and will be launching the MARK Hope Campaign in the US to help people raise funds for their favorite charities and have their donations matched for a limited time.


MARK by People Sharing Streetart Together Limited

REWILD Our Planet is an AR nature series produced by Melbourne-based studio PHORIA. The experience is based on the Netflix original documentary series Our Planet. REWILD uses Ultra High Definition video alongside AR content to let you venture into Earth’s unique habitats and interact with endangered wildlife. It originally launched in museums, but can now be enjoyed on your smartphone in your living room. As episodes of the show are released, persistent Cloud Anchors allow you to return to the same spot in your own home to see how nature is changing.


REWILD Our Planet by PHORIA

Changdeok ARirang is an AR tour guide app that combines the power of SK Telecom’s 5G with persistent Cloud Anchors. Visitors at Changdeokgung Palace in South Korea are guided by the legendary Haechi to relevant locations, where they can experience high-fidelity historical and cultural AR content. Changdeok ARirang at Home was also launched so that the same experience can be accessed from the comfort of your couch.


Changdeok ARirang by SK Telecom

In Sweden, SJ Labs, the innovation arm of Swedish Railways, together with Bontouch, their tech innovation partner, uses persistent Cloud Anchors to help passengers find their way at Central Station in Stockholm, making it easier and faster for them to make their train departures.


SJ Labs by SJ – Swedish Railways

Coming soon, Lowe’s Persistent View will let you design your home in AR with the help of an expert. You’ll be able to add furniture and appliances to different areas of your home to see how they’d look, and return to the experience as many times as needed before making a purchase.


Lowe’s Persistent View powered by Streem

If you’re interested in building AR experiences that last over time, you can learn more about persistent Cloud Anchors in our docs.

Call for collaborators: test a new way to find AR content

As developers use Cloud Anchors to attach more AR experiences to the world, we also want to make it easier for people to discover them. That’s why we’re working on earth Cloud Anchors, a new feature that uses AR and global localization—the underlying technology that powers Live View features on Google Maps—to easily guide users to AR content. If you’re interested in early access to test this feature, you can apply here.

Some earth Cloud Anchors concepts

Introducing Learn, your key to unlocking Google’s educational content for developers

Posted by Amani Newton, Technical Writer

Any Codelabs fans in the house?

If you haven’t heard yet, we’re excited to announce the launch of developers.google.com/learn, a new one-stop destination for developers to gain the knowledge and skills needed to develop software with Google’s technology. Learn brings the learning content you already love from Google together into one easy-to-access place.


The home page of developers.google.com/learn

Previously, our educational content was separated by product area and platform. For example, you’d likely find Firebase Codelabs on firebase.google.com, and their video series on YouTube. We know you love these educational offerings, but they could be somewhat difficult to find unless you were already in the know.

To address this issue, we built Learn to act as a portal, linking all these amazing educational activities together. In addition, we came up with some handy new ways to organize the content, so you can easily find what you’re looking for the first time, every time.

Codelabs

For newbies: Codelabs walk you through the process of building a small application, or adding a new feature to an existing application. They cover a wide range of topics such as Android Wear, Google Compute Engine, Project Tango, and Google APIs on iOS.

If you’re already familiar with Codelabs, rest assured that not too much has changed. Codelabs still provide guided, hands-on coding experience for new and aspiring developers at no charge, and you can still access all of them through codelabs.developers.google.com.

What has changed is that now there’s a new way to experience Codelabs: through our Pathways.

Pathways


The home for Google Learning Pathways

Pathways are a new way to learn skills using all of the educational activities Google has developed for that skill. They organize selected videos, articles, blog posts, and Codelabs, together in one sequential learning experience so you can develop knowledge and skills at your own pace.

Let’s use Flutter as an example. Did you love The Boring Flutter Development Show, but your style of learning is a little more hands-on? Look no further than the Build apps with Flutter pathway, featuring explanatory videos from the Flutter team and step-by-step Codelabs designed to help you build your first Flutter app.


The Flutter pathway

All Pathways finish with an assessment, which you can pass to earn a badge.

Topics

Topics allow you to explore collections of related Codelabs, Pathways, news, and videos.

Are you a chatbot developer, or do you aspire to be one? You can find all the latest news and educational content about chatbots in one easy-to-find place.


The home for news and more about Chatbots

Developer Profiles

Here’s where the fun begins! You can show off all the new stuff you’ve learned on your Google Developer Profile.

To use the social features, first, create your unique Developer Profile on google.dev.


Create a Developer Profile on google.dev

Your first badge will be the Created Developer Profile badge.


Create a Developer Profile badge

Next, try one of the pathways we currently host. After completing the activities you’ll take a quiz, and if you pass, you’ll be awarded the badge for that pathway. You can share all of your earned badges on social media, and make your other developer friends jealous!

MediaPipe 3D Face Transform

Posted by Kanstantsin Sokal, Software Engineer, MediaPipe team

Earlier this year, the MediaPipe Team released the Face Mesh solution, which estimates the approximate 3D face shape via 468 landmarks in real-time on mobile devices. In this blog, we introduce a new face transform estimation module that establishes a researcher- and developer-friendly semantic API useful for determining the 3D face pose and attaching virtual objects (like glasses, hats or masks) to a face.

The new module establishes a metric 3D space and uses the landmark screen positions to estimate common 3D face primitives, including a face pose transformation matrix and a triangular face mesh. Under the hood, a lightweight statistical analysis method called Procrustes Analysis is employed to drive robust, performant and portable logic. The analysis runs on the CPU and has a minimal speed/memory footprint on top of the original Face Mesh solution.


Figure 1: An example of virtual mask and glasses effects, based on the MediaPipe Face Mesh solution.

Introduction

The MediaPipe Face Landmark Model performs single-camera face landmark detection in the screen coordinate space: the X- and Y-coordinates are normalized screen coordinates, while the Z coordinate is relative and is scaled as the X coordinate under the weak perspective projection camera model. While this format is well-suited for some applications, it does not directly enable crucial features like aligning a virtual 3D object with a detected face.

The newly introduced module moves away from the screen coordinate space towards a metric 3D space and provides the necessary primitives to handle a detected face as a regular 3D object. By design, you'll be able to use a perspective camera to project the final 3D scene back into the screen coordinate space with a guarantee that the face landmark positions are not changed.

Metric 3D Space

The Metric 3D space established within the new module is a right-handed orthonormal metric 3D coordinate space. Within the space, there is a virtual perspective camera located at the space origin and pointed in the negative direction of the Z-axis. It is assumed that the input camera frames are observed by exactly this virtual camera, and therefore its parameters are later used to convert the screen landmark coordinates back into the Metric 3D space. The virtual camera parameters can be set freely; however, for better results, it is advised to set them as close to the real physical camera parameters as possible.


Figure 2: A visualization of multiple key elements in the metric 3D space. Created in Cinema 4D.
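To make the conversion concrete, here is a minimal, illustrative sketch of standard perspective unprojection from normalized screen coordinates into a camera-centred metric space, with the camera at the origin looking down the negative Z-axis. The field of view, aspect ratio and depth values below are assumed placeholders, and the module's internal procedure may differ in detail.

    import math

    def unproject(x_norm, y_norm, depth_cm, fov_y_deg=63.0, aspect=9.0 / 16.0):
        # Map a normalized screen point (origin at the top-left, range [0, 1])
        # plus a metric depth in centimeters to camera-space XYZ, for a pinhole
        # camera at the origin looking down the negative Z-axis.
        tan_half_fov = math.tan(math.radians(fov_y_deg) / 2.0)
        ndc_x = 2.0 * x_norm - 1.0   # [-1, 1], right is positive
        ndc_y = 1.0 - 2.0 * y_norm   # [-1, 1], up is positive
        x = ndc_x * aspect * tan_half_fov * depth_cm
        y = ndc_y * tan_half_fov * depth_cm
        z = -depth_cm                # in front of the camera
        return (x, y, z)

    # Example: a landmark at the screen center, assumed 50 cm from the camera.
    print(unproject(0.5, 0.5, 50.0))   # -> (0.0, 0.0, -50.0)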

Canonical Face Model

The Canonical Face Model is a static 3D model of a human face, which follows the 3D face landmark topology of the MediaPipe Face Landmark Model. The model bears two important functions:

  • Defines metric units: the scale of the canonical face model defines the metric units of the Metric 3D space. The metric unit used by the default canonical face model is a centimeter;
  • Bridges static and runtime spaces: the face pose transformation matrix is, in fact, a linear map from the canonical face model into the runtime face landmark set estimated on each frame. This way, virtual 3D assets modeled around the canonical face model can be aligned with a tracked face by applying the face pose transformation matrix to them.

Face Transform Estimation

The face transform estimation pipeline is a key component, responsible for estimating face transform data within the Metric 3D space. On each frame, the following steps are executed in the given order:

  • Face landmark screen coordinates are converted into the Metric 3D space coordinates;
  • Face pose transformation matrix is estimated as a rigid linear mapping from the canonical face metric landmark set into the runtime face metric landmark set, in a way that minimizes the difference between the two (see the sketch after this list);
  • A face mesh is created using the runtime face metric landmarks as the vertex positions (XYZ), while both the vertex texture coordinates (UV) and the triangular topology are inherited from the canonical face model.
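As a rough illustration of the rigid mapping step (not the module's actual implementation), the following NumPy sketch computes a classic Procrustes/Kabsch similarity fit from a canonical landmark set to a runtime landmark set, assembles it into a 4x4 face pose matrix, and applies that matrix to virtual-asset vertices. All function and variable names are illustrative.

    import numpy as np

    def estimate_face_pose(canonical_landmarks, runtime_landmarks):
        # Least-squares similarity fit (Kabsch/Umeyama): find scale, rotation and
        # translation mapping the canonical landmark set onto the runtime set.
        # Both inputs are (N, 3) arrays in the same metric units.
        src_mean = canonical_landmarks.mean(axis=0)
        tgt_mean = runtime_landmarks.mean(axis=0)
        src = canonical_landmarks - src_mean
        tgt = runtime_landmarks - tgt_mean

        # Optimal rotation from the SVD of the cross-covariance matrix.
        u, s, vt = np.linalg.svd(src.T @ tgt)
        d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
        corr = np.array([1.0, 1.0, d])
        rotation = vt.T @ np.diag(corr) @ u.T

        # Optional uniform scale; drop it for a purely rigid fit.
        scale = (s * corr).sum() / (src ** 2).sum()
        translation = tgt_mean - scale * rotation @ src_mean

        pose = np.eye(4)                 # 4x4 face pose transformation matrix
        pose[:3, :3] = scale * rotation
        pose[:3, 3] = translation
        return pose

    def transform_asset_vertices(pose, vertices):
        # Apply the pose matrix to (N, 3) virtual-asset vertices modeled around
        # the canonical face, e.g. the vertices of a glasses mesh.
        homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
        return (homogeneous @ pose.T)[:, :3]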

Effect Renderer

The Effect Renderer is a component that serves as a working example of a face effect renderer. It targets the OpenGL ES 2.0 API to enable real-time performance on mobile devices and supports the following rendering modes:

  • 3D object rendering mode: a virtual object is aligned with a detected face to emulate an object attached to the face (example: glasses);
  • Face mesh rendering mode: a texture is stretched on top of the face mesh surface to emulate a face painting technique.

In both rendering modes, the face mesh is first rendered as an occluder straight into the depth buffer. This step helps to create a more believable effect by hiding elements that fall behind the face surface.


Figure 3: An example of face effects rendered by the Face Effect Renderer.
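The depth-only occluder pass can be sketched with a few lines of OpenGL state changes. The snippet below uses PyOpenGL (desktop GL rather than the OpenGL ES 2.0 used on device) purely for illustration; it assumes an active GL context, and the two draw callables are hypothetical placeholders for the application's own mesh and effect rendering.

    from OpenGL.GL import (
        GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_DEPTH_TEST,
        GL_FALSE, GL_TRUE,
        glClear, glColorMask, glDepthMask, glEnable,
    )

    def render_frame(draw_face_mesh_occluder, draw_effect):
        # Assumes an active OpenGL context; the two callables draw the face mesh
        # and the virtual effect (glasses, mask, face paint) respectively.
        glEnable(GL_DEPTH_TEST)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

        # Pass 1: face mesh as an occluder -- write depth only, no color output.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
        glDepthMask(GL_TRUE)
        draw_face_mesh_occluder()

        # Pass 2: draw the effect; fragments behind the face fail the depth test
        # and are hidden, which makes the attachment look believable.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
        draw_effect()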

Using Face Transform Module

The face transform estimation module is available as part of the MediaPipe Face Mesh solution. It comes with face effect application examples, available as graphs and mobile apps on Android and iOS. If you wish to go beyond the examples, the module contains generic calculators and subgraphs, which can be flexibly applied to solve specific use cases in any MediaPipe graph. For more information, please visit our documentation.
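For a quick start, the MediaPipe Python package exposes the Face Mesh solution that produces the 468 landmarks feeding this module; the transform calculators and subgraphs themselves are used from MediaPipe graphs, as in the Android and iOS examples. Below is a minimal sketch; the image path is a placeholder.

    import cv2
    import mediapipe as mp

    # Run Face Mesh on one image and read back the 468 landmarks.
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        static_image_mode=True,
        max_num_faces=1,
        min_detection_confidence=0.5,
    )

    image = cv2.imread("face.jpg")  # placeholder path
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        # x and y are normalized screen coordinates; z is the relative depth.
        print(len(landmarks), landmarks[0].x, landmarks[0].y, landmarks[0].z)

    face_mesh.close()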

Follow MediaPipe

We look forward to publishing more blog posts related to new MediaPipe pipeline examples and features. Please follow the MediaPipe label on the Google Developers Blog and the Google Developers Twitter account (@googledevs).

Acknowledgements

We would like to thank Chuo-Ling Chang, Ming Guang Yong, Jiuqiang Tang, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich and Matthias Grundmann for contributing to this blog post.

Announcing DevFest 2020

Posted by Jennifer Kohl, Program Manager, Developer Community Programs


On October 16-18, thousands of developers from all over the world are coming together for DevFest 2020, the largest virtual weekend of community-led learning on Google technologies.

As people around the world continue to adapt to spending more time at home, developers yearn for community now more than ever. In years past, DevFest was a series of in-person events over a season. For 2020, the community is coming together in a whole new way – virtually – over one weekend to keep developers connected when they may want it the most.

The speakers

The magic of DevFest comes from the people who organize and speak at the events: developers with various backgrounds and skill levels, all with their own unique perspectives. In different parts of the world, you can find a DevFest session in many local languages. DevFest speakers are made up of various types of technologists, including kid developers, self-taught programmers from rural areas, and CEOs and CTOs of startups. DevFest also features a wide range of speakers from Google, Women Techmakers, Google Developer Experts, and more. Together, these friendly faces, with many different perspectives, create a unique and rich developer conference.

The sessions and their mission

Hosted by Google Developer Groups, this year’s sessions include technical talks and workshops from the community, and a keynote from Google Developers. Through these events, developers will learn how Google technologies help them develop, learn, and build together.

Sessions will cover multiple technologies, such as Android, Google Cloud Platform, Machine Learning with TensorFlow, Web.dev, Firebase, Google Assistant, and Flutter.


At our core, Google Developers believes community-led developer events like these are an integral part of the advancement of technology in the world.

For this reason, Google Developers supports the community-led efforts of Google Developer Groups and their annual tentpole event, DevFest. Google provides esteemed speakers from the company and custom technical content produced by developers at Google. The impact of DevFest is really driven by the grassroots, passionate GDG community organizers who volunteer their time. Google Developers is proud to support them.

The attendees

During DevFest 2019, 138,000+ developers participated across 500+ DevFests in 100 countries. While 2020 is a very different year for events around the world, GDG chapters are galvanizing their communities to come together virtually for this global moment. The excitement for DevFest continues as more people seek new opportunities to meet and collaborate with like-minded, community-oriented developers in our local towns and regions.

Join the conversation on social media with #DevFest.

Sign up for DevFest at goo.gle/devfest.





Still curious? Check out these popular talks from DevFest 2019 events around the world...

Google Nest Device Access Console now available for partners and individuals

Posted by Gabriel Rubinsky, Senior Product Manager

Today, we’re excited to announce that the Device Access Console is now available.

The Device Access program lets individuals and qualified partners securely access and control Nest products with their apps and solutions.

At the heart of the Device Access program is the Smart Device Management API. Since we announced the program, Alarm.com, Control4, DISH, OhmConnect, NRG Energy, and Vivint Smart Home have successfully completed the Early Access Program (EAP) with Nest thermostat, camera, or doorbell traits. In the coming months, we expect additional devices to be supported and more smart home partners to launch their new integrations as well.

Enhanced privacy and security

The Device Access program is built on a foundation of privacy and security. Partners must submit qualified use cases and complete a security assessment before they are allowed to use the Smart Device Management API commercially. This process gives our users confidence that commercial partners offering integrated Nest solutions have data protections and safeguards in place that meet our privacy and security standards.

Nest device access and control

The Device Access program currently allows qualified partners to integrate directly with Nest devices, enable control of thermostats, access and view camera feeds, and receive doorbell notifications with images. All qualified partner solutions and services will require end-user consent before being able to access, control, and manage Nest devices as part of their service offerings, either through a partner client app or service platform. Ultimately, this gives users more choice in how to control their home and their own generated data.
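As an illustration of what an integration looks like at the API level, here is a minimal Python sketch that lists a user's authorized devices and sets a thermostat heat setpoint through the Smart Device Management API. The project ID, device ID and OAuth access token are placeholders obtained through the Device Access registration and user-consent flow.

    import requests

    API = "https://smartdevicemanagement.googleapis.com/v1"
    PROJECT_ID = "your-device-access-project-id"   # placeholder
    DEVICE_ID = "your-device-id"                   # placeholder
    ACCESS_TOKEN = "user-consented-oauth-token"    # placeholder

    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    # List the devices the user has consented to share with your project.
    devices = requests.get(
        f"{API}/enterprises/{PROJECT_ID}/devices", headers=headers
    ).json()

    # Set a heat setpoint on a thermostat that exposes the setpoint trait.
    requests.post(
        f"{API}/enterprises/{PROJECT_ID}/devices/{DEVICE_ID}:executeCommand",
        headers=headers,
        json={
            "command": "sdm.devices.commands.ThermostatTemperatureSetpoint.SetHeat",
            "params": {"heatCelsius": 21.0},
        },
    )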

If you’re a developer or a Nest user interested in the Device Access program or access to the sandbox development environment,* you can find more information on our Device Access site.

  • Device Access for Commercial Developers

    The Device Access program allows trusted partners to offer access, management, and control of Nest devices within the partner’s app, solution, and ecosystem. It allows developers to test all API traits in the sandbox environment, before moving forward with commercial integration. Learn more

  • Device Access for Individuals

    For individual smart home developer enthusiasts, you can register to access the sandbox development environment, allowing you to directly control your own Nest devices through your private integrations and automations. Learn more

We’re doing the work to make Nest devices more secure and protect user privacy long into the future. This means expanding privacy and data security programs, and delivering flexibility for our customers to use thousands of products from partners to create a connected, helpful home.



* Registration consists of the acceptance of the Google API and Nest Device Access Sandbox Terms of Service, along with a one-time, non-refundable nominal fee per account.


Google Pay picks Flutter to drive its global product development

Posted by David Ko, Engineering Director; Jeff Lim, Software Engineer; Pankaj Gupta, Director of Engineering; Will Horn, Software Engineer

Three years ago, when we launched Google Pay India (then called Tez), our vision was to create a simple and secure payment app for everyone in India. We started with the premise of making payments simple and built a user interface that made making payments as easy as starting a conversation. The simplicity of the design resonated with users instantly and over time, we have added functionality to help users do more than just make payments. Today users can pay their bills, recharge their phones, get loans instantly through banks, buy train tickets and much more all within the app. Last year, we also launched the Spot Platform in India, which allows merchants to create branded experiences within the Google Pay app so they can connect with their customers in a more engaging way.

As we looked at scaling our learnings from India to other parts of the world, we wanted a fast and efficient development environment that was modern and engaging, with the flexibility needed to keep the UI clean. More importantly, we wanted one that enabled us to write once and deploy to both iOS and Android, reaching a wide variety of users.

It was clear that we would need to build the new app ourselves and ensure that it worked across a wide variety of payment rails, infrastructure, and operating systems. But with the momentum we had for Google Pay in India and the fast-evolving product features, we had limited engineering resources to put behind this effort.

After evaluating various options, Flutter stood out as the obvious choice. The three things that made it click for us were:

  • We could write once in Dart and deploy on both iOS and Android, which led to a uniform, best-in-class experience on both platforms;
  • The just-in-time compiler with hot reload during development enabled rapid iteration on the UI, which tremendously increased developer efficiency; and
  • Ahead-of-time compilation ensured high-performance deployment.

Now the task was to get it done. We started with a small team of three software engineers from both Android and iOS. Those days were focused and intense. To start, we created a vertical slice of the app — home page, chat, and payments (with the critical native plugins for payments in India). The team first tried a hybrid approach, and then decided on a clean rewrite, as the hybrid approach was not scalable.

We ran a few small sprints for other engineers on the team to give them an opportunity to rewrite something in Flutter and provide feedback. Everyone loved Flutter — you could see the thrill on people’s faces as they talked about how fast it was to build a user interface. One of the most exciting things was that the team could get instant feedback while developing. We could also leverage the high quality widgets that Flutter provided to make development easier.

After carefully weighing the risks and our case for migration, we decided to go all in with Flutter. It was a monumental rewrite of a moving target: the existing app continued to evolve while we were rewriting its features. After many months of hard work, the Google Pay Flutter implementation is now available in open beta in India and Singapore. Users in India and Singapore can visit the Google Play Store page for Google Pay to opt into the beta program and experience the latest app built on Flutter. Next, we look forward to launching Google Pay on Flutter to everyone across the world on iOS and Android.


We hope this gives you a fair idea of how to approach and launch a complete rewrite of an active app that is used by millions of users and businesses of all sizes. It would not have been possible for us to deliver this without Flutter’s continued advances on the platform. Huge thanks to the Flutter team, as today, we are standing on their shoulders!

When fully migrated, Google Pay will be one of the largest production deployments on the Flutter platform. We look forward to sharing more learnings from our transition to Flutter in the future.


Doubling down on the edge with Coral’s new accelerator

Posted by The Coral Team


Moving into the fall, the Coral platform continues to grow with the release of the M.2 Accelerator with Dual Edge TPU. Its first application is in Google’s Series One room kits, where it helps remove interruptions and makes the audio clearer for better video meetings. To help even more folks build products with Coral intelligence, we’re dropping the prices on several of our products. And for those looking to level up their at-home video production, we’re sharing a demo of a pose-based AI director that makes multi-camera video easier to produce.

Coral M.2 Accelerator with Dual Edge TPU

The newest addition to our product family brings two Edge TPU co-processors to systems in an M.2 E-key form factor. While the design requires a dual-bus PCIe M.2 slot, it delivers enhanced ML performance (8 TOPS) for tasks such as running two models in parallel or pipelining one large model across both Edge TPUs.

The ability to scale across multiple edge accelerators isn’t limited to only two Edge TPUs. As edge computing expands to local data centers, cell towers, and gateways, multi-Edge TPU configurations will be required to help process increasingly sophisticated ML models. Coral allows the use of a single toolchain to create models for one or more Edge TPUs that can address many different future configurations.
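As a rough sketch of what multi-Edge TPU usage looks like in practice, the snippet below pins two TensorFlow Lite models to the two Edge TPUs on the M.2 Accelerator with Dual Edge TPU via the libedgetpu delegate's device option. The model file names are placeholders, and pipelining a single large model additionally requires a model compiled for multiple Edge TPUs.

    from tflite_runtime.interpreter import Interpreter, load_delegate

    def make_edgetpu_interpreter(model_path, device):
        # device ":0" or ":1" selects one of the enumerated Edge TPUs.
        return Interpreter(
            model_path=model_path,
            experimental_delegates=[load_delegate("libedgetpu.so.1", {"device": device})],
        )

    # Run two models in parallel, one per Edge TPU on the dual accelerator.
    detector = make_edgetpu_interpreter("detector_edgetpu.tflite", ":0")      # placeholder model
    classifier = make_edgetpu_interpreter("classifier_edgetpu.tflite", ":1")  # placeholder model
    detector.allocate_tensors()
    classifier.allocate_tensors()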

A great example of how the Coral M.2 Accelerator with Dual Edge TPU is being used is in the Series One meeting room kits for Google Meet.

The new Series One room kits for Google Meet run smarter with Coral intelligence


Google’s new Series One room kits use our Coral M.2 Accelerator with Dual Edge TPU to bring enhanced audio clarity to video meetings. TrueVoice®, a multi-channel noise cancellation technology, minimizes distractions to ensure every voice is heard with up to 44 channels of echo and noise cancellation, making distracting sounds like snacking or typing on a keyboard a concern of the past.

Enabling the clearest possible communication in challenging environments was the target for the Google Meet hardware team. The consideration of what makes a challenging environment was not limited to unusually noisy environments, such as lunchrooms doubling as conference rooms. Any conference room can present challenging acoustics that make it difficult for all participants to be heard.

The secret to clarity without expensive and cumbersome equipment is to use virtual audio channels and AI driven sound isolation. Read more about how Coral was used to enhance and future-proof the innovative design.

Expanding the AI edge

Earlier this year, we reduced the prices of our prototyping devices and sensors. We are excited to share further price drops on more of our products. Our System-on-Module is now available for $99.99, and our Mini PCIe Accelerator, M.2 Accelerator A+E Key, and M.2 Accelerator B+M Key are now available at $24.99. We hope these lower prices will make edge AI more accessible to more creative minds around the world. Later this month, our SoM offering will also expand to include 2 GB and 4 GB RAM options.

Multi-cam with AI


As we expand our platform and product family, we continue to keep new edge AI use cases in mind. We are continually inspired by our developer community’s experimentation and implementations. When recently faced with the challenges of multi-camera video production from home, Markku Lepistö, Solutions Architect at Google Cloud, created a real-time, pose-based multicam tool he aptly dubbed AI Director.

We love seeing such unique implementations of on-device ML and invite you to share your own projects and feedback at [email protected].

For a list of worldwide distributors, system integrators and partners, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform.