Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Introducing the Earth Engine Google Developer Experts (GDEs)

Posted by Tyler Erickson, Developer Advocate, Google Earth Engine

One of the greatest things about Earth Engine is the vibrant community of developers who openly share their knowledge about the platform and how it can be used to address real-world sustainability issues. To recognize some of these exceptional community members, in 2022 we launched the initial cohort of Earth Engine Google Developer Experts (GDEs). You can view the current list of Earth Engine GDEs on the GDE Directory page.

The initial cohort of Earth Engine Google Developer Experts, shown as locations on a rotating globe.
What makes an Earth Engine expert? Earth Engine GDEs are selected based on their expertise in the Earth Engine product (of course), but also for their knowledge sharing. They share their knowledge in many ways, including answering questions from other developers, writing tutorials and blogs, teaching in settings spanning from workshops to university classes, organizing meetups and conference sessions that allow others to share their work, building extensions to the platform, and so much more!

To learn more about the Google Developer Experts program and the Earth Engine GDEs, go to https://developers.google.com/community/experts.

Now that it is 2023, we are re-opening the application process for additional Earth Engine GDEs. If you’re interested in being considered, you can find information about the process in the GDE Program Application guide.


Solution Challenge 2023: Use Google Technologies to Address the United Nations’ Sustainable Development Goals

Posted by Rachel Francois, Google Developer Student Clubs, Global Program Manager

Each year, the Google Developer Student Clubs Solution Challenge invites university students to develop solutions for real-world problems using one or more Google products or platforms. How could you use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action?

Join us to build solutions for one or more of the United Nations' 17 Sustainable Development Goals. These goals were agreed upon in 2015 by all 193 United Nations Member States and aim to end poverty, ensure prosperity, and protect the planet by 2030.

One 2022 Solution Challenge participant said, “I love how it provides the opportunity to make a real impact while pursuing undergraduate studies. It helped me practice my expertise in a real-world setting, and I built a project I can proudly showcase on my portfolio.”

Solution Challenge prizes

Participants will receive specialized prizes at different stages:

  • The top 100 teams receive customized mentorship from Google and experts to take their solutions to the next level, branded swag, and a certificate.
  • The top 10 finalists receive additional mentorship, a swag box, and the opportunity to showcase their solutions to Google teams and developers all around the world during the virtual 2023 Solution Challenge Demo Day, live on YouTube.
  • Contest finalists: in addition to the swag box, each member of the seven finalist teams not in the top three will receive a $1,000 cash prize. Winnings for each qualifying team will not exceed $4,000.
  • Top 3 winners: in addition to the swag box, each member of the top 3 winning teams will receive a $3,000 cash prize and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.

Joining the Solution Challenge

There are four steps to join the Solution Challenge and get started on your project:

  1. Register at goo.gle/solutionchallenge and join a Google Developer Student Club at your college or university. If there is no club at your university, you can join the closest one through our event platform.
  2. Select one or more of the United Nations' 17 Sustainable Development Goals to address.
  3. Build a solution using Google technology.
  4. Create a demo and submit your project by March 31, 2023. 

    Google Resources for Solution Challenge participants

    Google will support Solution Challenge participants with resources to help students build strong projects, including:

    • Live online sessions with Q&As
    • Mentorship from Google, Google Developer Experts, and the Google Developer Student Club community
    • Curated Codelabs designed by Google Developers
    • Access to Design Sprint guidelines developed by Google Ventures
    • and more!
    “During the preparation and competition, we learned a great deal,” said a 2022 Solution Challenge team member. “That was part of the reason we chose to participate in this competition: the learning opportunities are endless.”

    Winner announcement dates

    Once all projects are submitted, our panel of judges will evaluate and score each submission using specific criteria.

    After that, winners will be announced in three rounds.

    Round 1 (April): The top 100 teams will be announced.

    Round 2 (June): After the top 100 teams submit their new and improved solutions, 10 finalists will be announced.

    Round 3 (August): The top 3 grand prize winners will be announced live on YouTube during the 2023 Solution Challenge Demo Day.

    We can’t wait to see the solutions you create with your passion for building a better world, coding skills, and a little help from Google technologies.

    Learn more and sign up for the 2023 Solution Challenge here.


    I got the time to push my creativity to the next level. It helped me attain more information from more knowledgeable people by expanding my network. Working together and building something was a great challenge and one of the best experiences, too. I liked the idea of working on the challenge to present a solution.

    ~2022 Solution Challenge participant

    From GDSC Lead to Flutter developer: Lenz Paul’s journey

    Posted by Kübra Zengin, North America Regional Lead, Google Developers

    Switching careers at age 30, after eight years on the job, is a brave thing to do. Lenz Paul spent eight years working in sales at Bell, a large Canadian telecommunications company. During that time, he found his interest sparked by the technology solutions that helped him do his job more effectively. He decided to follow his passion and switch careers to focus on engineering.

    “I found I had a knack for technology and was curious about it,” he says. “Sales has so many manual processes, and I always proposed tech solutions and even learned a little programming to make my daily life easier.”

    At 30, Lenz entered Vancouver Island University in Canada to pursue a computer science degree.

    Becoming a GDSC Lead

    When his department chair, Gara Pruesse, emailed students about the opportunity to start and lead a Google Developer Student Club at the university, Lenz jumped at the chance and applied. He was selected to be the GDSC Vancouver Island University lead in 2019 and led the group for a year. He hoped to meet more technology enthusiasts and to use the programming skills he was learning in his courses to help other students who wanted to learn Google technologies.

    “It was my first year at university, starting in the technology field as a career transition, and I was hoping to network,” Lenz recalls. “I read the [GDSC] description about sharing knowledge and thought this would be a way to use the skills I was learning in school practically, meet people, and impact my community.”

    As part of his role as a GDSC lead, Lenz used Google workshops, Google Cloud Skills Boost, and codelabs to learn Google Cloud and Flutter outside of class. Google provided Cloud credits to GDSC Vancouver Island University, which Lenz shared with his seven core team members, so they could all try new tools and figure out how to teach their fellow students to use them. Then, they taught the rest of the student club how to use Google technologies. They also hosted two Google engineers on-site as speakers.

    “GDSC helped me with personal skills, soft skills, such as public speaking and leadership,” Lenz says. “The highlight was that I learned a lot about Google Cloud technologies, by holding workshops and delivering content.”

    Working for Google’s Tech Equity Collective

    Following his GDSC experience, Lenz worked as a contractor for Google’s Tech Equity Collective, which focuses on improving diversity in tech. Lenz tutored a group of 20 students for about six weeks before being promoted to an instructor role and then becoming the lead teacher for Google Cloud technologies.

    “My GDSC experience really helped me shine in my teaching role,” Lenz says. “I was already very familiar with Google Cloud, due to the many workshops I organized and taught for my GDSC, so it was easy to adapt it for my TEC class.”

    Becoming a Flutter developer

    Lenz began using Flutter in 2019, when the technology was just two years old. He points out that there is nobody in the world with ten years of experience in Flutter; it’s relatively new for everyone.

    “I love the ecosystem, and it was a great time to get started with it,” he says. “It’s such a promising technology.”

    Lenz says Dart, Flutter’s programming language, is a powerful, modern, object-oriented language with C-style syntax that’s easy to learn and easy to use.

    “Dart is clean, easy to scale, can support any size project, and it has the backing of Google,” Lenz says. “It’s a great language for building scripts or server apps. It allows for building mobile apps, web apps, and desktop apps, and it even supports embedded devices via the Flutter Framework.”

    Lenz used Google Cloud Skills Boost and Google codelabs to learn Flutter and is now a full-time Flutter developer.

    “I’m excited about the future of Flutter and Dart,” he says. “I think Flutter is going to continue to be a big player in the app development space and can't wait to see what the Flutter team comes up with next.”

    In August 2022, CMiC (Computer Methods International Corporation), a construction ERP software company in Toronto, hired Lenz as a full-time Flutter developer; he’s a Level 2 software engineer. CMiC builds enterprise-grade apps using Flutter for iOS, Android, and web from a single code base. Flutter also furthered Lenz’s understanding of Google Cloud: as he learned more about backend services and how they communicate with frontend apps, he deepened his knowledge of the platform.

    While learning Flutter, Lenz built several apps using Firebase backend as a service (BaaS) because Firebase provides many Flutter plugins that are easy to integrate into your apps, such as authentication, storage, and analytics.

    "Firebase will help you get your app up and running quickly and easily,” says Lenz. “I also used Firebase app distribution to share the apps while I was developing them, which allowed me to quickly get feedback from testers without having to go through the app stores.”

    Lenz encourages new developers looking to advance in their careers to try out Google Cloud Skills Boost, codelabs, six-month certificates, and Google's technical development guide.

    “The experience I gained as a GDSC lead and at Google’s Tech Equity Collective prepared me for my software engineer role at CMiC in Toronto, Canada, and I thank GDSC for my current and future opportunities,” Lenz says.

    What’s next for Lenz

    Right now, Lenz is focused on mastering his full-time role, so he’s pacing himself with regard to other commitments. He wants to make an impact and to recruit other Canadians to the technology field. The Tech Equity Collective’s mission is amazing and aligns with his values of enabling community and sharing knowledge. He’d like to continue to participate in something that would align with these values.

    “It’s the greatest feeling when I can make a difference,” he says.

    If you’re a student and would like to join a Google Developer Student Club community, look for a chapter near you here. Interested in becoming a GDSC lead like Lenz? Visit this page for more information. Don’t have a GDSC at your university, but want to start one? Visit the program page to learn more.

    Meet Android Developers from India keen to learn and inspire

    Posted by Vishal Das, Community Manager

    This year the Google Developer Educators India team launched the “Android Learn and Inspire Series” for Android developers who were eager to learn Jetpack Compose and inspire others to upskill. Meet the developers who completed the series and hosted workshops on Jetpack Compose, and find out what motivates them to teach others!

    Alankrita Shah, Lead Android Developer, Bolo Live

    How did you get started with Android Development?

    My journey with Android started back in the third year of my undergraduate studies. I got an internship at a startup, where I learned to develop an application that lets users watch videos. It was a simple application, but it helped me start exploring Android development. I was always in awe of the capabilities of Android applications.

    What keeps you motivated to learn and stay up to date?

    In Android development, there are frequent updates that help developers write fast and efficient code. Keeping up with them helps you build good-quality products. Becoming part of communities where you can discuss and share best practices is an interesting way to learn and grow.

    Which method of knowledge sharing did you find most effective?

    I experimented with a few methods in the Android Learn and Inspire series. There are a few that I found quite effective.

    • Adding fun activities helps bring energy to the session. You can design activities that reinforce the session’s learnings in an engaging way.
    • A write-up for the topic covered: after the session, you can share a blog post and/or the code, so members can revisit what they learned whenever they want.



    Amardeep Kumar, Android Engineer, Walmart

    How did you get started with Android Development?

    I completed my engineering degree in Information Technology at Siliguri Institute of Technology back in 2011. I was one of those unlucky 10% of students who graduated without any job offer. After a few months of struggle, I got a job offer from a company called Robosoft (this time I was one of the 3 selected out of 2,000+ candidates). I started as an Android developer from day 1 at Robosoft, back in the Honeycomb and Ice Cream Sandwich days.

    What keeps you motivated to learn and share?

    One thing was consistent in my Android journey: connecting with good Android developers. BlrDroid, GDG Bangalore, the Udacity Nanodegree, and the Android community helped me connect with people and learn every day. Solving tech problems and having Android tech discussions are part of daily life. I like developing Android apps because of their reach in countries like India. Open source is also one of the reasons to love Android. My seniors trained me on Android in my first job, and that motivated me to share my Android knowledge with the community.

    Which method of knowledge sharing did you find most effective?

    One tip I would like to share: let’s bring in those good Android engineers who are experts at solving Android problems but shy about sharing their knowledge.

    How to use the App Engine Users service (Module 20)

    Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


    Introduction and background

    The Serverless Migration Station video series and corresponding codelabs aim to help App Engine developers modernize their apps, whether it's upgrading language runtimes, like from Python 2 to 3 and Java 8 to 17, or moving laterally to sister serverless platforms like Cloud Functions or Cloud Run. For developers who want more control, like being able to SSH into instances, Compute Engine VMs or GKE, our managed Kubernetes service, are also viable options.

    In order to consider moving App Engine apps to other compute services, developers must move their apps away from the original App Engine APIs (now referred to as legacy bundled services), either to standalone Cloud replacements or to alternative third-party services. Once no longer dependent on these proprietary services, apps become much more portable: they can stay on App Engine while upgrading to its 2nd-generation platform, or move to the other compute platforms listed above.

    Today's Migration Module 20 content focuses on helping developers refamiliarize themselves with App Engine's Users service, a user authentication system serving as a lightweight wrapper around Google Sign-In (now called Google Identity Services). The video and its corresponding codelab (self-paced, hands-on tutorial) demonstrate how to add use of the Users service to the sample baseline app from Module 1. After adding the Users service in Module 20, Module 21 follows, showing developers how to migrate that usage to Cloud Identity Platform.

    How to use the App Engine Users service

    Adding use of Users service


    The sample app's basic functionality consists of registering each page visit in Datastore and displaying the most recent visits. The Users service helps apps support user logins and recognize App Engine administrative ("admin") users. It also provides convenient functions for generating login/logout links and retrieving basic user information for logged-in users. Below is a screenshot of the modified app, which now supports user logins via the user interface (UI):
    Sample app now supports user logins and App Engine admin users
    Below is the pseudocode reflecting the changes made to support user logins for the sample app, including integrating the Users service and updating what shows up in the UI:
    • If the user is logged in, show their "nickname" (display name or email address) and display a Logout button. If the logged-in user is an App Engine app admin, also display an "admin" badge (between nickname and Logout button).
    • If the user is not logged in, display the username generically as "user", remove any admin badge, and display a Login button.
    Because the Users service is primarily a user-facing endeavor, the most significant changes take place in the UI, whereas the data model and core functionality of registering visits remain unchanged. The new support for user management primarily results in additional context to be rendered in the web template. New or altered code is bolded to highlight the updates.
    Adding App Engine Users service usage to the sample app: 'Before' (Module 1) on the left, 'After' (Module 20) on the right
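
    Under broad assumptions about the baseline app (the actual Module 20 sample may differ in structure and naming), a minimal Flask handler implementing the pseudocode above with the Users service calls might look like this sketch:

        # A minimal sketch of the Users service logic described above.
        # The route, template name, and context keys are illustrative assumptions.
        from flask import Flask, render_template
        from google.appengine.api import users  # App Engine bundled Users service

        app = Flask(__name__)

        @app.route('/')
        def root():
            user = users.get_current_user()
            if user:
                # Logged in: show nickname, an admin badge if applicable,
                # and a Logout link.
                context = {
                    'who': user.nickname(),
                    'admin': '(admin)' if users.is_current_user_admin() else '',
                    'sign': 'Logout',
                    'link': users.create_logout_url('/'),
                }
            else:
                # Not logged in: generic "user", no badge, and a Login link.
                context = {
                    'who': 'user',
                    'admin': '',
                    'sign': 'Login',
                    'link': users.create_login_url('/'),
                }
            return render_template('index.html', **context)

    Note that only the template context changes; the visit-registration code is untouched, matching the description above.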

    Wrap-up


    Today's "migration" consists of adding usage of the App Engine Users service to support user management and recognize App Engine admin users, starting with the Module 1 baseline app and finishing with the Module 20 app. To get hands-on experience doing it yourself, try the codelab and follow along with the video. Then you'll be ready to upgrade to Identity Platform should you choose to do so.

    In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate from the Users service to Identity Platform when porting your app to Python 3. You can continue using the Users service in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.
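
    As a concrete illustration of that retrofit (a sketch based on the documented appengine-python-standard package, not this module's sample code), a Python 3 Flask app opts back into the bundled services by wrapping its WSGI app and flagging app.yaml:

        # Sketch: enabling bundled services (like Users) on a Python 3 runtime.
        # Requires the appengine-python-standard package in requirements.txt.
        from flask import Flask
        from google.appengine.api import wrap_wsgi_app

        app = Flask(__name__)
        app.wsgi_app = wrap_wsgi_app(app.wsgi_app)  # route calls to bundled services

        # app.yaml must also opt in:
        #   runtime: python310
        #   app_engine_apis: true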

    If you do want to move to Identity Platform, see the Module 21 content, including its codelab. All Serverless Migration Station content (codelabs, videos, and source code [when available]) is available at its open source repo. While we're initially focusing on Python users, the Cloud team is covering other runtimes soon, so stay tuned. Also check out other videos in the broader Serverless Expeditions series.

    Improving Video Voice Dubbing Through Deep Learning

    Posted by Paul McCartney, Software Engineer, Vivek Kwatra, Research Scientist, Yu Zhang, Research Scientist, Brian Colonna, Software Engineer, and Mor Miller, Software Engineer

    People increasingly look to video as their preferred way to be better informed, to explore their interests, and to be entertained. And yet a video’s spoken language is often a barrier to understanding. For example, a high percentage of YouTube videos are in English but less than 20% of the world's population speaks English as their first or second language. Voice dubbing is increasingly being used to transform video into other languages, by translating and replacing a video’s original spoken dialogue. This is effective in eliminating the language barrier and is also a better accessibility option with regard to both literacy and sightedness in comparison to subtitles.

    In today’s post, we share our research for increasing voice dubbing quality using deep learning, providing a viewing experience closer to that of a video produced directly for the target language. Specifically, we describe our work with technologies for cross-lingual voice transfer and lip reanimation, which keep the voice similar to the original speaker’s and adjust the speaker’s lip movements in the video to better match the audio generated in the target language. Both capabilities were developed using TensorFlow, which provides a scalable platform for multimodal machine learning. We share videos produced using our research prototype, which are demonstrably less distracting and, hopefully, more enjoyable for viewers.

    Cross-Lingual Voice Transfer

    Voice casting is the process of finding a suitable voice to represent each person on screen. Maintaining the audience’s suspension of disbelief by having believable voices for speakers is important in producing a quality dub that supports rather than distracts from the video. We achieve this through cross-lingual voice transfer, where we create synthetic voices in the target language that sound like the original speaker voices. For example, the video below uses an English dubbed voice that was created from the speaker’s original Spanish voice.

    Original “Coding TensorFlow” video clip in Spanish.

    The “Coding TensorFlow” video clip dubbed from Spanish to English, using cross-lingual voice transfer and lip reanimation.

    Inspired by few-shot learning, we first pre-trained a multilingual TTS model based on our cross-language voice transfer approach. This approach uses an attention-based sequence-to-sequence model to generate a series of log-mel spectrogram frames from a multilingual input text sequence with a variational autoencoder-style residual encoder. Subsequently, we fine-tune the model parameters by retraining the decoder and attention modules with a fixed mixing ratio of the adaptation data and original multilingual data as illustrated in Figure 1.
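
    The following TensorFlow sketch is our assumption-laden illustration, not the production model: it shows the two ingredients named above, a VAE-style residual encoder that summarizes log-mel frames into a latent code, and a fixed mixing ratio of adaptation and multilingual data built with tf.data. Layer sizes, the 30/70 ratio, and the toy datasets are all illustrative.

        # Sketch only: a VAE-style residual encoder over log-mel spectrogram
        # frames, plus fixed-ratio data mixing for fine-tuning. All shapes,
        # sizes, and the 30/70 ratio are illustrative assumptions.
        import tensorflow as tf

        class ResidualEncoder(tf.keras.Model):
            """Summarizes a log-mel spectrogram into a Gaussian latent code."""
            def __init__(self, latent_dim=16):
                super().__init__()
                self.rnn = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128))
                self.mu = tf.keras.layers.Dense(latent_dim)
                self.log_var = tf.keras.layers.Dense(latent_dim)

            def call(self, mels):  # mels: [batch, frames, n_mels]
                h = self.rnn(mels)
                mu, log_var = self.mu(h), self.log_var(h)
                # Reparameterization trick: sample z from N(mu, sigma^2).
                z = mu + tf.exp(0.5 * log_var) * tf.random.normal(tf.shape(mu))
                # KL term regularizes the residual space toward a unit Gaussian.
                kl = -0.5 * tf.reduce_mean(
                    1.0 + log_var - tf.square(mu) - tf.exp(log_var))
                return z, kl

        # Fixed mixing ratio of adaptation vs. original multilingual data
        # (tf.data.Dataset.sample_from_datasets requires TF 2.7+).
        adaptation = tf.data.Dataset.from_tensor_slices(tf.zeros([8, 100, 80]))
        multilingual = tf.data.Dataset.from_tensor_slices(tf.ones([80, 100, 80]))
        mixed = tf.data.Dataset.sample_from_datasets(
            [adaptation.repeat(), multilingual.repeat()], weights=[0.3, 0.7])

        encoder = ResidualEncoder()
        for mels in mixed.batch(4).take(1):
            z, kl = encoder(mels)
            print(z.shape, float(kl))  # (4, 16) and a scalar KL value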

    Figure 1: Voice transfer (fine-tuning) architecture

    Note that voice transfer and lip reanimation are only done when the content owner and speakers give consent for these techniques on their content.

    Lip Reanimation

    With conventionally dubbed videos, you hear the translated / dubbed voices while seeing the original speakers speaking the original dialogue in the source language. The lip movements that you see in the video generally do not match the newly dubbed words that you hear, making the combined audio/video look unnatural. This can distract viewers from engaging fully with the content. In fact, people often even intentionally look away from the speaker’s mouth while watching dubbed videos as a means to avoid seeing this discrepancy.

    To help with audience engagement, producers of higher quality dubbed videos may put more effort into carefully tailoring the dialogue and voice performance to partially match the new speech with the existing lip motion in video. But this is extremely time consuming and expensive, making it cost prohibitive for many content producers. Furthermore, it requires changes that may slightly degrade the voice performance and translation accuracy.

    To provide the same lip synchronization benefit, but without these problems, we developed a lip reanimation architecture for correcting the video to match the dubbed voice. That is, we adjust speaker lip movements in the video to make the lips move in alignment with the new dubbed dialogue. This makes it appear as though the video was shot with people originally speaking the translated / dubbed dialogue. This approach can be applied when permitted by the content owner and speakers.

    For example, the following clip shows a video that was dubbed in the conventional way (without lip reanimation):

    "Machine Learning Foundations” video clip dubbed from English to Spanish, with voice transfer, but without lip reanimation

    Notice how the speaker’s mouth movements don’t seem to move naturally with the voice. The video below shows the same video with lip reanimation, resulting in lip motion that appears more natural with the translated / dubbed dialogue:

    The dubbed “Machine Learning Foundations” video clip, with both voice transfer and lip reanimation.

    For lip reanimation, we train a personalized multistage model that learns to map audio to lip shapes and facial appearance of the speaker, as shown in Figure 2. Using original videos of the speaker for training, we isolate and represent the faces in a normalized space that decouples 3D geometry, head pose, texture, and lighting, as described in this paper. Taking this approach allows our first stage to focus on synthesizing lip-synced 3D geometry and texture compatible with the dubbed audio, without worrying about pose and lighting. Our second stage employs a conditional GAN-based approach to blend these synthesized textures with the original video to generate faces with consistent pose and lighting. This stage is trained adversarially using multiple discriminators to simultaneously preserve visual quality, temporal smoothness and lip-sync consistency. Finally, we refine the output using a custom super-resolution network to generate a photorealistic lip-reanimated video. The comparison videos shown above can also be viewed here.


    Figure 2: Lip-Reanimation Pipeline: inference blocks in blue, training blocks in red.
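
    To make the multi-discriminator training concrete, here is a heavily simplified sketch under our own assumptions, not the paper's code: three placeholder critics stand in for the visual-quality, temporal-smoothness, and lip-sync discriminators, and the generator's adversarial loss is the sum of a hinge-style term from each.

        # Sketch only: combining multiple discriminators into one generator
        # loss. Real critics would be convolutional and consume frames,
        # frame windows, and (audio, mouth-crop) pairs respectively.
        import tensorflow as tf

        def make_critic():
            return tf.keras.Sequential([
                tf.keras.layers.Dense(64, activation='relu'),
                tf.keras.layers.Dense(1),  # one real/fake logit
            ])

        critics = {
            'visual': make_critic(),    # single-frame photorealism
            'temporal': make_critic(),  # smoothness across frames
            'lipsync': make_critic(),   # audio/lip agreement
        }

        def generator_adv_loss(fake_features):
            """Hinge-style generator loss summed over all critics.

            fake_features maps critic name -> generated output projected
            into that critic's input space (an illustrative simplification).
            """
            return tf.add_n([-tf.reduce_mean(critic(fake_features[name]))
                             for name, critic in critics.items()])

        fake = {name: tf.random.normal([4, 32]) for name in critics}
        print(float(generator_adv_loss(fake)))

    In the actual pipeline, each critic would also see real examples and be trained with its own discriminator loss; the sketch shows only how their signals combine on the generator side.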

    Aligning with our AI Principles

    The techniques described here fall into the broader category of synthetic media generation, which has rightfully attracted scrutiny due to its potential for abuse. Photorealistically manipulating videos could be misused to produce fake or misleading information that can create downstream societal harms, and researchers should be aware of these risks. Our use case of video dubbing, however, highlights one potential socially beneficial outcome of these technologies. Our new research in voice dubbing could help make educational lectures, video-blogs, public discourse, and other formats more widely accessible across a global audience. This is also only applied when consent has been given by the content owners and speakers.

    During our research, we followed our guiding AI Principles for developing and deploying this technology in a responsible manner. First, we work with the creators to ensure that any dubbed content is produced with their consent, and any generated media is identifiable as such. Second, we are actively working on tools and techniques for attributing ownership of original and modified content using provenance and digital watermarking techniques. Finally, our central goal is fidelity to the source-language video: the techniques discussed here serve only to amplify the potential social benefit to the user, while preserving the content’s original nature, style, and creator intent. We are continuing to determine how best to uphold and implement data privacy standards and safeguards before broader deployment of our research.

    The Opportunity Ahead

    We strongly believe that dubbing is a creative process. With these techniques, we strive to make a broader range of content available and enjoyable in a variety of other languages.

    We hope that our research inspires the development of new tools that democratize content in a responsible way. To demonstrate its potential, today we are releasing dubbed content for two online educational series, AI for Anyone and Machine Learning Foundations with TensorFlow, on the Google Developers LATAM channel.

    We have been actively working on expanding our scope to more languages and larger demographics of speakers — we have previously detailed this work, along with a broader discussion, in our research papers on voice transfer and lip reanimation.

    Interview with Vanessa Aristizabal, contributor to Google’s Dev Library

    Posted by the Dev Library Team

    We are back with another edition of the Dev Library Contributor Spotlights, a blog series highlighting developers who support the thriving development ecosystem by contributing their resources and tools to Google Dev Library.

    We met with Vanessa Aristizabal, one of the many talented developers contributing to Dev Library, to discuss her journey of learning the Angular framework and what drives her to share insights regarding budding technologies with the developer community.

    What is one thing that surprised you when you started using Google technology?

    Talking about my journey, Angular was my first JavaScript framework. So, I was really surprised when I started using it because with only a few lines of code, I could create a good application.

    What kind of challenges did you face when you were learning how to use Angular? How did you manage to overcome them?

    I would like to share that it is perhaps a common practice for developers: when we are working on some requirement for a project, we look it up on Google or Stack Overflow, and if we find a solution, we copy and paste the code without internalizing that knowledge. The same happened to me. Initially, I implemented bad practices because I did not know Angular completely, which led to poor performance in my applications.

    I overcame this challenge by checking the documentation properly and doing in-depth research on Google to learn good practices of Angular and implement them effectively in my applications. This approach helped me to solve all the performance-related problems.

    How and why did you start sharing your knowledge by writing blog posts?

    It was really difficult to learn Angular because, in the beginning, I did not have a solid basis in web development, so I first had to work on that. And during the process of learning Angular, I always had to research something or other, because sometimes I couldn’t find what I needed in the documentation.

    I had to refer to blogs, search on Google, or go through books to solve my requirements. And then I started taking some notes. From there on, I decided to start writing so I could help other developers who might be facing the same set of challenges. The idea was to help people find something useful and add value to their learning process through my articles.
    Find more content contributed and authored by Vanessa Aristizabal (@vanessamarely), and discover more unique tools and resources, on the Google Dev Library website!

    Google Home is officially ready for your Matter devices and apps

    Posted by Kevin Po, Group Product Manager

    Earlier this Fall, the Connectivity Standards Alliance released the Matter 1.0 standard and certification program, officially launching the industry into a new era of the smart home.

    We are excited to share that Google Nest and Android users are now ready for your Matter-enabled devices and apps. Many Android devices from Google and our OEM partners now support the new Matter APIs in Google Play services so you can update and build apps to support Matter. Google Nest speakers, displays, and Wi-Fi routers have been updated to work as hubs, and we have also updated Nest Wifi Pro, Nest Hub Max and the Nest Hub (2nd gen) to work as Thread border routers, so users can securely connect your Thread devices.

    Our top priority is to ensure both customers and developers have high-quality, reliable Matter devices. We are starting with Android devices and Google Nest speakers and displays, which are now Matter-enabled. These devices are ready to help users set up, automate, and use your devices wherever they interact with Google. Next up, we are working on bringing Matter support to the Google Home app on iOS in early 2023, and to other Nest devices such as Nest Wifi and Nest Thermostat.

    Building With Google Home

    As companies everywhere shift their focus to prioritize Matter, we have also expanded the resources available in the Google Home Developer Center to better support you in building your Matter devices, from beginning to end. At this one-stop shop for anyone interested in developing smart home devices and apps with Google, developers can now create and launch seamless Matter integrations with Google Home, apply for Works with Google Home certification, customize their product’s out-of-box experience in the Google Home app and on Android, and more. Let’s dive into what’s new.


    Even More Tools In Our SDKs

    We have been dedicated to building the most helpful tools to assist you in building Matter-enabled products and apps. We announced two software development kits for both device and mobile developers that make it easier to build with the open-source Matter SDK and integrate your devices and apps with Google. We’ve made them available to help with the development of your newest smart devices and apps.

    • Google Home Device SDK
      • Documentation and tutorials
      • Sample apps
    • Google Home Mobile SDK
      • Device commissioning APIs
      • Multi-admin (sharing) APIs
      • Thread credential APIs
      • Documentation and tutorials
      • Google Home Sample app for Matter

    Works With Google Home Certification

    Matter devices integrated and tested through the Google Home Developer Center can carry the Works With Google Home badge, which earlier this year replaced the Works With Hey Google badge. This badge gives users the utmost confidence that your devices work seamlessly with Google Home and Android.


    Early Access Program Partner Testimonials

    We understand that you want to build innovative, high-quality product integrations as quickly as possible, and we built our SDKs and tools to help you do just that. Since announcing them earlier this year, we have worked closely with dozens of Early Access Program (EAP) partners to ensure the tools we have created in the Google Home Developer Console achieve what we set out to do, before making them widely available to you all today.

    We’ve asked some of our EAP partners to share more about their experience building Matter devices with Google, to give you more insight on how building with Google’s end-to-end tools for Matter devices and apps can make a difference in your innovation and development process. After working closely with our partners, we are confident our tools allow you to accelerate time-to-market for your devices, improve reliability, and let you differentiate with Google Home while having interoperability with other Matter platforms.

    • From Eve Systems: “The outstanding expertise and commitment of the teams in Google’s Matter Early Access Program enabled us to leverage the potential of our products. We’re thrilled to be partnering with Google on Matter, an extraordinary project that has Thread at the heart.”
    • From Nanoleaf: “Nanoleaf has been working closely with Google as part of the Matter Early Access Program to bring Matter 1.0 to life. It’s been a pleasure collaborating with Google the past few years; the team’s vision of the helpful home deeply resonates with our goal of creating a smart home that is both intelligent and personalized to each person living in it. We’re very excited to see that vision borne out in Google’s initial Matter offering, and can’t wait to continue building on the potential of Matter together.”
    • From Philips Hue: “For us especially, the Matter Early Access Platform releases with documentation and instructions have been very useful. It meant we could already start Matter integration testing between Philips Hue and Google on early builds, to ensure seamless interoperability in the final release.”
    • From Tuya: “As a long-term ecosystem partner and an authorized solution provider of Google, Tuya has contributed to a wider application and implementation of Matter, as well as the promotion of Matter globally. In the future, we will continue to strengthen cooperation between Google and Tuya by integrating both parties’ ecosystems, technologies, and channels to support the implementation of Matter and enable global customers to achieve commercial success in the smart home and other industries.”

    Ready To Build?

    We are excited to see Matter come to life and the devices you build to further shape the smart home. Get started building your Matter devices today and stay up to date on our recent updates in the Google Home Developer Center.


    Help Shape The Future Of Google Products

    User feedback is critical to ensure we continue building more inclusive and helpful products. Join our developer research program and share feedback on all kinds of Google products & tools. Sign up here!
