Author Archives: Google Developers

Useful Android projects from Google Dev Library to help you #DevelopwithGoogle

Posted by Swathi Dharshna Subbaraj - Project Coordinator, Google Dev Library

Android offers developers a rich set of tools and SDKs/APIs for building innovative and engaging mobile apps. Developers can create applications for a large and growing user base of over 2.5 billion devices worldwide.

Google Dev Library curates open-source Android libraries created and contributed by developers from around the world. You can leverage its vast array of useful code samples, GitHub repos, and libraries, spanning Compose, networking, data storage, user interface design, and image processing, to build your own Android apps!

In this blog, we are sharing 7 popular projects by Android contributors. These projects are some of the most-viewed projects on the platform, and we hope they will give you a sneak peek into the type of interesting and innovative projects found on the platform. Let's dive into the list:

Coil by Colin White
Image loading for Android backed by Kotlin Coroutines

Coil is designed to be lightweight, efficient, and easy to use, and it offers a number of features such as automatic image caching, support for various image formats, and straightforward migration from popular image loading libraries like Glide and Picasso. If you are working on an Android app and need a reliable way to load and display images, this repository is definitely worth checking out!

LitePal by Lin Guo
An Android library that makes using the SQLite database extremely easy for developers

If you’re looking to streamline your database management processes, LitePal is an open source Android library that simplifies database management in your app development.

Tivi by Chris Banes
Tivi is a TV show tracking app that uses some of the latest Android libraries

Tivi showcases modern development practices, including the use of Android Jetpack and other libraries. This TV show tracking project helps developers learn about interesting and fun practices for Android development.

Showkase by Vinay Gaba
Showkase is an annotation-processor based Android library

Showkase helps you organize, discover, search and visualize Jetpack Compose UI elements. With minimal configuration it generates a UI browser that helps you easily find your components, colors & typography.

Pokedex by Jaewoong Eum
Pokedex follows Google's official Android architecture guidance

Pokedex demonstrates modern Android development with Hilt, Coroutines, Flow, Jetpack (Room, ViewModel), and Material Design based on MVVM architecture. The repository includes the app's layout, features, and functionality, as well as documentation on how to implement it yourself.

Learn-Jetpack-Compose-By-Example by Vinay Gaba
A resource for learning about the Android Jetpack Compose framework

If you are looking to learn or improve your knowledge of Jetpack Compose, Learn-Jetpack-Compose-By-Example contains a collection of example code and accompanying explanations for various components and features of Jetpack Compose. This repository aims to show the Jetpack Compose way of building common Android UI that we are accustomed to building.

Material Dialog by Shreyas Patil
MaterialDialog library is built upon Google's Material Design library

The author, Shreyas Patil, goes into detail about how to use the MaterialDialog library and provides code examples to demonstrate its capabilities. The library allows developers to easily create dialogs with a variety of customization options, such as adding buttons, selecting the theme, and setting the title and content. Overall, the MaterialDialog library is a useful tool for Android developers looking to implement Material Design in their apps.


We hope these projects will inspire and help guide your own development efforts. Join our global community of Android developers to showcase your projects and access tools and resources. To contribute, submit your content.

Migrating from App Engine Users to Cloud Identity Platform (Module 21)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

The Serverless Migration Station series is aimed at helping developers modernize their apps running one of Google Cloud's serverless platforms. The preceding (Migration Module 20) video demonstrates how to add use of App Engine's Users service to a Python 2 App Engine sample app. Today's Module 21 video picks up from where that leaves off, migrating that usage to Cloud Identity Platform.
How to migrate from App Engine Users to Cloud Identity Platform
Moving away from proprietary App Engine bundled services like Users makes apps more portable and gives them more flexibility.

    Understanding the overall migration

    Overall, Module 21 features major changes to the Module 20 sample app, implementing a move from App Engine bundled services (NDB & Users) to standalone Cloud services (Cloud Datastore & Identity Platform). Identity Platform doesn't know anything about App Engine admins, so that functionality must be built, requiring use of the Cloud Resource Manager API. Apps dependent on Python 2 require additional updates. Let's discuss in a bit more detail.

    Migration "parts"

    The following changes to the sample app are required:

    • Migrate from App Engine Users (server-side) to Cloud Identity Platform (client-side)
    • Migrate from App Engine NDB, the other bundled service used in Module 20, to Cloud NDB (requires use of the Cloud Datastore API)
    • Use the Cloud Resource Manager* (via its API) to fetch the Cloud project's IAM allow policy to collate the set of App Engine admin users for the app.
    • Use the Firebase Admin SDK to validate whether the user is an App Engine admin
    • Migrate from Python 2 to 3 (and possibly back to Python 2 [more on this below])
     
    *At the time of this writing, the Resource Manager documentation only features setup instructions for accessing the API from the lower-level Google APIs client library rather than the Resource Manager client library. To learn how to set up the latter, go to the Resource Manager client library documentation directly. The lower-level client library should only be used in circumstances when a Cloud client library doesn't exist or doesn't have the features your app needs. One such use case is Python 2, and we'll be covering that shortly.
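The admin-collation step described in the bullets above can be sketched in Python. Everything below is illustrative rather than the codelab's actual code: `collate_admins` and the sample policy are hypothetical, the set of roles treated as "admin-equivalent" is an assumption, and a real app would fetch the policy from the Cloud Resource Manager API (which requires credentials) instead of building a dict by hand.

```python
# Hedged sketch: collate App Engine "admin" users from a project's IAM
# allow policy. The policy shape mirrors what the Resource Manager API
# returns: {'bindings': [{'role': ..., 'members': [...]}, ...]}.

# Assumption: these basic roles are what we treat as "admin-equivalent".
_ADMIN_ROLES = frozenset(('roles/viewer', 'roles/editor', 'roles/owner'))

def collate_admins(policy):
    """Return the set of user emails holding an admin-equivalent role."""
    admins = set()
    for binding in policy.get('bindings', []):
        if binding.get('role') in _ADMIN_ROLES:
            for member in binding.get('members', []):
                # Members look like 'user:alice@example.com',
                # 'serviceAccount:svc@...', 'group:...', etc.
                kind, _, email = member.partition(':')
                if kind == 'user':
                    admins.add(email)
    return admins

# Hand-built stand-in for a policy fetched via the Resource Manager API.
sample_policy = {
    'bindings': [
        {'role': 'roles/owner', 'members': ['user:alice@example.com']},
        {'role': 'roles/viewer',
         'members': ['serviceAccount:svc@example.com', 'user:bob@example.com']},
        {'role': 'roles/logging.viewer', 'members': ['user:carol@example.com']},
    ],
}
admins = collate_admins(sample_policy)  # alice and bob; carol's role doesn't qualify
```

On the server side, the Firebase Admin SDK would then verify the signed-in user's ID token, and the verified email would be checked against this collated set to decide whether the user counts as an "admin."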
     

      Move from App Engine bundled services to standalone Cloud services

      The NDB to Cloud NDB migration is identical to the Module 2 migration content, so it's not covered in-depth here in Module 21. The primary focus is on switching to Identity Platform to continue supporting user logins as well as implementing use of the Resource Manager and Firebase Admin SDK to build a proxy for recognizing App Engine admin users as provided by the Users service. Below is pseudocode implementing the key changes to the main application where new or updated lines of code are bolded:

      [Table: changes in code 'Before' (Module 20) and 'After' (Module 21), migrating from App Engine Users to Cloud Identity Platform]

      The key differences to note:

      1. The server-side Users service code vanishes from the main application, moving into the (client-side) web template (not shown here).
      2. Practically all of the new code in the Module 21 app above is for recognizing App Engine admin users. There are no changes to app operations or data models other than Cloud NDB requiring use of Python context managers to wrap all Datastore code (using Python with blocks).
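The context-manager requirement in point 2 looks like this in practice. The snippet below is a self-contained mimic, not the real `google.cloud.ndb` package: `FakeNdbClient` and `fetch_visits` are hypothetical stand-ins for `ndb.Client()` and an entity query, purely to illustrate that every Datastore operation must run inside a `with client.context():` block.

```python
from contextlib import contextmanager

class FakeNdbClient:
    """Hypothetical stand-in for ndb.Client(), to show the wrap pattern."""
    def __init__(self):
        self._active = False

    @contextmanager
    def context(self):
        # Cloud NDB establishes a runtime context here; we just set a flag.
        self._active = True
        try:
            yield
        finally:
            self._active = False

    def fetch_visits(self, limit):
        # Real entity queries (e.g. Visit.query().fetch(limit)) fail outside
        # an active context; this mimic enforces the same rule.
        if not self._active:
            raise RuntimeError('Datastore calls must run inside client.context()')
        return ['visit-%d' % i for i in range(limit)]

client = FakeNdbClient()
with client.context():  # the Python `with` block the text refers to
    visits = client.fetch_visits(3)
```

The practical upshot is mechanical: every handler that touches Datastore gains one `with` block, and nothing else about the data model has to change.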

      Complete versions of the app before and after the updates can be found in the Module 20 (Python 2) and Module 21 (Python 3) repo folders, respectively. In addition to the video, be sure to check out the Identity Platform documentation as well as the Module 21 codelab which leads you step-by-step through the migrations discussed.

      Aside from the necessary coding changes and the move from server-side to client-side, note that use of the Users service is covered by App Engine's pricing model, while Identity Platform is an independent Cloud service billed by MAUs (monthly active users), so costs should be taken into account when migrating. More information can be found in the Identity Platform pricing documentation.

      Python 2 considerations

      With the sunset of Python 2, Java 8, PHP 5, and Go 1.11 by their respective communities, Google Cloud has assured users of continued long-term support for these legacy App Engine runtimes, including maintaining the Python 2 runtime. So while there is no current requirement for users to migrate, developers themselves are expressing interest in updating their applications to the latest language releases.
      The primary Module 21 migration includes a port from Python 2 to 3, as that's where most developers are headed. For those with dependencies that require remaining on Python 2, some additional effort is required to backport the Python 3 app.
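One common tactic for that extra effort is sketched below, under stated assumptions: keep a single code base importable under both interpreters and branch on `sys.version_info` only where the libraries diverge, with Python 2 falling back to the lower-level `googleapiclient` mentioned in the footnote earlier, since the Cloud client library requires Python 3. The function name and return strings are hypothetical placeholders, not the codelab's actual diff.

```python
from __future__ import print_function  # harmless on Python 3, needed on 2
import sys

PY2 = sys.version_info[0] == 2

def get_policy_fetcher():
    """Pick a Resource Manager access path based on the runtime (sketch)."""
    if PY2:
        # Hypothetical fallback: a Python 2 app would build a lower-level
        # googleapiclient service object here, because the Cloud client
        # library (google-cloud-resource-manager) does not support Python 2.
        return 'googleapiclient (lower-level API client)'
    return 'google-cloud-resource-manager (Cloud client library)'

choice = get_policy_fetcher()
```

This keeps the branch points small and auditable, so dropping the Python 2 path later is a one-line deletion rather than a rewrite.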


        The codelab covers this backport in-depth, so check out the specific section for Python 2 users if you're in this situation. If you don't want to think about it, just head to the repo for a working Python 2 version of the Module 21 app.

        Wrap-up

        Module 21 features migrations of App Engine bundled services to appropriate standalone Cloud services. While we recommend users modernize their App Engine apps by moving to the latest offerings from Google Cloud, these migrations are not required. In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you don't have to migrate to standalone services before porting your app to Python 3. You can continue using App Engine NDB and Users in Python 3 so long as you retrofit your code to access bundled services from next-generation runtimes. Then should you opt to migrate, you can do so on your own timeline.

        If you're using other App Engine legacy services be sure to check out the other Migration Modules in this series. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

        More Voices = More Bazel

        Posted by Lyra Levin, Technical Writer, Software Engineering

        Takeaways from the BazelCon DEI lunch panel


        In front of a standing-room-only lunch panel, Minu Puranik asks us, “If there is one thing you want to change [about Bazel’s DEI culture], what would it be and why?”

        We’d spent the last hour on three main themes: community culture, fostering trust, and growing our next generation of leaders. Moderated by Minu, the Strategy and Operations leader for DeveloperX & DevRel at Google, our panel brought together a slate of brilliant people from underrepresented genders and populations of color to give a platform to our experiences and ideas. Together with representatives and allies in the community, we explored methods to building inclusivity in our open source community and sought a better understanding of the institutional and systemic barriers to increasing diversity.

        Culture defines how we act, which informs who feels welcome to contribute. Studies show that diverse contributor backgrounds yield more and better results, so how do we create a culture where everyone feels safe to share, ask questions, and contribute? Helen Altshuler, co-founder and CEO of EngFlow, relayed her experience, “Having people that can have your back is important to get past the initial push to submit something and feeling like it’s ok. You don’t need to respond to everything in one go. Last year, Cynthia Coah and I gave a talk on how to make contributions to the Bazel community. Best practices which we can apply as a Bazel community: better beginners’ documentation, classifying GitHub issues as "good first issue", and having Slack channels where code owners can play a more active role.” Diving further, we discussed the need to make sure new contributors get positive, actionable feedback to reward them with context and resources, and encourage them to take the risk of contributing to the codebase.

        This encouragement of new contributors feeds directly into the next generation of technical influencers and leaders. Eva Howe, co-founder and Legal Counsel for Aspect, addressed the current lack of diversity in the community pipeline. “I’d like to see more trainings like the Bazel Community Day. Trainings serve 2 purposes:

        1. People can blend in, start talking to someone in the background and form connections.
        2. When someone goes through a bootcamp or CS course, Bazel is not mentioned. Nobody cares that the plumbing works until it doesn’t work. We need to educate people and give them that avenue and a good experience to move forward. I struggle with the emotional side of it - I count myself out before I get somewhere. It needs to be a safe space, which it hasn’t been in the past.”

        In addition to industry trainings, the audience and panel brought up bootcamps and university classes as rich sources to find and promote diversity, though cautioned that it takes active, ongoing effort to maintain an environment that diverse candidates are willing to stay in. There are fewer opportunities to take risks as part of an underrepresented group, and the feeling that you have to succeed for everyone who looks like you creates a high-pressure environment that is worse for learning outcomes.

        To bypass this pipeline problem, we can recruit promising candidates and sponsor them through getting the necessary experience on the job. Lyra Levin, Bazel’s internal technical writer at Google, spoke to this process of incentivizing and recognizing contributions outside the codebase, as a way to both encourage necessary glue work, and pull people into tech from parallel careers more hospitable to underrepresented candidates.

        She said, “If someone gives you an introduction to another person, recognize that. Knowing a system of people is work. Knowing where to find answers is work. Saying I’m going to be available and responding to emails is work. If you see a conversation where someone is getting unhelpful pushback, jump in and moderate it. Reward those who contribute by creating a space that can be collaborative and supportive.”

        Sophia Vargas, Program Manager in Google’s OSPO (Open Source Programs Office), chimed in, “Create ways to recognize non-code contributions. One example is a markdown file describing other forms of contribution, especially in cases that do not generate activity attached to a name on GitHub.”

        An audience member agreed, “A positive experience for the first few PRs is very critical for building trust in the community.”

        And indeed, open source is all about building trust. So how do we go about building trust? What should we do differently? Radhika Advani, Bazel’s product manager at Google, suggests that the key is to “have some amazing allies”. “Be kind and engage with empathy,” she continued, “Take your chances - there are lots of good people out there. You have to come from a place of vulnerability.”

        Sophia added some ideas for how to be an “amazing ally” and sponsor the careers of those around you. “Create safe spaces to have these conversations. Not everyone is bold enough to speak up or to ask for support, as raising issues in a public forum can be intimidating. Make yourself accessible, or provide anonymous forms for suggestions or feedback — both can serve as opportunities to educate yourself and to increase awareness of diverging opinions.” An audience member added, “If you recognize that an action is alienating to a member of your group, even just acknowledging their experience or saying something to the room can be very powerful to create a sense of safety and belonging.” Another said, “If you’re in a leadership position, when you are forthright about the limits of your knowledge, it gives people the freedom to not know everything.”

        So to Minu’s question, what should we do to improve Bazel’s culture?

        Helen: Create a governance group on Slack to ensure posts are complying with the community code of conduct guidelines. Review how this is managed for other OSS communities.

        Sophia: Institutionalize mentorship; have someone else review what you’ve done and give you the confidence to push a change. Nurture people. We need to connect new and established members of the community.

        Lyra: Recruit people in parallel careers paths with higher representation. Give them sponsorship to transition to tech.

        Radhika: Be more inclusive. All the jargon can get overwhelming, so let’s consider how we can make things simpler, including with non-technical metaphors.

        Eva: Consider what each of us can do to make the experience for people onboarding better.

        There are more ways to be a Bazel contributor than raising PRs. Being courageous, vulnerable and open contributes to the culture that creates the code. Maintainers — practice empathy and remember the human on the other side of the screen. Be a coach and a mentor, knowing that you are opening the door for more people to build the product you love, with you. Developers — be brave and see the opportunities to accept sponsorship into the space. Bazel is for everyone.

        Welcome.

        Introducing the Earth Engine Google Developer Experts (GDEs)

        Posted by Tyler Erickson, Developer Advocate, Google Earth Engine

        One of the greatest things about Earth Engine is the vibrant community of developers who openly share their knowledge about the platform and how it can be used to address real-world sustainability issues. To recognize some of these exceptional community members, in 2022 we launched the initial cohort of Earth Engine Google Developer Experts (GDEs). You can view the current list of Earth Engine GDEs on the GDE Directory page.

        The initial cohort of Earth Engine Google Developer Experts.
        What makes an Earth Engine expert? Earth Engine GDEs are selected based on their expertise in the Earth Engine product (of course), but also for their knowledge sharing. They share their knowledge in many ways, including answering questions from other developers, writing tutorials and blogs, teaching in settings spanning from workshops to university classes, organizing meetups and conference sessions that allow others to share their work, building extensions to the platform, and so much more!

        To learn more about the Google Developer Experts program and the Earth Engine GDEs, go to https://developers.google.com/community/experts.

        Now that it is 2023, we are re-opening the application process for additional Earth Engine GDEs. If you’re interested in being considered, you can find information about the process in the GDE Program Application guide.


        Solution Challenge 2023: Use Google Technologies to Address the United Nations’ Sustainable Development Goals

        Posted by Rachel Francois, Google Developer Student Clubs, Global Program Manager

        Each year, the Google Developer Student Clubs Solution Challenge invites university students to develop solutions for real-world problems using one or more Google products or platforms. How could you use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action?

        Join us to build solutions for one or more of the United Nations 17 Sustainable Development Goals. These goals were agreed upon in 2015 by all 193 United Nations Member States and aim to end poverty, ensure prosperity, and protect the planet by 2030.

        One 2022 Solution Challenge participant said, “I love how it provides the opportunity to make a real impact while pursuing undergraduate studies. It helped me practice my expertise in a real-world setting, and I built a project I can proudly showcase on my portfolio.”

        Solution Challenge prizes

        Participants will receive specialized prizes at different stages:

        • The top 100 teams receive customized mentorship from Google and experts to take solutions to the next level, branded swag, and a certificate.
        • The top 10 finalists receive additional mentorship, a swag box, and the opportunity to showcase their solutions to Google teams and developers all around the world during the virtual 2023 Solution Challenge Demo Day, live on YouTube.
        • Contest finalists - In addition to the swag box, each individual from the seven teams not in the top three will receive a cash prize of $1,000 per student. Winnings for each qualifying team will not exceed $4,000.
        • Top 3 winners - In addition to the swag box, each individual from the top 3 winning teams will receive a cash prize of $3,000 and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.
         

        Joining the Solution Challenge

        There are four steps to join the Solution Challenge and get started on your project:

        1. Register at goo.gle/solutionchallenge and join a Google Developer Student Club at your college or university. If there is no club at your university, you can join the closest one through our event platform.
        2. Select one or more of the United Nations 17 Sustainable Development Goals to address.
        3. Build a solution using Google technology.
        4. Create a demo and submit your project by March 31, 2023. 

          Google Resources for Solution Challenge participants

          Google will support Solution Challenge participants with resources to help students build strong projects, including:

          • Live online sessions with Q&As
          • Mentorship from Google, Google Developer Experts, and the Google Developer Student Club community
          • Curated Codelabs designed by Google Developers
          • Access to Design Sprint guidelines developed by Google Ventures
          • and more!
          “During the preparation and competition, we learned a great deal,” said a 2022 Solution Challenge team member. “That was part of the reason we chose to participate in this competition: the learning opportunities are endless.”

          Winner announcement dates

          Once all projects are submitted, our panel of judges will evaluate and score each submission using specific criteria.

          After that, winners will be announced in three rounds.

          Round 1 (April): The top 100 teams will be announced.

          Round 2 (June): After the top 100 teams submit their new and improved solutions, 10 finalists will be announced.

          Round 3 (August): The top 3 grand prize winners will be announced live on YouTube during the 2023 Solution Challenge Demo Day.

          We can’t wait to see the solutions you create with your passion for building a better world, coding skills, and a little help from Google technologies.

          Learn more and sign up for the 2023 Solution Challenge here.


          I got the time to push my creativity to the next level. It helped me attain more information from more knowledgeable people by expanding my network. Working together and building something was a great challenge and one of the best experiences, too. I liked the idea of working on the challenge to present a solution.

          ~2022 Solution Challenge participant

          From GDSC Lead to Flutter developer: Lenz Paul’s journey

          Posted by Kübra Zengin, North America Regional Lead, Google Developers

          Switching careers at age 30, after eight years on the job, is a brave thing to do. Lenz Paul spent eight years working in sales at Bell, a large Canadian telecommunications company. During that time, he found his interest sparked by the technology solutions that helped him do his job more effectively. He decided to follow his passion and switch careers to focus on engineering.

          “I found I had a knack for technology and was curious about it,” he says. “Sales has so many manual processes, and I always proposed tech solutions and even learned a little programming to make my daily life easier.”

          At 30, Lenz entered Vancouver Island University in Canada to pursue a computer science degree.

          Becoming a GDSC Lead

          When his department chair, Gara Pruesse, emailed students about the opportunity to start and lead a Google Developer Student Club at the university, Lenz jumped at the chance and applied. He was selected to be the GDSC Vancouver Island University lead in 2019 and led the group for a year. He hoped to meet more technology enthusiasts and to use the programming skills he was learning in his courses to help other students who wanted to learn Google technologies.

          “It was my first year at university, starting in the technology field as a career transition, and I was hoping to network,” Lenz recalls. “I read the [GDSC] description about sharing knowledge and thought this would be a way to use the skills I was learning in school practically, meet people, and impact my community.”

          As part of his role as a GDSC lead, Lenz used Google workshops, Google Cloud Skills Boost, and codelabs to learn Google Cloud and Flutter outside of class. Google provided Cloud credits to GDSC Vancouver Island University, which Lenz shared with his seven core team members, so they could all try new tools and figure out how to teach their fellow students to use them. Then, they taught the rest of the student club how to use Google technologies. They also hosted two Google engineers on-site as speakers.

          “GDSC helped me with personal skills, soft skills, such as public speaking and leadership,” Lenz says. “The highlight was that I learned a lot about Google Cloud technologies, by holding workshops and delivering content.”

          Working for Google’s Tech Equity Collective

          Following his GDSC experience, Lenz worked as a contractor for Google’s Tech Equity Collective, which focuses on improving diversity in tech. Lenz tutored a group of 20 students for about six weeks before being promoted to an instructor role and then becoming the lead teacher for Google Cloud technologies.

          “My GDSC experience really helped me shine in my teaching role,” Lenz says. “I was already very familiar with Google Cloud, due to the many workshops I organized and taught for my GDSC, so it was easy to adapt it for my TEC class.”

          Becoming a Flutter developer

          Lenz began using Flutter in 2019, when the technology was just two years old. He points out that there is nobody in the world with ten years of experience in Flutter; it’s relatively new for everyone.

          “I love the ecosystem, and it was a great time to get started with it,” he says. “It’s such a promising technology.”

          Lenz says Dart, Flutter’s programming language, is a powerful, modern, object-oriented language with C-style syntax that’s easy to learn and easy to use.

          “Dart is clean, easy to scale, can support any size project, and it has the backing of Google,” Lenz says. “It’s a great language for building scripts or server apps. It allows for building mobile apps, web apps, and desktop apps, and it even supports embedded devices via the Flutter Framework.”

          Lenz used Google Cloud Skills Boost and Google codelabs to learn Flutter and is now a full-time Flutter developer.

          “I’m excited about the future of Flutter and Dart,” he says. “I think Flutter is going to continue to be a big player in the app development space and can't wait to see what the Flutter team comes up with next.”

          In August 2022, CMiC (Computer Methods International Corporation) in Toronto, a construction ERP software company, hired Lenz as a full-time Flutter developer. He’s a Level 2 software engineer. CMiC builds enterprise-grade apps using Flutter for iOS, Android and web from a single code base. Flutter also helped him further his understanding of Google Cloud. As he used Flutter and learned more about backend services and how they communicate with frontend apps, he increased his knowledge of Google Cloud.

          While learning Flutter, Lenz built several apps using Firebase backend as a service (BaaS) because Firebase provides many Flutter plugins that are easy to integrate into your apps, such as authentication, storage, and analytics.

          "Firebase will help you get your app up and running quickly and easily,” says Lenz. “I also used Firebase app distribution to share the apps while I was developing them, which allowed me to quickly get feedback from testers without having to go through the app stores.”

          Lenz encourages new developers looking to advance in their careers to try out Google Cloud Skills Boost, codelabs, six-month certificates, and the technical development guide.

          “The experience I gained as a GDSC lead and at Google’s Tech Equity Collective prepared me for my software engineer role at CMiC in Toronto, Canada, and I thank GDSC for my current and future opportunities,” Lenz says.

          What’s next for Lenz

          Right now, Lenz is focused on mastering his full-time role, so he’s pacing himself with regard to other commitments. He wants to make an impact and to recruit other Canadians to the technology field. The Tech Equity Collective’s mission is amazing and aligns with his values of enabling community and sharing knowledge. He’d like to continue to participate in something that would align with these values.

          “It’s the greatest feeling when I can make a difference,” he says.

          If you’re a student and would like to join a Google Developer Student Club community, look for a chapter near you here. Interested in becoming a GDSC lead like Lenz? Visit this page for more information. Don’t have a GDSC at your university, but want to start one? Visit the program page to learn more.

          Meet Android Developers from India keen to learn and inspire

          Posted by Vishal Das, Community Manager

          This year the Google Developer Educators India team launched the “Android Learn and Inspire Series” for Android Developers who were eager to learn Jetpack Compose and inspire others to upskill. Meet the developers who completed the series and hosted workshops on Jetpack Compose to find out their motivation to teach others!

          Alankrita Shah, Lead Android Developer, Bolo Live

          How did you get started with Android Development?

          My journey with Android started back in my 3rd year of undergraduate studies. I got an internship at a startup, where I learned to develop an application that lets users watch videos. It was a simple application, but it helped me start exploring Android development. I was always in awe of the capabilities of Android applications.

          What keeps you motivated to learn and stay up to date?

          In Android development, there are frequent updates that help developers write fast and efficient code. Keeping up with it would help build good quality products. Becoming part of communities where you can discuss and share best practices is an interesting way to learn and grow.

          Which method of knowledge sharing did you find most effective?

          I experimented with a few methods in the Android Learn and Inspire series. There are a few that I found quite effective.

• Fun activities: Adding some fun activities brings energy to the session. You can design activities that reinforce the session's learnings in an engaging way.
• Write-ups for the topics covered: After the session, share a blog post and/or the code so that members can revisit what they learned.



          Amardeep Kumar, Android Engineer, Walmart

          How did you get started with Android Development?

I completed my Engineering in Information Technology from Siliguri Institute of Technology back in 2011. I was one of those unlucky 10% of students who graduated without any job offer. After a few months of struggle, I got a job offer from a company called Robosoft (this time I was one of only 3 selected out of 2,000+ candidates). I started as an Android developer from day 1 at Robosoft, back in the Honeycomb and Ice Cream Sandwich days.

          What keeps you motivated to learn and share?

          One thing was consistent in my Android journey and that was connecting with good Android developers. BlrDroid, GDG Bangalore, Udacity Nanodegree and the Android community helped me to connect with people and learn every day. Solving tech problems and Android tech discussions are part of daily life. I like to develop Android apps because of its reach in countries like India. Open source is also one of the reasons to love Android. I got trained in my first job from my seniors on Android and that motivated me to share my Android knowledge in the community.

          Which method of knowledge sharing did you find most effective?

One tip I would like to share: let's bring forward those good Android engineers who are experts at solving Android problems but shy about sharing their knowledge.

          How to use the App Engine Users service (Module 20)

          Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


          Introduction and background

The Serverless Migration Station video series and corresponding codelabs aim to help App Engine developers modernize their apps, whether that's upgrading language runtimes, such as from Python 2 to 3 or Java 8 to 17, or moving laterally to sister serverless platforms like Cloud Functions or Cloud Run. For developers who want more control, such as the ability to SSH into instances, Compute Engine VMs or GKE (our managed Kubernetes service) are also viable options.

In order to consider moving App Engine apps to other compute services, developers must move their apps away from App Engine's original APIs (now referred to as legacy bundled services), either to standalone Cloud replacements or to alternative third-party services. Once no longer dependent on these proprietary services, apps become much more portable. Apps can stay on App Engine while upgrading to its 2nd-generation platform, or move to other compute platforms as listed above.

          Today's Migration Module 20 content focuses on helping developers refamiliarize themselves with App Engine's Users service, a user authentication system serving as a lightweight wrapper around Google Sign-In (now called Google Identity Services). The video and its corresponding codelab (self-paced, hands-on tutorial) demonstrate how to add use of the Users service to the sample baseline app from Module 1. After adding the Users service in Module 20, Module 21 follows, showing developers how to migrate that usage to Cloud Identity Platform.

          How to use the App Engine Users service

          Adding use of Users service


The sample app's basic functionality consists of registering each page visit in Datastore and displaying the most recent visits. The Users service helps apps support user logins and recognize App Engine administrative ("admin") users. It also provides convenient functions for generating login/logout links and retrieving basic user information for logged-in users. Below is a screenshot of the modified app, which now supports user logins via the user interface (UI):
Sample app now supports user logins and App Engine admin users
          Below is the pseudocode reflecting the changes made to support user logins for the sample app, including integrating the Users service and updating what shows up in the UI:
          • If the user is logged in, show their "nickname" (display name or email address) and display a Logout button. If the logged-in user is an App Engine app admin, also display an "admin" badge (between nickname and Logout button).
          • If the user is not logged in, display the username generically as "user", remove any admin badge, and display a Login button.
          Because the Users service is primarily a user-facing endeavor, the most significant changes take place in the UI, whereas the data model and core functionality of registering visits remain unchanged. The new support for user management primarily results in additional context to be rendered in the web template. New or altered code is bolded to highlight the updates.
Adding App Engine Users service usage to the sample app: code "before" (Module 1) on the left, "after" (Module 20) on the right
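On App Engine itself, the relevant calls are `users.get_current_user()`, `users.create_login_url()`, `users.create_logout_url()`, and `users.is_current_user_admin()` from `google.appengine.api.users`. The helper below is a hypothetical, framework-free sketch (not the actual Module 20 code) of how those results could feed the web template context described in the pseudocode above:

```python
def build_context(user, is_admin, login_url='/login', logout_url='/logout'):
    """Build the template context for the visit-listing page.

    On App Engine, `user` would come from users.get_current_user(), `is_admin`
    from users.is_current_user_admin(), and the URLs from
    users.create_login_url() / users.create_logout_url().
    """
    if user:  # logged in: show nickname, optional admin badge, and Logout link
        return {
            'who': user['nickname'],
            'admin': ' (admin)' if is_admin else '',
            'sign': 'Logout',
            'link': logout_url,
        }
    # logged out: generic "user", no admin badge, and a Login link
    return {
        'who': 'user',
        'admin': '',
        'sign': 'Login',
        'link': login_url,
    }
```

The template then simply renders `who`, `admin`, and a button labeled `sign` pointing at `link`, which is why the data model and visit-registration logic need no changes.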

          Wrap-up


          Today's "migration" consists of adding usage of the App Engine Users service to support user management and recognize App Engine admin users, starting with the Module 1 baseline app and finishing with the Module 20 app. To get hands-on experience doing it yourself, try the codelab and follow along with the video. Then you'll be ready to upgrade to Identity Platform should you choose to do so.

          In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate from the Users service to Identity Platform when porting your app to Python 3. You can continue using the Users service in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.
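For reference, that retrofit is mostly configuration: you opt in to the bundled services in app.yaml and wrap your WSGI app so requests carry the API context. A minimal sketch, assuming a Flask app named `app`:

```
# app.yaml: opt in to bundled services on the Python 3 runtime
#
#   runtime: python39
#   app_engine_apis: true

from flask import Flask
from google.appengine.api import wrap_wsgi_app

app = Flask(__name__)
# wrap_wsgi_app() makes bundled services (users, memcache, etc.) available
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)
```

With that wiring in place, `google.appengine.api.users` calls work from Python 3 much as they did on the 1st-generation runtime.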

If you do want to move to Identity Platform, see the Module 21 content, including its codelab. All Serverless Migration Station content (codelabs, videos, and source code [when available]) is available at its open source repo. While we're initially focusing on Python users, the Cloud team is covering other runtimes soon, so stay tuned. Also check out other videos in the broader Serverless Expeditions series.

          Improving Video Voice Dubbing Through Deep Learning

          Posted by Paul McCartney, Software Engineer, Vivek Kwatra, Research Scientist, Yu Zhang, Research Scientist, Brian Colonna, Software Engineer, and Mor Miller, Software Engineer

          People increasingly look to video as their preferred way to be better informed, to explore their interests, and to be entertained. And yet a video’s spoken language is often a barrier to understanding. For example, a high percentage of YouTube videos are in English but less than 20% of the world's population speaks English as their first or second language. Voice dubbing is increasingly being used to transform video into other languages, by translating and replacing a video’s original spoken dialogue. This is effective in eliminating the language barrier and is also a better accessibility option with regard to both literacy and sightedness in comparison to subtitles.

          In today’s post, we share our research for increasing voice dubbing quality using deep learning, providing a viewing experience closer to that of a video produced directly for the target language. Specifically, we describe our work with technologies for cross-lingual voice transfer and lip reanimation, which keeps the voice similar to the original speaker and adjusts the speaker’s lip movements in the video to better match the audio generated in the target language. Both capabilities were developed using TensorFlow, which provides a scalable platform for multimodal machine learning. We share videos produced using our research prototype, which are demonstrably less distracting and - hopefully - more enjoyable for viewers.

          Cross-Lingual Voice Transfer

          Voice casting is the process of finding a suitable voice to represent each person on screen. Maintaining the audience’s suspension of disbelief by having believable voices for speakers is important in producing a quality dub that supports rather than distracts from the video. We achieve this through cross-lingual voice transfer, where we create synthetic voices in the target language that sound like the original speaker voices. For example, the video below uses an English dubbed voice that was created from the speaker’s original Spanish voice.

          Original “Coding TensorFlow” video clip in Spanish.

          The “Coding TensorFlow” video clip dubbed from Spanish to English, using cross-lingual voice transfer and lip reanimation.

          Inspired by few-shot learning, we first pre-trained a multilingual TTS model based on our cross-language voice transfer approach. This approach uses an attention-based sequence-to-sequence model to generate a series of log-mel spectrogram frames from a multilingual input text sequence with a variational autoencoder-style residual encoder. Subsequently, we fine-tune the model parameters by retraining the decoder and attention modules with a fixed mixing ratio of the adaptation data and original multilingual data as illustrated in Figure 1.

          Figure 1: Voice transfer architecture
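The fixed mixing ratio used during fine-tuning can be illustrated with a small, hypothetical batch sampler (not the actual training code): each example in a batch is drawn from the speaker's adaptation data with probability equal to the mixing ratio, and from the original multilingual corpus otherwise.

```python
import random

def mixed_batches(adapt_data, multi_data, mix_ratio, batch_size, steps, seed=0):
    """Yield fine-tuning batches mixing adaptation and multilingual
    examples at a fixed ratio (a sketch of the idea, not the real pipeline)."""
    rng = random.Random(seed)
    for _ in range(steps):
        yield [
            # draw from adaptation data with probability mix_ratio,
            # otherwise fall back to the original multilingual data
            rng.choice(adapt_data) if rng.random() < mix_ratio
            else rng.choice(multi_data)
            for _ in range(batch_size)
        ]
```

Keeping some original multilingual data in every batch is what lets the retrained decoder and attention modules adapt to the target voice without forgetting the multilingual behavior of the pre-trained model.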

          Note that voice transfer and lip reanimation is only done when the content owner and speakers give consent for these techniques on their content.

          Lip Reanimation

          With conventionally dubbed videos, you hear the translated / dubbed voices while seeing the original speakers speaking the original dialogue in the source language. The lip movements that you see in the video generally do not match the newly dubbed words that you hear, making the combined audio/video look unnatural. This can distract viewers from engaging fully with the content. In fact, people often even intentionally look away from the speaker’s mouth while watching dubbed videos as a means to avoid seeing this discrepancy.

          To help with audience engagement, producers of higher quality dubbed videos may put more effort into carefully tailoring the dialogue and voice performance to partially match the new speech with the existing lip motion in video. But this is extremely time consuming and expensive, making it cost prohibitive for many content producers. Furthermore, it requires changes that may slightly degrade the voice performance and translation accuracy.

          To provide the same lip synchronization benefit, but without these problems, we developed a lip reanimation architecture for correcting the video to match the dubbed voice. That is, we adjust speaker lip movements in the video to make the lips move in alignment with the new dubbed dialogue. This makes it appear as though the video was shot with people originally speaking the translated / dubbed dialogue. This approach can be applied when permitted by the content owner and speakers.

          For example, the following clip shows a video that was dubbed in the conventional way (without lip reanimation):

“Machine Learning Foundations” video clip dubbed from English to Spanish, with voice transfer but without lip reanimation

          Notice how the speaker’s mouth movements don’t seem to move naturally with the voice. The video below shows the same video with lip reanimation, resulting in lip motion that appears more natural with the translated / dubbed dialogue:

          The dubbed “Machine Learning Foundations” video clip, with both voice transfer and lip reanimation

          For lip reanimation, we train a personalized multistage model that learns to map audio to lip shapes and facial appearance of the speaker, as shown in Figure 2. Using original videos of the speaker for training, we isolate and represent the faces in a normalized space that decouples 3D geometry, head pose, texture, and lighting, as described in this paper. Taking this approach allows our first stage to focus on synthesizing lip-synced 3D geometry and texture compatible with the dubbed audio, without worrying about pose and lighting. Our second stage employs a conditional GAN-based approach to blend these synthesized textures with the original video to generate faces with consistent pose and lighting. This stage is trained adversarially using multiple discriminators to simultaneously preserve visual quality, temporal smoothness and lip-sync consistency. Finally, we refine the output using a custom super-resolution network to generate a photorealistic lip-reanimated video. The comparison videos shown above can also be viewed here.


          Figure 2: Lip-Reanimation Pipeline: inference blocks in blue, training blocks in red.

          Aligning with our AI Principles

          The techniques described here fall into the broader category of synthetic media generation, which has rightfully attracted scrutiny due to its potential for abuse. Photorealistically manipulating videos could be misused to produce fake or misleading information that can create downstream societal harms, and researchers should be aware of these risks. Our use case of video dubbing, however, highlights one potential socially beneficial outcome of these technologies. Our new research in voice dubbing could help make educational lectures, video-blogs, public discourse, and other formats more widely accessible across a global audience. This is also only applied when consent has been given by the content owners and speakers.

During our research, we followed our guiding AI Principles for developing and deploying this technology in a responsible manner. First, we work with the creators to ensure that any dubbed content is produced with their consent, and any generated media is identifiable as such. Second, we are actively working on tools and techniques for attributing ownership of original and modified content using provenance and digital watermarking techniques. Finally, our central goal is fidelity to the source-language video. The techniques discussed herein serve that purpose only: to amplify the potential social benefit to the user, while preserving the content’s original nature, style and creator intent. We are continuing to determine how best to uphold and implement data privacy standards and safeguards before broader deployment of our research.

          The Opportunity Ahead

          We strongly believe that dubbing is a creative process. With these techniques, we strive to make a broader range of content available and enjoyable in a variety of other languages.

We hope that our research inspires the development of new tools that democratize content in a responsible way. To demonstrate its potential, today we are releasing dubbed content for two online educational series, AI for Anyone and Machine Learning Foundations with TensorFlow, on the Google Developers LATAM channel.

          We have been actively working on expanding our scope to more languages and larger demographics of speakers — we have previously detailed this work, along with a broader discussion, in our research papers on voice transfer and lip reanimation.