Author Archives: Google Developers

Year in Review: 12 awesome ways for developers to learn, build, and grow with Google Workspace

Posted by Matthew Izatt, Product Lead, Google Workspace Platform

For millions of our customers, 2022 brought an abundance of change in the way they connect, collaborate, and get things done. Frontline workers at customers like Globe Telecom and general contractor BHI benefited from digital transformation on all fronts by quickly getting the apps they needed to do their jobs in the field. Office and remote workers, meanwhile, adjusted to hybrid work by leveraging ready-made tools from partners like DocuSign and Asana or they built custom desk booking applications.

2022 was also a year of growth. Google Workspace now has more than 3 billion users and over 8 million paying customers across the globe. And the Google Workspace Marketplace passed a lifetime milestone of driving more than 5 billion app installs. To wrap up a year marked by so much change, we’ve recapped some of the biggest updates that make Google Workspace the most open and extensible platform for users, customers, and developers alike.

1.    Build software with more agility with our DevOps integrations

    Google Workspace gives you real-time visibility into project progress and decisions to help you ship quality code fast and stay connected with your stakeholders, all without switching tools and tabs. By leveraging integrated applications from our partners, you can pull valuable information out of silos, making collaborating on requirements docs, code reviews, bug triage, deployment updates, and monitoring operations easy for the whole team. This year we partnered with popular DevOps tools to help you do your job better:

• Asana: Plan and execute together: with Asana integrations, you can coordinate and manage everything from daily tasks to cross-functional strategic initiatives.
    • GitHub: Teams can quickly push new commits, make pull requests, do code reviews, and provide real-time feedback that improves the quality of their code—all from Google Chat.
    • Jira: Accelerate the entire QA process in the development workflow. The Jira for Google Chat app acts as a team member in the conversation, sending new issues and contextual updates as they are reported to improve the quality of your code and keep everyone informed on your Jira projects.
    • PagerDuty: Enables developers, DevOps, IT operations, and business leaders to prevent and resolve business-impacting incidents for an exceptional customer experience—all from Google Chat.
     
    2.    Apply to our Developer Preview Program: get early access to upcoming platform features

This year we launched the Google Workspace Developer Preview Program to give you early access to new APIs and keep you up to date on the latest changes to the Google Workspace platform. Features in developer preview have already completed early development phases, so they're ready for implementation. This program gives you the chance to shape the final stages of feature development with your feedback, get pre-release support, and have your integration ready for public use on launch day. Apply to the Developer Preview Program today.

For Google Chat this year, we announced that you could programmatically create new spaces and add members on behalf of users via the Google Chat API. These latest additions to the Chat API unlock some sought-after scenarios for developers looking for new ways to extend Chat. For example, PagerDuty leveraged the API as part of their PagerDuty for Google Chat app. The app allows the incident team to isolate and focus on the problem at hand without being distracted by having to set up a new space, or further distracting folks in the current space who aren’t part of the resolution team for a specific incident. All of this is done seamlessly through PagerDuty for Chat as part of the natural flow of working with Google Chat.

    Screen grab of PagerDuty for Google Chat keeping a demo business up to date on service-impacting incidents.
    PagerDuty for Google Chat keeps the business up to date on service-impacting incidents.
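
As a rough illustration of what these APIs enable (not PagerDuty's actual implementation), the sketch below creates a space and adds a member using the Chat API with the Google API Python client; the credentials setup, space name, and member email are placeholder assumptions.

    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    # Assumption: credentials configured for the Chat API with the
    # chat.spaces.create and chat.memberships scopes (file name is a placeholder).
    creds = service_account.Credentials.from_service_account_file(
        'service-account.json',
        scopes=['https://www.googleapis.com/auth/chat.spaces.create',
                'https://www.googleapis.com/auth/chat.memberships'])

    chat = build('chat', 'v1', credentials=creds)

    # Create a dedicated space for an incident (display name is illustrative).
    space = chat.spaces().create(
        body={'spaceType': 'SPACE', 'displayName': 'Incident #1234 response'}).execute()

    # Add a responder to the new space (email is a placeholder).
    chat.spaces().members().create(
        parent=space['name'],
        body={'member': {'name': 'users/[email protected]', 'type': 'HUMAN'}}).execute()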

We are adding functionality to Chat apps so developers can soon add widgets like a date-time picker, or design their layout with multiple columns to make better use of space. We believe these new layout options will open more ways for developers to build engaging apps for users. To help users find and learn more about apps, we’ve added “About pages” for apps and made apps discoverable in the compose bar in Chat. Apply to our Developer Preview Program to get early access to the Google Chat APIs.

We also announced new functionality for app developers to leverage the Google Meet video conferencing product through our new Meet Live Sharing API. Users can now come together and share experiences with each other inside an app, such as streaming a TV show, queuing up videos to watch on YouTube, collaborating on a music playlist, joining in a dance party, or working out together through Google Meet. If you want to try out the APIs, you can apply for access through the Developer Preview Program.

    Moving image showing how Miro for Google Meet uses the new Meet APIs for an integrated experience within Meet.
    Miro for Google Meet uses the new Meet APIs for an integrated experience within Meet.

    3.    Connect your customers with critical information with Smart Chips for Google Docs

We expanded smart chips to our ecosystem of partners, allowing users to bring even more rich data, context, and critical information right into the flow of their work. With these new third-party smart chips, you will be able to tag and see critical information from partner applications using @-mentions, and easily insert interactive information and previews from third-party apps directly into a Google Doc. Several of our partners, including AODocs, Atlassian, Asana, Figma, Miro, Tableau, and Zendesk, are now developing third-party smart chips to add more value to your Google Docs experience. Smart chips will be available to developers to build out their app integrations in 2023.

Moving image showing how Smart Chips work in Google Docs
Smart Chips will be available to third-party developers in 2023.

      4.    Grow your business with the Recommended for Google Workspace program

Each year, we evaluate the apps on Google Workspace Marketplace and recommend a select number that are enhancing the Google Workspace experience and helping people work in powerful new ways. Each undergoes reviews by both Google and an independent third-party security firm to ensure they meet our highest standards of integration and security requirements. For 2022, here’s the selection of Recommended for Google Workspace apps: AODocs, Copper, Dialpad, DocuSign, LumApps, Mailmeteor, Miro, RingCentral, Sheetgo, Signeasy, Supermetrics, and Yet Another Mail Merge. Applications for Recommended for Google Workspace are now open; apply today.

The Recommended for Google Workspace 2023 application is open!
Become a Recommended for Google Workspace app for 2023; apply today.

      5.    Manage Google Workspace APIs with ease

We recently added a unified way to access Google Workspace APIs through the Google Cloud Console—APIs for Gmail, Google Drive, Docs, Sheets, Chat, Slides, Calendar, and many more. From there, you now have a central location to manage all your Google Workspace APIs and view aggregated usage metrics for the APIs you use. Watch this how-to video to get started.

Google Workspace APIs in Cloud Console
        Developers can now manage their Google Workspace APIs from within the Google Cloud Console.

        6.    Create surveys, questionnaires, and quizzes and evaluate the results programmatically

          The new Google Forms API joins the large family of APIs available to developers under Google Workspace. The Forms API provides programmatic access for managing forms, acting on responses, and empowering developers to build powerful integrations on top of Forms. Watch this introduction to the Google Forms API to get started.

          Customer satisfaction Surveys created in Google Forms shown on desktop and mobile
          The new Google Forms API allows you to programmatically create and manage Forms.
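
To give a feel for the API surface, here is a minimal sketch using the Google API Python client; it assumes application-default credentials that already carry the Forms API scopes, and the form title is illustrative.

    import google.auth
    from googleapiclient.discovery import build

    # Assumption: application-default credentials carrying the Forms API scopes.
    creds, _ = google.auth.default(
        scopes=['https://www.googleapis.com/auth/forms.body',
                'https://www.googleapis.com/auth/forms.responses.readonly'])
    forms = build('forms', 'v1', credentials=creds)

    # Create a new form, then list any responses submitted to it.
    form = forms.forms().create(
        body={'info': {'title': 'Customer satisfaction survey'}}).execute()
    responses = forms.forms().responses().list(formId=form['formId']).execute()

    print(form['responderUri'])            # share this link with respondents
    print(responses.get('responses', []))  # empty until someone responds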

          7.    Build intelligent business apps with No-Code and Low-Code

Google Apps Script is a low-code, cloud-based JavaScript development environment for Google Workspace that makes it easy for anyone to build custom business solutions across several Google products. This year we completed the updates for our new IDE v2, offering a more modern and simplified development experience that makes it quicker and easier to build solutions that make Google Workspace apps more useful for your organization.

If you are new to Apps Script, figuring out where to begin can be a hurdle. This year we released 10 new sample solutions to help you get started, bringing the total to more than 30! From data analysis to automated emails, you’ll find sample solutions to get you started quickly.

            AppSheet is Google’s platform for building no-code custom apps and workflows to automate business processes. It lets app creators build and deploy end-to-end apps and automations without writing code.

            The new Apps Script connector for AppSheet, launched this year, ties everything together: AppSheet, Apps Script, Google Workspace, and Google Workspace’s many developer APIs. This integration lets no-code app developers using AppSheet greatly extend the capabilities of their no-code apps by allowing them to call and pass data to Apps Script functions. One way to think about this integration is that it bridges no-code (AppSheet) with low-code (Apps Script).

AppSheet databases, announced in preview this year, are a built-in database for professional and citizen developers to easily and securely manage their data. They give users an easy-to-use, first-party database for creating and managing data. Get started and try AppSheet for free.

            AppSheet database
            AppSheet databases are now available in preview.

            8.    Learn to build amazing solutions on our YouTube channel

              This year, we introduced our dedicated YouTube channel for Google Workspace Developers. The channel serves as an ever-growing collection of our most helpful videos, allowing developers of all skill levels and interests to learn about building solutions with Google Workspace.

              An example of a video you will find on the Google Workspace Developers channel: Anatomy of Google Chat apps - Basic interaction
              Our new YouTube channel for Google Workspace developers has dozens of how-to videos for you.

              9.    Connect with Cloud experts and community as a Google Cloud Innovator

Community building is one of the most effective ways to support developers, which is why we created Google Cloud Innovators. This new community program was designed for developers and technical practitioners using Google Cloud, and everyone is welcome. In 2022, we kicked off the inaugural Innovators Hive, a live, interactive, and virtual event for our global Innovators community. Hive offered rich technical content presented by Champion Innovators and Google engineering leaders. Become a Google Cloud Innovator today.

                Google Cloud Innovators logo in a solid black frame with text that reads 'Welcome, Innovators'
                The Google Cloud Innovators program is open to all levels of creators and developers.

                10.    Integrate and extend Google Workspace: top sessions from Google I/O

                  Learn about the latest innovations and discover how developers can integrate and extend Google Workspace. Here are a few of my favorite sessions from I/O:

                  Google I/O Logo

                  11.    Build the future of work: top sessions from Google Cloud Next

                  Watch on-demand videos from our biggest Cloud event of the year and learn from product experts and partners to level up your skills.


                  12.    Engage with the Google Workspace team and ecosystem at our Developer Summits

We also had our inaugural Google Workspace Developer Summit series take place in Paris and London. It was an amazing time meeting developers and IT teams from customers and partners who attended from throughout the EMEA region. Watch out for a summit near you in 2023 to learn more about the latest development features for Google Workspace from our Developer Advocates and build connections with the developer community; subscribe to our newsletter to get notified.

                  Photo of Developers listening to a presentation at Google Paris during Google Workplace Developer Summit
                  Developers gather at Google Paris for the Google Workspace Developer Summit.

                  2022 Wrap-up

We are thankful to you for helping make 2022 a great year for the Google Workspace developer community. We look forward to announcing more innovations and having more conversations with you in 2023. To keep track of all the latest announcements and developer updates for Google Workspace, please subscribe to our monthly newsletter. Happy holidays and a peaceful New Year!

                  Dev Library Letters: 15th Issue

                  Posted by Garima Mehra, Program Manager

                  Our monthly newsletter curates some of the best projects developed with Google tech that have been submitted to the Google Dev Library platform. We hope this brings you the inspiration you need for your next project!

                  Content of the month

Check out our shortlisted content from Google Cloud, Angular, Android, & Flutter.


                  Google Cloud

                  Solve the common question, “who parked their car in my spot?” with this clever tutorial.

                  Designing a data schema 

                  by Mustapha Adekunle

                  Better understand what aspects come into consideration when designing a data schema.

                  Learn how to set up aggregated logging in an organization that has VPC Service Controls and find a Terraform module that lets you automate the setup for your own Google Cloud infrastructure.

                  Explore how to generate accurate business forecasts at a large scale using state of the art ML capabilities on the Google Cloud Platform.

                  Angular


                  Understand how to implement the Compound Component Pattern in Angular using Dependency Injection and Content Projection to create an excellent API for your components.

                  Android


                  Design patterns and architecture: The Android Developer roadmap — Part 4 

                  by Jaewoong Eum

Check out the 2022 Android Developer Roadmap, a multi-part series covering important Android fundamentals like Languages, App Manifest, App Components, Android Jetpack, and more.

Geofencing: boost your digital campaign 

                  by Veronica Putri Anggraini

                  Read this fun application of geofencing to manage the dilemma of where to eat lunch based on which restaurant has the best deal.

                  Flutter


                  Data structures with Dart: Set 

                  by Daria Orlova

                  Get over your fear of data structures and algorithms with this helpful and snappy how-to focused on the Set.


                  Want to read more?
                  Check out the latest projects and community-authored content by visiting Google Dev Library.
                  Submit your projects to showcase your work and inspire developers!



                  GDE community highlight: Lars Knudsen

                  Posted by Monika Janota, Community Manager

                  Lars Knudsen is a Google Developer Expert; we talked to him about how a $10 device can make computers more accessible for people with disabilities.
                   

                  Monika: What inspired you to become a developer? What’s your current professional focus?

                  Lars: I got my MSc in engineering, but in fact my interest in tech started much earlier. When I was a kid in the 80s, my father owned a computing company working with graphic design. Sometimes, especially during the summer holidays, he would take me to work with him. At times, some of his employees would keep an eye on me. There was this really smart guy who once said to me, “Lars, I need to get some work done, but here's a C manual, and there’s a computer over there. Here’s how you start a C compiler. If you have any questions, come and ask me.” I started to write short texts that were translated into something the computer could understand. It seemed magical to me. I was 11 years old when I started and around seventh grade, I was able to create small applications for my classmates or to be used at school. That’s how it started.

                  Over the years, I’ve worked for many companies, including Nokia, Maersk, and Openwave. At the beginning, like in many other professions, because you know a little, you feel like you can do everything, but with time you learn each company has a certain way of doing things.

                  After a few years of working for a medical company, I started my own business in 1999. I worked as a freelance contractor and, thanks to that, had the chance to get to know multiple organizations quickly. After completing the first five contracts, I found out that every company thinks they’ve found the perfect setup, but all of them are completely different. At that time, I was also exposed to a lot of different technologies, operating systems etc. Around my early twenties, my mindset changed. At the beginning, I was strictly focused on one technology and wanted to learn all about it. With time, I started to think about combining technologies as a way of improving our lives. I have a particular interest in narrowing the gap between what we call the A and the B team in the world. I try to transfer as much knowledge as possible to regions where people don’t have the luxury of owning a computer or studying at university free of charge.

                  I continue to work as a contractor for external partners but, whenever possible, I try to choose projects that have some kind of positive impact on the environment or society. I’m currently working on embedded software for a hearing-aid company called Oticon. Software-wise, I’ve been working on everything from the tiniest microcontrollers to the cloud; a lot of what I do revolves around the web. I’m trying to combine technologies whenever it makes sense.

                  Monika: Were you involved in developer communities before joining the Google Developer Experts program?

                  Lars: Yes, I was engaged in meetups and conferences. I first connected with the community while working for Nokia. Around 2010, I met Kenneth Rohde Christiansen, who became a GDE before me. He inspired me to see how web technologies can be useful for aspiring tech professionals in developing countries. Developing and deploying solutions using C++, C# or Java requires some years of experience, but everyone who has access to a computer, browser, and notepad can start developing web-based applications and learn really fast. It’s possible to build a fully functional application with limited resources, and ramp up from nothing. That’s why I call the web a very democratizing technology stack.

                  But back to the community—after a while I got interested in web standardization and what problems bleeding edge web technologies could solve. I experimented with new capabilities in a browser before release. I was working for Nokia at the time, developing for a Linux-based flagship device, the N9. The browser we built was WebKit based and I got some great experience developing features for a large open source project. In the years after leaving Nokia, I got involved in web conferences and meetups, so it made sense to join the GDE community in 2017.

                  I really enjoy the community work and everything we’re doing together, especially the pre-pandemic Chrome Developer Summits, where I got to help with booth duty alongside a bunch of awesome Google Engineers and other GDEs.

                  Monika: What advice would you give to a young developer who’s just starting their professional career and is not sure which path to take?

                  Lars: I’d say from my own experience—if you can afford it—consider freelancing for a couple of different companies. This way, you’ll be exposed to code in many different forms and stages of development. You’ll get to know a multitude of operating systems and languages, and learn how to resolve problems in many ways. This helped me a lot. I gained experience as senior developer in my twenties. This approach will help you achieve your professional goals faster.

                  Besides that, have fun, explore, play with the hardware and software. Consider building something that solves a real problem—maybe for your friends, family, or a local business. Don’t be afraid to jump into something you’ve never done before.

                  Monika: What does the future hold for web technologies?

Lars: I think that for a couple of years now the web has been fully capable of providing a platform for large field applications, both for the consumer and for business. On the server side of things, web technologies offer a seamless experience, especially for frontend developers who want to build a backend component. It’s easier for them to get started now. I know people who were using both Firebase and Heroku to get the job done. And this trend will grow—web technologies will be enough to build complex solutions of any kind. I believe that Web Capabilities (Project Fugu) really unlocks that potential.

                  Looking at it from a slightly different point of view, I also think that if we provide full documentation and in-depth articles not only in English but also in other languages (for example, Spanish and Portuguese), we would unlock a lot of potential in Latin America—and other regions, of course. Developers there often don’t know English well enough to fully understand all the relevant articles. We should also give them the opportunity to learn as early as possible, even before they start university, while still in their hometowns. They may use those skills to help local communities and businesses before they leave home and maybe never come back.

                  Thomas: You came a long way from doing C development on a random computer to hacking on hardware. How did you do that?

                  Lars: I started taking apart a lot of hardware I had at home. My dad was not always happy when I couldn’t put it back together. With time, I learned how to build some small devices, but it really took off much later, around the time I joined Nokia, where I got my embedded experience. I had the chance to build small screensavers, components for the Series 30 phones. I was really passionate about it and could really think outside the box. They assigned me a task to build a Snake game for those devices. It was a very interesting experience. The main difference between building embedded systems and most other things (including web) is that you leave a small footprint—you don’t have much space or memory to use. While building Snake, the RAM that I had available was less than one-third of the frame buffer (around 120 x 120 pixels). I had to come up with ways to algorithmically rejoin components on screen so they’d look static, as if they were tiles. I learned a lot—that was the move from larger systems to small, embedded solutions.

                  Thomas: The skill set of a typical frontend developer is very different from the skill set of someone who builds embedded hardware. How would you encourage a frontend developer to look into hardware and to start thinking in binary?

                  Lars: I think that the first step is to look at some of the Fugu APIs that work in Chrome and Edge, and are built into all the major systems today. That’s all you need at the start.

                  Another thing is that the toolchains for building embedded solutions have a steep learning curve. If you want to build your own custom hardware, start with Arduino or ESP32—something that is easy to buy and fairly cheap. With the right development environment, you can get your project up and running in no time.

                  You could also buy a heart rate monitor or a multisensor unit, which are already using Bluetooth GATT services, so you don’t have to build your own hardware or firmware—you can use what’s already there and start experimenting with the Web Bluetooth API to start communicating with it.

                  There are also devices that use a serial protocol—for these, you can use the Web Serial API (also Fugu). Recently I’ve been looking into using the WebHID API, which enables you to talk to all the human interface devices that everyone has access to. I found some old ones in my basement that had not been supported by any operating system for years, but thanks to reverse engineering it took me a few hours to re-enable them.

                  There are different approaches depending on what you want to build, but to a web developer I would say, get a solid sensor unit, maybe a Thingy 52 from Nordic Semiconductor; it has a lot of sensors, and you can hook up to your web application with very little effort.

                  Thomas: Connecting to the device is the first step, but then speaking to it effectively—that’s a whole other thing. How come you did not give up after facing obstacles? What kept you motivated to continue working?

                  Lars: For me personally the social aspect of solving a problem was the most important. When I started working on my own embedded projects, I had a vision and a desire to build a science lab in a box for developing regions. My wife is from Mexico and I saw some of the schools there; some that are located outside of the big cities are pretty shabby, without access to the materials and equipment that we have in our part of the world.

                  The passion for building something that can potentially be used to help others—that’s what kept me going. I also really enjoyed the community support. I reached out to some people at Google and all were extremely helpful and patiently answered all of my questions.

                  Thomas: A lot of people have some sort of hardware at home, but don’t know what to do with it. How do you find inspiration for all your amazing projects, in particular the one under the working name SimpleMouse?

                  Lars: Well, recently I have been in fact reviving a lot of old hardware, but for this particular project—the name has not been set yet, but let’s call it SimpleMouse—I used my experience. I worked with some accessibility solutions earlier and I saw how some of them just don’t work anymore; you’d need to have an old Windows XP with certain software installed to run them. You can’t really update those, you can only use those at home because you can’t move your setup.

                  Because of that, I wondered how to combine my skills from the embedded world with project Fugu and what is now possible on the web to create cheap, affordable hardware combined with easy-to-understand software on both sides, so people can build on that.

                  For that particular project, I took a small USB dongle with a reflexive chip, the nRF52840. It communicates with Bluetooth on one side and USB on the other. You can basically program it to be anything on both sides. And then I thought about the devices that control a computer—a mouse and a keyboard. Some people with disabilities may find it difficult to operate those devices, and I wanted to help them.

                  The first thing I did was to make sure that any operating system would see the USB dongle as a mouse. You can control it from a native application or a web application—directly into Bluetooth. After that, I built a web application—a simple template that people can extend the way they want using web components. Thanks to that, everyone can control their computer with a web app that I made in just a couple of hours on an Android phone.

                  Having that set up will enable anyone in the world with some web experience to build, in a matter of days, a very customized solution for anyone with a disability who wants to control their computer. The cool thing is that you can take it with you anywhere you go and use it with other devices as well. It will be the exact same experience. To me, the portability and affordability of the device are very important because people are no longer confined to using their own devices, and are no longer limited to one location.

                  Thomas: Did you have a chance to test the device in real life?

                  Lars: Actually during my last trip to Mexico I discussed it with a web professional living there; he’s now looking into the possibilities of using the device locally. Over there the equipment is really expensive, but a USB dongle normally costs around ten US dollars. He’s now checking if we could build local setups there to try it out. But I haven’t done official trials yet here in Denmark.

                  Thomas: Many devices designed to assist people with disabilities are really expensive. Are you planning on cooperating with any particular company and putting it into production for a fraction of the price of that expensive equipment?

                  Lars: Yes, definitely! I’ve already been talking to a local hardware manufacturer about that. Of course, the device won’t replace all those highly specialized solutions, but it can be the first step to building something bigger—for example, using voice recognition, already available for web technologies. It’ll be an easy way of controlling devices using your Android phone; it can work with a device of any kind.

                  Just being able to build whatever you want on the web and to use that to control any host computer opens up a lot of possibilities.

                  Thomas: Are you releasing your Zephyr project as open source? What kind of license do you use? Are there plans to monetize the project?

                  Lars: Yes, the solution is open source. I did not put a specific license on it, but I think Apache 2.0 would be the way to go. Many major companies use this license, including Google. When I worked on SimpleMouse, I did not think about monetizing the project—that was not my goal. But I also think it would make sense to try to put it into production in some way, and with this comes cost. The ultimate goal is to make it available. I’d love to see it being implemented at a low cost and on a large scale.

                  How to use App Engine pull tasks (Module 18)

                  Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

                  Introduction and background

The Serverless Migration Station mini-series helps App Engine developers modernize their apps to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17, or to sister serverless platforms Cloud Functions and Cloud Run. Another goal of this series is to demonstrate how to move away from App Engine's original APIs (now referred to as legacy bundled services) to Cloud standalone replacement services. Once no longer dependent on these proprietary services, apps become much more portable and flexible, whether that means upgrading to newer language runtimes or moving to those sister serverless platforms.

                  App Engine's Task Queue service provides infrastructure for executing tasks outside of the standard request-response workflow. Tasks may consist of workloads exceeding request timeouts or periodic tangential work. The Task Queue service provides two different queue types, push and pull, for developers to perform auxiliary work.

                  Push queues are covered in Migration Modules 7-9, demonstrating how to add use of push tasks to an existing baseline app followed by steps to migrate that functionality to Cloud Tasks, the standalone successor to the Task Queues push service. We turn to pull queues in today's video where Module 18 demonstrates how to add use of pull tasks to the same baseline sample app. Module 19 follows, showing how to migrate that usage to Cloud Pub/Sub.

                  Adding use of pull queues

In addition to registering page visits, the sample app needs to be modified to track visitors. Visits consist of a timestamp and visitor information such as the IP address and user agent. We'll modify the app to use the IP address and track how many visits come from each address seen. The home page is modified to show the top visitors in addition to the most recent visits:

                  Screen grab of the sample app's updated home page tracking visits and visitors
                  The sample app's updated home page tracking visits and visitors

When visits are registered, pull tasks are created to track the visitors. The pull tasks sit patiently in the queue until they are processed in aggregate periodically. Until that happens, the top visitors table stays static. These tasks can be processed in a number of ways: periodically by a cron or Cloud Scheduler job, by a separate App Engine backend service, explicitly by a user (via a browser or command-line HTTP request), by an event-triggered Cloud Function, and so on. In the tutorial, we issue a curl request to the app's endpoint to process the enqueued tasks. When all tasks have completed, the table then reflects any changes to the current top visitors and their visit counts:

                  Screen grab of processed pull tasks updated in the top visitors table
                  Processed pull tasks update the top visitors table

Below is some pseudocode representing the core part of the app that was altered to add Task Queue pull task usage: a new data model class, VisitorCount, to track visitor counts; enqueuing a (pull) task to update visitor counts when registering individual visits in store_visit(); and, most importantly, a new function fetch_counts(), accessible via /log, to process enqueued tasks and update overall visitor counts. The bolded lines represent the new or altered code.

                  Adding App Engine Task Queue pull task usage to sample app showing 'Before'[Module 1] on the left and 'After' [Module 18] with altered code on the right
                  Adding App Engine Task Queue pull task usage to sample app
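
To make that flow concrete, here is a minimal sketch (illustrative names, not the module's exact code), assuming the Python 2 App Engine runtime and a pull queue named 'pullq' declared in queue.yaml:

    # Enqueue a pull task per visit, then lease and tally tasks in aggregate.
    from google.appengine.api import taskqueue
    from google.appengine.ext import ndb

    PULL_QUEUE = taskqueue.Queue('pullq')   # assumed pull queue defined in queue.yaml

    class VisitorCount(ndb.Model):
        visitor = ndb.StringProperty()      # e.g. the visitor's IP address
        counter = ndb.IntegerProperty(default=0)

    def store_visit(remote_addr, user_agent):
        # ... existing code that registers the visit itself ...
        # also enqueue a pull task carrying the visitor identity
        PULL_QUEUE.add(taskqueue.Task(payload=remote_addr, method='PULL'))

    def fetch_counts():
        # lease up to 100 tasks for an hour and tally visits per visitor
        tasks = PULL_QUEUE.lease_tasks(3600, 100)
        tallies = {}
        for task in tasks:
            tallies[task.payload] = tallies.get(task.payload, 0) + 1
        for visitor, count in tallies.items():
            entity = VisitorCount.query(VisitorCount.visitor == visitor).get()
            if not entity:
                entity = VisitorCount(visitor=visitor)
            entity.counter += count
            entity.put()
        if tasks:
            PULL_QUEUE.delete_tasks(tasks)  # acknowledge the processed tasks

A cron or Cloud Scheduler job, or the manual curl request mentioned above, would then hit /log to run fetch_counts() and drain the queue.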

                  Wrap-up

                  This "migration" is comprised of adding Task Queue pull task usage to support tracking visitor counts to the Module 1 baseline app and arrives at the finish line with the Module 18 app. To get hands-on experience doing it yourself, do the codelab by hand and follow along with the video. Then you'll be ready to upgrade to Cloud Pub/Sub should you choose to do so.

                  In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate pull tasks to Pub/Sub when porting your app to Python 3. You can continue using Task Queue in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

                  If you do want to move to Pub/Sub, see Module 19, including its codelab. All Serverless Migration Station content (codelabs, videos, and source code) are available at its open source repo. While we're initially focusing on Python users, the Cloud team is covering other runtimes soon, so stay tuned. Also check out other videos in the broader Serverless Expeditions series.

                  When to step-up your Google Pay transactions as a PSP

                  Posted by Dominik Mengelt, Developer Relations Engineer, Google Pay and Nick Alteen, Technical Writer, Engineering, Wallet

                  What is step-up authentication?

When processing payments, step-up authentication (or simply “step-up”) is the practice of requiring additional authentication measures based on user activity and certain risk signals, for example, redirecting the user to 3D Secure to authenticate a transaction. This can help reduce potential fraud and chargebacks. The following graphic shows the high-level flow of a transaction and how to determine whether step-up is needed.

                  graphic showing the high-level flow of a transaction
                  Figure 1: Trigger your Risk Engine before sending the transaction to authorization if step-up is needed

Do all Google Pay transactions require step-up? It depends! When making a transaction, the Google Pay API response will return one of the following:

                  • An authenticated payload that can be processed without any further step-up or challenge. For example, when a user adds a payment card to Google Wallet. In this case, the user has already completed identity verification with their issuing bank.
                  • A primary account number (PAN) that requires additional authentication measures, such as 3D Secure. For example, a user making a purchase with a payment card previously stored through Chrome Autofill.

                  You can use the allowedAuthMethods parameter to indicate which authentication methods you want to support for Google Pay transactions:

                  "allowedAuthMethods": [
                      "CRYPTOGRAM_3DS",
                      "PAN_ONLY"

                  ]


In this case, you’re asking Google Pay to display the payment sheet for both types. For example, if the user selects a PAN_ONLY card (a card that is not tokenized and not enabled for contactless payments) from the payment sheet during checkout, step-up is needed. Let's have a look at two concrete scenarios:


                  In the first scenario, the Google Pay sheet shows a card previously added to Google Wallet. The card art and name of the user's issuing bank are displayed. If the user selects this card during the checkout process, no step-up is required because it would fall under the CRYPTOGRAM_3DS authentication method.

                  On the other hand, the sheet in the second scenario shows a generic card network icon. This indicates a PAN_ONLY authentication method and therefore needs step-up.

                  PAN_ONLY vs. CRYPTOGRAM_3DS

Whether to accept both forms of payment is up to you. For CRYPTOGRAM_3DS, the Google Pay API additionally returns a cryptogram and, depending on the network, an eciIndicator. Make sure to use those properties when continuing with authorization.

                  PAN_ONLY

                  This authentication method is associated with payment cards from a user’s Google Account. Returned payment data includes the PAN with the expiration month and year.

                  CRYPTOGRAM_3DS

                  This authentication method is associated with cards stored as Android device tokens provided by the issuers. Returned payment data includes a cryptogram generated on the device.

                  When should you step-up Google Pay transactions?

                  When calling the loadPaymentData method, the Google Pay API will return an encrypted payment token (paymentData.paymentMethodData.tokenizationData.token). After decryption, the paymentMethodDetails object contains a property, assuranceDetails, which has the following format:

                  "assuranceDetails": {
                      "cardHolderAuthenticated": true,
                      "accountVerified": true
                  }

                  Depending on the values of cardHolderAuthenticated and accountVerified, step-up authentication may be required. The following table indicates the possible scenarios and when Google recommends step-up authentication for a transaction:

cardHolderAuthenticated | accountVerified | Step-up needed
----------------------- | --------------- | --------------
true                    | true            | No
false                   | true            | Yes

                  Step-up can be skipped only when both cardHolderAuthenticated and accountVerified return true.
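
For illustration, here is a minimal Python sketch of that decision logic, assuming the encrypted token has already been decrypted into a dictionary shaped like the snippet above (decryption itself is not shown):

    def needs_step_up(payment_method_details):
        """Return True when the transaction should be stepped up (e.g. to 3D Secure)."""
        assurance = payment_method_details.get('assuranceDetails', {})
        card_holder_authenticated = assurance.get('cardHolderAuthenticated', False)
        account_verified = assurance.get('accountVerified', False)
        # Step-up can be skipped only when both signals are true.
        return not (card_holder_authenticated and account_verified)

    # Example: a decrypted paymentMethodDetails object (structure as shown above).
    details = {'assuranceDetails': {'cardHolderAuthenticated': False,
                                    'accountVerified': True}}
    print(needs_step_up(details))  # True -> route this transaction to your step-up flow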

                  Next steps

If you are not using assuranceDetails yet, consider doing so now and make sure to step up transactions if needed. Also, make sure to check out our guide on Strong Customer Authentication (SCA) if you are processing payments within the European Economic Area (EEA). Follow @GooglePayDevs on Twitter for future updates. If you have questions, mention @GooglePayDevs and include #AskGooglePayDevs in your tweets.

                  Experts share insights on Firebase, Flutter and the developer community

                  Posted by Komal Sandhu - Global Program Manager, Google Developer Groups

                  Rich Hyndman, Manager, Firebase DevRel (left) and Eric Windmill, Developer Relations Engineer, Firebase and Flutter (right)

“Firebase and Flutter offer many tools that ‘just work’, which is something that all apps need. I think you’d be hard pressed to find another combination of front end framework and back end services that let developers make apps quickly without sacrificing quality.” 

                  moving images of Sparky and Dart, respective mascots for Firebase and Flutter
                  Among the many inspiring experts in the developer communities for Firebase and Flutter are Rich Hyndman and Eric Windmill. Each Googler serves their respective product team from the engineering and community sides and has a keen eye towards the future. Read on to see their outlook on their favorite Firebase and Flutter tools and the developers that inspire them.

                  ===

                  What is your title, and how long have you been at Google?

Rich: I run Firebase Developer Relations, and I’ve been at Google for around 11 years.

                  Eric: I’m an engineer on the Flutter team and I’ve been at Google for a year.


                  Tell us about yourself:

                  Rich: I’ve always loved tech, from techy toys as a kid to anything that flies. I still get tech-joy when I see new gadgets and devices. I built and raced drones for a while, but mobile/cell phones are the ultimate gadget for me and enabled my career.

                  Eric: I’m a software engineer, and these days I’m specifically a Developer Relations Engineer. I’m not surprised I’ve ended up here, as I like to joke “I like computers but I like people more.” Outside of work, most of my time is spent thinking about music. I’m pretty poor at playing music, but I’ve always consumed as much as I could. If I had to choose a different job and start over, I’d be a music journalist.


                  How did you get started in this space?

                  Rich: I've always loved mobile apps: being able to carry my work in my pocket, play with it, test it, demo it, and be proud of it. From the beginning of my career right up till today, it's still the best. I worked on a few mobile projects pre-Android and was part of an exciting mobile tech startup for a few years, but it was Android that really kick-started my career.

                  I quickly fell in love with the little green droid and the entire platform, and through a combination of meetups, competition entries and conferences I ended up in contact with Android DevRel at Google.

Firebase is a natural counterpart to Android and I love being able to support developers from a different angle. Firebase also supports Flutter, Web, and iOS, which has given me the opportunity to learn more about other platforms and meet more developers.

                  Eric: I got into this space by accident. At my first software job, the company was already using Dart for their web application, and started rebuilding their mobile apps in Flutter soon after I joined. I think that was around 2016 or 2017. Flutter was still in its Alpha stage. I was introduced to Firebase at the same job, and I’ve used various tools from the Firebase SDK ever since.


What are some challenges that you have seen developers facing?

                  Rich: Developers often want to get up and running with new projects quickly, but then iterate and improve their apps. No-code solutions can be great to start with but aren’t flexible enough down the road. A lower-code solution like Firebase can be quick to get started, and it can also provide control. Bringing Flutter and Firebase together creates a powerful and flexible combination.

                  Eric: Regardless of the technology, I think the biggest challenge developers face is actually with documentation. It doesn’t matter how good a product is if the docs are hard to find or hard to understand. We’ve seen this ourselves recently as Flutter became an “official” supported platform on Firebase in May 2022. When that happened, we moved the documentation from the Flutter site to the Firebase site, and folks didn’t know how to find the docs. It was an oversight on our part, but it’s a good example of the importance of docs. They deserve way more attention than they get in many, many cases.

                  image of Sparky and Dart, respective mascots for Firebase and Flutter
                  What do you think is the most interesting or useful resource to learn more about Firebase & Flutter? Is there a particular library or codelab that everyone should learn?

                  Rich: The official docs have to be first, located at firebase.google.com. We have a great repository of Learning Pathways, including Add Firebase to your Flutter App. We’re also just launching our new Solutions Portal with over 60 solutions guides indexed already.

                  Eric: If I have to name only one resource, it’d be this codelab: Get to know Firebase for Flutter
                  But Firebase offers so many tools. This codelab is just an introduction to what’s possible.


What are some inspiring ways that developers are building with Firebase and Flutter together?

Rich: We’ve had an interesting couple of years at Firebase. Firebase has always been known for powering real-time, data-driven apps. If you used a Covid stats app during the pandemic, there’s a fair chance it was running on Firebase; there was a big surge of new apps.

                  Eric: Lately I’ve seen an interest in using Flutter to make 2D games, and using some Firebase tools for the back end of the game. I love this. Games are just more fun than apps, of course, but it’s also great to see folks using these technologies in ways that aren’t the explicit purposes. It shows creativity and excellent problem solving.


                  What’s a specific use case of Firebase & Flutter technology that excites you?

                  Rich: Firebase Extensions are very exciting. They are pre-packaged bundles of code that make it easy to add new features to your app from Google and partners like Stripe and Vonage. We just launched the Extensions Marketplace and opened up the ability for developers to build extensions for their own apps through our Provider Alpha program.

                  Eric: Flutter web and Firebase hosting is just a no brainer. You can deploy a Flutter app to the web in no time.


                  How can developers be successful building on Firebase & Flutter?

                  Rich: There’s a very powerful combination with Crashlytics, Performance Monitoring, A/B Testing and Remote Config. Developers can quickly improve the stability of their apps whilst also iterating on features to deliver the best experience for their users. We’ve had a lot of success with improving monetization, too. Check out some of our case studies for more details.

                  Eric: Flutter developers can be successful by leveraging all that Firebase offers. Firebase might seem intimidating because it offers so much, but it excels at being easy to use, and I encourage all web and mobile developers to poke around. They’re likely to find something that makes their lives easier.

                  image of Firebase and Flutter logos against a dot matrix background
                  What’s next for the Firebase & Flutter Communities? What might the future look like?

                  Rich: Over the next year we’ll be focusing on modern app development and some more opinionated guides. Better support for Flutter, Kotlin, Jetpack Compose, Swift/SwiftUI and modern web frameworks.

                  Eric: There is a genuine effort amongst both teams to support each other. Flutter and Firebase are just such a great pair, that it makes sense for us to encourage our communities to check out one another. In the future, I think this will continue. I think you’ll see a lot of Flutter at Firebase events, and vice versa.


                  How does Firebase & Flutter help expand the impact of developers?

Rich: Firebase has always focused on helping developers get their apps up and running by providing tools to streamline time-consuming tasks, enabling developers to focus on delivering the best app experiences and the most value to their users.

                  Eric: Flutter is an app-building SDK that is a joy to use. It seriously increases velocity because it’s cross-platform. Firebase and Flutter offer many tools that “just work”, which is something that all apps need. I think you’d be hard pressed to find another combination of front end framework and back end services that let developers make apps quickly without sacrificing quality.


                  Find a Google Developer Group hosting a DevFest near you.

                  Want to learn more about Google Technologies like Firebase & Flutter? Hoping to attend a DevFest or Google Developer Groups (GDG)? Find a GDG hosting a DevFest near you here.

                  #WeArePlay | Discover what inspired 4 game creators around the world

                  Posted by Leticia Lago, Developer Marketing

                  From exploring the great outdoors to getting your first computer - a seemingly random moment in your life might one day be the very thing which inspires you to go out there and follow your dreams. That’s what happened to four game studio founders featured in our latest release of #WeArePlay stories. Find out what inspired them to create games which are entertaining millions around the globe.

                  Born and raised in Salvador, Brazil, Filipe was so inspired by the city’s cultural heritage that he studied History before becoming a teacher. One day, he realised games could be a powerful medium to share Brazilian history and culture with the world. So he founded Aoca Game Lab, and their first title, ÁRIDA: Backland’s Awakening, is a survival game based in the historic town of Canudos. Aoca Game Lab took part in the Indie Games Accelerator and have also been selected to receive the Indie Games Fund. With the help from these Google Play programs, they will take the game and studio to the next level.

                  #WeArePlay Marko Peaskel Nis, Serbia

                  Next, Marko from Serbia. As a chemistry student, he was never really interested in tech - then he received his first computer and everything changed. He quit his degree to focus on his new passion and now owns his successful studio Peaksel with over 480 million downloads. One of their most popular titles is 100 Doors Games: School Escape, with over 100 levels to challenge the minds of even the most experienced players.

                  #WeArePlay Liene Roadgames Riga Latvia

                  And now onto Liene from Latvia. She often braves the big outdoors and discovers what nature has to offer - so much so that she organizes team-building, orienteering based games for the team at work. Seeing their joy as they explore the world around them inspired her to create Roadgames. It guides players through adventurous scavenger hunts, discovering new terrain.

                  #WeArePlay Xin Savy Soda Melbourne, Australia

And lastly, Xin from Australia. After years working in corporate tech, he gave it all up to pursue his dream of making mobile games inspired by the 90’s video games he played as a child. Now he owns his studio, Savy Soda, whose hit game Pixel Starships has millions of downloads, and despite all his success, his five-year-old child still gives him plenty of feedback.

                  Check out all the stories now at g.co/play/weareplay and stay tuned for even more coming soon.




                  Open Source Pass Converter for Mobile Wallets

                  Posted by Stephen McDonald, Developer Programs Engineer, and Nick Alteen, Technical Writer, Engineering, Wallet

Each of the mobile wallet apps implements its own technical specification for passes that can be saved to the wallet. Pass structure and configuration vary by both the wallet application and the specific type of pass, meaning developers have to build and maintain code bases for each platform.

                  As part of Developer Relations for Google Wallet, our goal is to make life easier for those who want to integrate passes into their mobile or web applications. Today, we're excited to release the open-source Pass Converter project. The Pass Converter lets you take existing passes for one wallet application, convert them, and make them available in your mobile or web application for another wallet platform.

                  Moving image of Pass Converter successfully converting an external pkpass file to a Google Wallet pass

The Pass Converter launches with support for Google Wallet and Apple Wallet apps, with plans to add support for others in the future. For example, if you build an event ticket pass for one wallet, you can use the converter to automatically create a pass for another wallet. The following pass types are supported for their respective platforms:

                  • Event tickets
                  • Generic passes
                  • Loyalty/Store cards
                  • Offers/Coupons
                  • Flight/Boarding passes
                  • Other transit passes

We designed the Pass Converter with flexibility in mind. The following features provide additional customization for your needs.

• A hints.json file can be provided to the Pass Converter to map Google Wallet pass properties to custom properties in other passes.
• For pass types that require certificate signatures, you can simply generate the pass structure and hand it off to your existing signing process.
• Since images in Google Wallet passes are referenced by URLs, the Pass Converter can host the images itself, store them in Google Cloud Storage, or send them to another image host you manage.

                  If you want to quickly test converting different passes, the Pass Converter includes a demo mode where you can load a simple webpage to test converting passes. Later, you can run the tool via the command line to convert existing passes you manage. When you’re ready to automate pass conversion, the tool can be run as a web service within your environment.

                  The following command provides a demo web page on http://localhost:3000 to test converting passes.

                  node app.js demo

                  The next command converts passes locally. If the output path is omitted, the Pass Converter will output JSON to the terminal (for PKPass files, this will be the contents of pass.json).

                  node app.js <pass input path> <pass output path>

                  Lastly, the following command runs the Pass Converter as a web service. This service accepts POST requests to the root URL (e.g. https://localhost:3000/) with multipart/form-data encoding. The request body should include a single pass file.

                  node app.js
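
As a minimal sketch of calling that service, you could POST a pass file with curl. Note that the multipart form field name used below ("pass") is an assumption for illustration only; check the project's README for the exact field name the service expects.

# Hypothetical request against a locally running converter; the form field name "pass" is assumed, not confirmed.
curl -X POST -F "pass=@./passes/concert-ticket.pkpass" http://localhost:3000/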


                  Ready to get started? Check out the GitHub repository where you can try converting your own passes. We welcome contributions back to the project as well!

                  Machine Learning Communities: Q3 ‘22 highlights and achievements

                  Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the third quarter of the year! We are enthusiastic about, and grateful for, all the activities by the global network of ML communities. Here are the highlights!


                  TensorFlow/Keras

                  Load-testing TensorFlow Serving’s REST Interface

                  Load-testing TensorFlow Serving’s REST Interface by ML GDE Sayak Paul (India) and Chansung Park (Korea) shares the lessons and findings they learned from conducting load tests for an image classification model across numerous deployment configurations.

TFUG Taipei hosted events (Python + Hugging Face-Translation + tf.keras.losses, Python + Object detection, Python + Hugging Face-Token Classification + tf.keras.initializers) in September and helped community members learn how to use TF and Hugging Face to implement machine learning models to solve problems.

                  Neural Machine Translation with Bahdanau’s Attention Using TensorFlow and Keras and the related video by ML GDE Aritra Roy Gosthipaty (India) explains the mathematical intuition behind neural machine translation.

                  Serving a TensorFlow image classification model as RESTful and gRPC based services with TFServing, Docker, and Kubernetes

Automated Deployment of TensorFlow Models with TensorFlow Serving and GitHub Actions by ML GDE Chansung Park (Korea) and Sayak Paul (India) explains how to automate TensorFlow model serving on Kubernetes with TensorFlow Serving and GitHub Actions.

Deploying 🤗 ViT on Kubernetes with TF Serving by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to scale the deployment of a ViT model from 🤗 Transformers using Docker and Kubernetes.

                  Screenshot of the TensorFlow Forum in the Chinese Language run by the tf.wiki team

Long-term TensorFlow Guidance on tf.wiki Forum by ML GDE Xihan Li (China) provides TensorFlow guidance by answering questions from Chinese developers on the forum.

Photo of a phone with the Hindi letter 'Ohm' drawn on the top half of the screen; Hindi Character Recognition shows the letter Ohm as the predicted result below.

Hindi Character Recognition on Android using TensorFlow Lite by ML GDE Nitin Tiwari (India) shares an end-to-end tutorial on training a custom computer vision model to recognize Hindi characters. At a TFUG Pune event, he also gave a presentation titled Building Computer Vision Model using TensorFlow: Part 1.

Using TFLite Model Maker to Complete a Custom Audio Classification App by ML GDE Xiaoxing Wang (China) shows how to use TFLite Model Maker to build a custom audio classification model based on YAMNet and how to import and use the YAMNet-based custom models in Android projects.

SoTA semantic segmentation in TF with 🤗 by ML GDE Sayak Paul (India) and Chansung Park (Korea). The SegFormer model was not previously available in TensorFlow.

                  Text Augmentation in Keras NLP by ML GDE Xiaoquan Kong (China) explains what text augmentation is and how the text augmentation feature in Keras NLP is designed.

The largest vision model checkpoint (public) in TF (10 Billion params) through 🤗 transformers by ML GDE Sayak Paul (India) and Aritra Roy Gosthipaty (India). The underlying model is RegNet, known for its ability to scale.

                  A simple TensorFlow implementation of a DCGAN to generate CryptoPunks

                  CryptoGANs open-source repository by ML GDE Dimitre Oliveira (Brazil) shows simple model implementations following TensorFlow best practices that can be extended to more complex use-cases. It connects the usage of TensorFlow with other relevant frameworks, like HuggingFace, Gradio, and Streamlit, building an end-to-end solution.


                  TFX

TFX machine learning pipeline, from data ingestion into TFRecord to pushing the model out to Vertex AI

MLOps for Vision Models from 🤗 with TFX by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a vision model (TensorFlow) from 🤗 Transformers using the TF ecosystem.

First release of TFX Addons Package by ML GDE Hannes Hapke (United States). The package has been downloaded a few thousand times (source), and Google and other developers maintain it through bi-weekly meetings. The work has been recognized with Google’s Open Source Peer Award.

TFUG São Paulo hosted TFX T1 | E4 & TFX T1 | E5, where ML GDE Vinicius Caridá (Brazil) shared how to train a model in a TFX pipeline. The fifth episode covers Pusher: publishing your models with TFX.

Semantic Segmentation model within ML pipeline by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a semantic segmentation task with TFX and various GCP products, such as Vertex Pipelines, Training, and Endpoints.


                  JAX/Flax

Screenshot of Tutorial 2 (JAX): Introduction to JAX+Flax, with GitHub repo and Codelab, via the University of Amsterdam

                  JAX Tutorial by ML GDE Phillip Lippe (Netherlands) is meant to briefly introduce JAX, including writing and training neural networks with Flax.


TFUG Malaysia hosted Introduction to JAX for Machine Learning (video), where Leong Lai Fong gave a talk. The attendees learned what JAX is and its fundamental yet unique features, which make it efficient to use for deep learning workloads. After that, they started training their first JAX-powered deep learning model.

TFUG Taipei hosted Python + JAX + Image classification, helping people learn JAX: the differences between JAX and NumPy, the advantages of JAX, and how to use it in Colab.

Introduction to JAX by ML GDE João Araújo (Brazil) shared the basics of JAX at Deep Learning Indaba 2022.

                  A comparison of the performance and overview of issues resulting from changing from NumPy to JAX

Should I change from NumPy to JAX? by ML GDE Gad Benram (Portugal) compares the performance of NumPy and JAX and gives an overview of the issues that may result from switching.

Introduction to JAX: efficient and reproducible ML framework by ML GDE Seunghyun Lee (Korea) introduced JAX/Flax and their key features using practical examples. He explained pure functions and PRNG handling, which make JAX explicit and reproducible, and XLA and the mapping functions, which make JAX fast and easy to parallelize.

Data2Vec Style pre-training in JAX by ML GDE Vasudev Gupta (India) shares a tutorial demonstrating how to pre-train Data2Vec using the JAX/Flax version of HuggingFace Transformers.

Distributed Machine Learning with JAX by ML GDE David Cardozo (Canada) explained what makes JAX different from TensorFlow.

Image classification with JAX & Flax by ML GDE Derrick Mwiti (Kenya) explains how to build convolutional neural networks with JAX/Flax. He also wrote several articles about JAX/Flax: What is JAX?, How to load datasets in JAX with TensorFlow, Optimizers in JAX and Flax, Flax vs. TensorFlow, etc.


                  Kaggle

                  DDPMs - Part 1 by ML GDE Aakash Nain (India) and cait-tf by ML GDE Sayak Paul (India) were announced as Kaggle ML Research Spotlight Winners.

                  Forward process in DDPMs from Timestep 0 to 100

                  Fresher on Random Variables, All you need to know about Gaussian distribution, and A deep dive into DDPMs by ML GDE Aakash Nain (India) explain the fundamentals of diffusion models.

                  In Grandmasters Journey on Kaggle + The Kaggle Book, ML GDE Luca Massaron (Italy) explained how Kaggle helps people in the data science industry and which skills you must focus on apart from the core technical skills.


                  Cloud AI

                  How Cohere is accelerating language model training with Google Cloud TPUs by ML GDE Joanna Yoo (Canada) explains what Cohere engineers have done to solve scaling challenges in large language models (LLMs).

                  ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google

                  In Using machine learning to transform finance with Google Cloud and Digits, ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google, about how Digits leverages Google Cloud’s machine learning tools to empower accountants and business owners with near-zero latency.

A tour of Vertex AI hosted by TFUG Chennai for ML, cloud, and DevOps engineers working in MLOps. The session introduced Vertex AI and covered handling datasets and models in Vertex AI, deployment & prediction, and MLOps.

                  TFUG Abidjan hosted two events with GDG Cloud Abidjan for students and professional developers who want to prepare for a Google Cloud certification: Introduction session to certifications and Q&A, Certification Study Group.

Flow chart showing how to deploy a ViT B/16 model on Vertex AI

                  Deploying ? ViT on Vertex AI by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to deploy a ViT B/16 model on Vertex AI. They cover some critical aspects of a deployment such as auto-scaling, authentication, endpoint consumption, and load-testing.

                  Photo collage of AI generated images

TFUG Singapore hosted The World of Diffusion - DALL-E 2, IMAGEN & Stable Diffusion. ML GDE Martin Andrews (Singapore) and Sam Witteveen (Singapore) gave talks titled “How Diffusion Works” and “Investigating Prompt Engineering on Diffusion Models” to bring people up to date with what has been going on in the world of image generation.

ML GDE Martin Andrews (Singapore) has done three projects: GCP VM with Nvidia set-up and Convenience Scripts; Containers within a GCP host server, with Nvidia pass-through; and Installing MineRL using Containers, with linked code.

                  Jupyter Services on Google Cloud by ML GDE Gad Benram (Portugal) explains the differences between Vertex AI Workbench, Colab, and Deep Learning VMs.

                  Google Cloud's Two Towers Recommender and TensorFlow

                  Train and Deploy Google Cloud's Two Towers Recommender by ML GDE Rubens de Almeida Zimbres (Brazil) explains how to implement the model and deploy it in Vertex AI.


                  Research & Ecosystem

Poster for the Women in Data Science La Paz Machine Learning paper reading club (#MLPaperReadingClubs), led by Nathaly Alarcón (@WIDS_LaPaz)

                  The first session of #MLPaperReadingClubs (video) by ML GDE Nathaly Alarcon Torrico (Bolivia) and Women in Data Science La Paz. Nathaly led the session, and the community members participated in reading the ML paper “Zero-shot learning through cross-modal transfer.”

                  In #MLPaperReadingClubs (video) by TFUG Lesotho, Arnold Raphael volunteered to lead the first session “Zero-shot learning through cross-modal transfer.”

                  Screenshot of a screenshare of Zero-shot learning through cross-modal transfer to 7 participants in a virtual call

                  ML Paper Reading Clubs #1: Zero Shot Learning Paper (video) by TFUG Agadir introduced a model that can recognize objects in images even if no training data is available for the objects. TFUG Agadir prepared this event to make people interested in machine learning research and provide them with a broader vision of differentiating good contributions from great ones.

                  Opening of the Machine Learning Paper Reading Club (video) by TFUG Dhaka introduced ML Paper Reading Club and the group’s plan.

EDA on SpaceX Falcon 9 launches dataset (Kaggle) (video) by TFUG Mysuru & TFUG Chandigarh, presented by organizer Aashi Dutt, walked through exploratory data analysis on the SpaceX Falcon 9 launches dataset from Kaggle.

                  Screenshot of ML GDE Qinghua Duan (China) showing how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

                  Introduction to MRC-style dialogue summaries based on BERT by ML GDE Qinghua Duan (China) shows how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Plant disease classification using Deep learning model by ML GDE Yannick Serge Obam Akou (Cameroon) covered an end-to-end Android app (open source project) that diagnoses plant diseases with a deep learning model.

                  TensorFlow/Keras implementation of Nystromformer

Nystromformer Github repository by Rishit Dagli provides a TensorFlow/Keras implementation of Nystromformer, a transformer variant that uses the Nyström method to approximate standard self-attention with O(n) complexity, which allows for better scalability.

                  From a personal notebook to 100k YouTube subscriptions: How Carlos Azaustre turned his notes into a YouTube channel

                  Posted by Kevin Hernandez, Developer Relations Community Manager

Carlos Azaustre with his Silver Button Creator Award from YouTube
When Carlos Azaustre, Web Technologies GDE, finished university, he started a blog to share his personal notes and learnings to teach others about Angular and JavaScript. These personal notes later evolved into tutorials that then turned into a blossoming YouTube channel with 105k subscribers at the time of this writing. With 10 years of experience as a telecommunications engineer focused on front-end development, he brings a breadth of knowledge to his viewers in a sea of competing content on YouTube. Carlos has successfully created a channel focused on technical topics related to JavaScript and has some valuable advice for those looking to educate on the platform.

                  How he got started with his channel

Carlos started his blog with the primary mission of using it as a personal notebook that he could reference in the future. As he wrote more, he noticed that people were coming across his notes and sharing them with others. This inspired him to record tutorials based on the topics of his blog posts, and as he began recording, a secondary mission came to fruition: he wanted to make technical content accessible to the Spanish-speaking community. He reflects, “In the Spanish community, English is difficult for some people, so I started to create content in Spanish to eliminate barriers for people who are interested in learning new technologies. Learning new things is hard, but it’s easier when it’s in your natural language.”

At the beginning of his YouTube journey, he used the platform for side projects and posted irregularly. Then, two years ago, he started putting more effort into creating new content, posting one video a week and promoting it on social media. This change sparked more comments, and his views and total subscribers increased in tandem.


                  Tips and tricks he’s applied to his channel

Carlos leverages analytics data to adjust his strategy. He explains, “YouTube provides a lot of analytics tools to see if people are engaging and when they leave the video. So you can adjust your content and the timing (video length) because the timing is important.” The data taught Carlos that longer videos generally don’t do as well: the ideal length for lecture videos where he’s primarily speaking is about 6 to 8 minutes, while tutorials that run about 40 to 60 minutes tend to get more views.

                  Carlos has also taken advantage of YouTube Shorts, a short-form video-sharing platform. “I started to see that Shorts are great to increase your reach because the algorithm pushes your content to people who aren’t subscribed to your channel,” he pointed out. He recommends using YouTube Shorts as an effective way of getting started. When asked about other resources, Carlos mentioned that he primarily draws from his own experience but also turns to books and blogs to help with his channel and to stay up to date with technology.


                  Choosing video topics

                  Creating fresh weekly content can be a challenge. To address this, Carlos keeps a notebook of ideas and inspiration for his next videos. For example, he may come across a problem that lacks a clear solution at work and will jot this down. He also keeps track of articles or other tutorials that he feels can either be explained in a more straightforward way or can be translated into Spanish.

Carlos also draws inspiration from the comment section of his videos. He engages with his audience to show there is a real person behind the videos who can guide them. He adds, “This is one of the parts I like the most. They propose new ideas for content that I might’ve missed.”


                  Advice for starting a channel on technical topics

                  Carlos’ advice for people looking to start a channel based on technical content is simple: just get started. “If you’re creating great content, people will eventually reach you,” he comments. When he first started his channel, Carlos wasn’t preoccupied with the number of views, comments, or subscriptions. He started his content with himself in mind and would ask himself what kind of content he would want to see. He says, “As long as you’re engaged with the community, you’ll have a great channel. If you try to optimize the content for the algorithm, you’re going to go crazy.” He recommends new content creators start with YouTube Shorts, and once they gain an audience they can create more detailed videos.

It’s also important to spark conversation in the comments, and one way to achieve this is through the title and description of your video. A great title that catches the viewer’s attention, sparks conversation, and includes keywords is essential. A simple way to do this is by asking a question in the title. For example, one of his videos is titled “How do Promises and Async / Await function in JavaScript?” and also asks a question in the description. This video alone has 250+ comments, with viewers answering the question posed by the title and the description. He’s also mindful of which keywords he includes in his title and finds them by looking at the most popular content on similar topics.

                  When asked about gear and equipment recommendations, he states that the most important piece of equipment is your microphone, since your voice can be more important than the image, especially if you’re filming a tutorial video. He goes on, “With time, you can update your setup. Maybe your camera is next and then the lighting. Start with your phone or your regular laptop - just start!”

                  So remember to just get started, and maybe in time, you’ll become the next big content creator for Machine Learning, Google Cloud, Android, or Web Technologies.


                  You can check out Carlos’ YouTube Channel, find him live on Twitch, or follow him on Twitter or Instagram.

                  The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.