Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

How to use App Engine pull tasks (Module 18)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The Serverless Migration Station mini-series helps App Engine developers modernize their apps to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17, or to sister serverless platforms Cloud Functions and Cloud Run. Another goal of this series is to demonstrate how to move away from App Engine's original APIs (now referred to as legacy bundled services) to standalone Cloud replacement services. Once no longer dependent on these proprietary services, apps become much more portable, making them flexible enough to upgrade to newer language releases, move to sister serverless platforms, or adopt other Cloud or third-party services.

App Engine's Task Queue service provides infrastructure for executing tasks outside of the standard request-response workflow. Tasks may consist of workloads exceeding request timeouts or periodic tangential work. The Task Queue service provides two different queue types, push and pull, for developers to perform auxiliary work.

Push queues are covered in Migration Modules 7-9, demonstrating how to add use of push tasks to an existing baseline app followed by steps to migrate that functionality to Cloud Tasks, the standalone successor to the Task Queues push service. We turn to pull queues in today's video where Module 18 demonstrates how to add use of pull tasks to the same baseline sample app. Module 19 follows, showing how to migrate that usage to Cloud Pub/Sub.

Adding use of pull queues

In addition to registering page visits, the sample app needs to be modified to track visitors. Visits consist of a timestamp and visitor information such as the IP address and user agent. We'll modify the app to use the IP address and track how many visits come from each address seen. The home page is modified to show the top visitors in addition to the most recent visits:

Screen grab of the sample app's updated home page tracking visits and visitors
The sample app's updated home page tracking visits and visitors

When visits are registered, pull tasks are created to track the visitors. The pull tasks sit patiently in the queue until they are processed in aggregate periodically. Until that happens, the top visitors table stays static. These tasks can be processed in a number of ways: periodically by a cron or Cloud Scheduler job, by a separate App Engine backend service, explicitly by a user (via a browser or command-line HTTP request), by an event-triggered Cloud Function, etc. In the tutorial, we issue a curl request to the app's endpoint to process the enqueued tasks. When all tasks have completed, the table then reflects any changes to the current top visitors and their visit counts:

Screen grab of processed pull tasks updated in the top visitors table
Processed pull tasks update the top visitors table

Below is some pseudocode representing the core of the app that was altered to add Task Queue pull task usage: a new data model class, VisitorCount, to track visitor counts; a (pull) task enqueued to update visitor counts when registering individual visits in store_visit(); and, most importantly, a new function, fetch_counts(), accessible via /log, that processes enqueued tasks and updates overall visitor counts. The bolded lines represent the new or altered code.

Adding App Engine Task Queue pull task usage to sample app, showing 'Before' [Module 1] on the left and 'After' [Module 18] with altered code on the right
Adding App Engine Task Queue pull task usage to sample app
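The flow above can be sketched in Python. This is a minimal, hypothetical stand-in for the Module 18 code: the names store_visit() and fetch_counts() come from the description above, but an in-memory list replaces the actual Task Queue pull queue and no datastore writes happen, so the aggregation logic can run anywhere.

```python
from collections import Counter

# In the real Module 18 app, store_visit() enqueues a pull task (roughly:
# taskqueue.Queue('pullq').add(taskqueue.Task(payload=ip, method='PULL')))
# and fetch_counts() leases and then deletes tasks before updating
# VisitorCount entities. The queue name and payload here are illustrative.
PULL_QUEUE = []

def store_visit(remote_addr, user_agent):
    """Register a visit and enqueue a pull task carrying the visitor's IP."""
    # ...persist the Visit(timestamp, remote_addr, user_agent) entity here...
    PULL_QUEUE.append(remote_addr)  # stand-in for Task(payload=..., method='PULL')

def fetch_counts():
    """Lease all pending tasks, aggregate per-IP counts, 'delete' the tasks."""
    leased, PULL_QUEUE[:] = PULL_QUEUE[:], []  # stand-in for lease/delete_tasks()
    return Counter(leased).most_common()       # top visitors, highest count first
```

In the real app, fetch_counts() would be wired to the /log endpoint and would merge the counts into VisitorCount entities rather than returning them.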

Wrap-up

This "migration" consists of adding Task Queue pull task usage to the Module 1 baseline app to support tracking visitor counts, and arrives at the finish line with the Module 18 app. To get hands-on experience doing it yourself, do the codelab by hand and follow along with the video. Then you'll be ready to upgrade to Cloud Pub/Sub should you choose to do so.

In Fall 2021, the App Engine team extended support for many of the bundled services to second-generation runtimes that have a first-generation counterpart, meaning you are no longer required to migrate pull tasks to Pub/Sub when porting your app to Python 3. You can continue using Task Queue in your Python 3 app as long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Pub/Sub, see Module 19, including its codelab. All Serverless Migration Station content (codelabs, videos, and source code) is available in its open source repo. While we're initially focusing on Python users, the Cloud team will cover other runtimes soon, so stay tuned. Also check out other videos in the broader Serverless Expeditions series.

When to step-up your Google Pay transactions as a PSP

Posted by Dominik Mengelt, Developer Relations Engineer, Google Pay and Nick Alteen, Technical Writer, Engineering, Wallet

What is step-up authentication?

When processing payments, step-up authentication (or simply “step-up”) is the practice of requiring additional authentication measures based on user activity and certain risk signals, for example, redirecting the user to 3D Secure to authenticate a transaction. This can help reduce potential fraud and chargebacks. The following graphic shows the high-level flow of a transaction and how to determine whether step-up is needed.

graphic showing the high-level flow of a transaction
Figure 1: Trigger your Risk Engine before sending the transaction to authorization if step-up is needed

Does Google Pay always require step-up?

It depends! When making a transaction, the Google Pay API response will return one of the following:

  • An authenticated payload that can be processed without any further step-up or challenge. For example, when a user adds a payment card to Google Wallet. In this case, the user has already completed identity verification with their issuing bank.
  • A primary account number (PAN) that requires additional authentication measures, such as 3D Secure. For example, a user making a purchase with a payment card previously stored through Chrome Autofill.

You can use the allowedAuthMethods parameter to indicate which authentication methods you want to support for Google Pay transactions:

"allowedAuthMethods": [
    "CRYPTOGRAM_3DS",
    "PAN_ONLY"
]


In this case, you’re asking Google Pay to display the payment sheet for both types. If the user then selects a PAN_ONLY card (a card that is not tokenized and not enabled for contactless payments) from the payment sheet during checkout, step-up is needed. Let's have a look at two concrete scenarios:


In the first scenario, the Google Pay sheet shows a card previously added to Google Wallet. The card art and name of the user's issuing bank are displayed. If the user selects this card during the checkout process, no step-up is required because it would fall under the CRYPTOGRAM_3DS authentication method.

On the other hand, the sheet in the second scenario shows a generic card network icon. This indicates a PAN_ONLY authentication method and therefore needs step-up.

PAN_ONLY vs. CRYPTOGRAM_3DS

Whether to accept both forms of payment is up to you. For CRYPTOGRAM_3DS, the Google Pay API additionally returns a cryptogram and, depending on the network, an eciIndicator. Make sure to use those properties when continuing with authorization.

PAN_ONLY

This authentication method is associated with payment cards from a user’s Google Account. Returned payment data includes the PAN with the expiration month and year.

CRYPTOGRAM_3DS

This authentication method is associated with cards stored as Android device tokens provided by the issuers. Returned payment data includes a cryptogram generated on the device.
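To make the distinction concrete, here is a hedged Python sketch of server-side handling after the token has been decrypted. The field names (authMethod, pan, expirationMonth, expirationYear, cryptogram, eciIndicator) follow the Google Pay token format, but treat the exact shape as an assumption to verify against the current payment data cryptography docs:

```python
def extract_card_details(payment_method_details: dict) -> dict:
    """Collect the fields needed for authorization from a decrypted
    Google Pay token's paymentMethodDetails object (shape assumed)."""
    details = {
        'pan': payment_method_details['pan'],
        'expirationMonth': payment_method_details['expirationMonth'],
        'expirationYear': payment_method_details['expirationYear'],
    }
    if payment_method_details['authMethod'] == 'CRYPTOGRAM_3DS':
        # Device-token transactions also carry a cryptogram and, depending
        # on the network, an eciIndicator; forward both to authorization.
        details['cryptogram'] = payment_method_details['cryptogram']
        if 'eciIndicator' in payment_method_details:
            details['eciIndicator'] = payment_method_details['eciIndicator']
    return details
```

A PAN_ONLY result yields only the PAN and expiration, which is why such transactions are the ones that typically need step-up.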

When should you step-up Google Pay transactions?

When calling the loadPaymentData method, the Google Pay API will return an encrypted payment token (paymentData.paymentMethodData.tokenizationData.token). After decryption, the paymentMethodDetails object contains a property, assuranceDetails, which has the following format:

"assuranceDetails": {
    "cardHolderAuthenticated": true,
    "accountVerified": true
}

Depending on the values of cardHolderAuthenticated and accountVerified, step-up authentication may be required. The following table indicates the possible scenarios and when Google recommends step-up authentication for a transaction:

| cardHolderAuthenticated | accountVerified | Step-up needed |
| ----------------------- | --------------- | -------------- |
| true                    | true            | No             |
| false                   | true            | Yes            |

Step-up can be skipped only when both cardHolderAuthenticated and accountVerified return true.
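That rule is small enough to capture in a helper. A minimal Python sketch (the function name and dict input are illustrative, not part of the Google Pay API):

```python
def needs_step_up(assurance_details: dict) -> bool:
    """Recommend step-up (e.g. 3D Secure) unless both assuranceDetails
    flags are true; missing keys are treated as false, erring toward step-up."""
    return not (assurance_details.get('cardHolderAuthenticated')
                and assurance_details.get('accountVerified'))
```

Treating absent flags as false keeps the helper conservative: an incomplete assuranceDetails object routes the transaction to step-up.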

Next steps

If you are not using assuranceDetails yet, consider doing so now, and make sure to step-up transactions if needed. Also, make sure to check out our guide on Strong Customer Authentication (SCA) if you are processing payments within the European Economic Area (EEA). Follow @GooglePayDevs on Twitter for future updates. If you have questions, mention @GooglePayDevs and include #AskGooglePayDevs in your tweets.

Experts share insights on Firebase, Flutter and the developer community

Posted by Komal Sandhu - Global Program Manager, Google Developer Groups

Rich Hyndman, Manager, Firebase DevRel (left) and Eric Windmill, Developer Relations Engineer, Firebase and Flutter (right)

“Firebase and Flutter offer many tools that ‘just work’, which is something that all apps need. I think you’d be hard pressed to find another combination of front end framework and back end services that let developers make apps quickly without sacrificing quality.”

moving images of Sparky and Dart, respective mascots for Firebase and Flutter
Among the many inspiring experts in the developer communities for Firebase and Flutter are Rich Hyndman and Eric Windmill. Each Googler serves their respective product team from the engineering and community sides and has a keen eye towards the future. Read on to see their outlook on their favorite Firebase and Flutter tools and the developers that inspire them.

===

What is your title, and how long have you been at Google?

Rich: I run Firebase Developer Relations, and I’ve been at Google for around 11 years.

Eric: I’m an engineer on the Flutter team and I’ve been at Google for a year.


Tell us about yourself:

Rich: I’ve always loved tech, from techy toys as a kid to anything that flies. I still get tech-joy when I see new gadgets and devices. I built and raced drones for a while, but mobile/cell phones are the ultimate gadget for me and enabled my career.

Eric: I’m a software engineer, and these days I’m specifically a Developer Relations Engineer. I’m not surprised I’ve ended up here, as I like to joke “I like computers but I like people more.” Outside of work, most of my time is spent thinking about music. I’m pretty poor at playing music, but I’ve always consumed as much as I could. If I had to choose a different job and start over, I’d be a music journalist.


How did you get started in this space?

Rich: I've always loved mobile apps: being able to carry my work in my pocket, play with it, test it, demo it, and be proud of it. From the beginning of my career right up till today, it's still the best. I worked on a few mobile projects pre-Android and was part of an exciting mobile tech startup for a few years, but it was Android that really kick-started my career.

I quickly fell in love with the little green droid and the entire platform, and through a combination of meetups, competition entries and conferences I ended up in contact with Android DevRel at Google.

Firebase is a natural counterpart to Android, and I love being able to support developers from a different angle. Firebase also supports Flutter, Web, and iOS, which has given me the opportunity to learn more about other platforms and meet more developers.

Eric: I got into this space by accident. At my first software job, the company was already using Dart for their web application, and started rebuilding their mobile apps in Flutter soon after I joined. I think that was around 2016 or 2017. Flutter was still in its Alpha stage. I was introduced to Firebase at the same job, and I’ve used various tools from the Firebase SDK ever since.


What are some challenges that you have seen developers facing?

Rich: Developers often want to get up and running with new projects quickly, but then iterate and improve their apps. No-code solutions can be great to start with but aren’t flexible enough down the road. A lower-code solution like Firebase can be quick to get started, and it can also provide control. Bringing Flutter and Firebase together creates a powerful and flexible combination.

Eric: Regardless of the technology, I think the biggest challenge developers face is actually with documentation. It doesn’t matter how good a product is if the docs are hard to find or hard to understand. We’ve seen this ourselves recently as Flutter became an “official” supported platform on Firebase in May 2022. When that happened, we moved the documentation from the Flutter site to the Firebase site, and folks didn’t know how to find the docs. It was an oversight on our part, but it’s a good example of the importance of docs. They deserve way more attention than they get in many, many cases.

image of Sparky and Dart, respective mascots for Firebase and Flutter
What do you think is the most interesting or useful resource to learn more about Firebase & Flutter? Is there a particular library or codelab that everyone should learn?

Rich: The official docs have to be first, located at firebase.google.com. We have a great repository of Learning Pathways, including Add Firebase to your Flutter App. We’re also just launching our new Solutions Portal with over 60 solutions guides indexed already.

Eric: If I have to name only one resource, it’d be this codelab: Get to know Firebase for Flutter. But Firebase offers so many tools; this codelab is just an introduction to what’s possible.


What are some inspiring ways that developers are building with Firebase and Flutter together?

Rich: We’ve had an interesting couple of years at Firebase. Firebase has always been known for powering real-time, data-driven apps. If you used a Covid stats app during the pandemic, there’s a fair chance it was running on Firebase; there was a big surge of new apps.

Eric: Lately I’ve seen an interest in using Flutter to make 2D games, and using some Firebase tools for the back end of the game. I love this. Games are just more fun than apps, of course, but it’s also great to see folks using these technologies in ways beyond their explicit purpose. It shows creativity and excellent problem solving.


What’s a specific use case of Firebase & Flutter technology that excites you?

Rich: Firebase Extensions are very exciting. They are pre-packaged bundles of code that make it easy to add new features to your app from Google and partners like Stripe and Vonage. We just launched the Extensions Marketplace and opened up the ability for developers to build extensions for their own apps through our Provider Alpha program.

Eric: Flutter web and Firebase Hosting are just a no-brainer. You can deploy a Flutter app to the web in no time.


How can developers be successful building on Firebase & Flutter?

Rich: There’s a very powerful combination with Crashlytics, Performance Monitoring, A/B Testing and Remote Config. Developers can quickly improve the stability of their apps whilst also iterating on features to deliver the best experience for their users. We’ve had a lot of success with improving monetization, too. Check out some of our case studies for more details.

Eric: Flutter developers can be successful by leveraging all that Firebase offers. Firebase might seem intimidating because it offers so much, but it excels at being easy to use, and I encourage all web and mobile developers to poke around. They’re likely to find something that makes their lives easier.

image of Firebase and Flutter logos against a dot matrix background
What’s next for the Firebase & Flutter Communities? What might the future look like?

Rich: Over the next year we’ll be focusing on modern app development and some more opinionated guides: better support for Flutter, Kotlin, Jetpack Compose, Swift/SwiftUI, and modern web frameworks.

Eric: There is a genuine effort amongst both teams to support each other. Flutter and Firebase are just such a great pair, that it makes sense for us to encourage our communities to check out one another. In the future, I think this will continue. I think you’ll see a lot of Flutter at Firebase events, and vice versa.


How does Firebase & Flutter help expand the impact of developers?

Rich: Firebase has always focused on helping developers get their apps up and running by providing tools that streamline time-consuming tasks, enabling them to focus on delivering the best app experiences and the most value to their users.

Eric: Flutter is an app-building SDK that is a joy to use. It seriously increases velocity because it’s cross-platform. Firebase and Flutter offer many tools that “just work”, which is something that all apps need. I think you’d be hard pressed to find another combination of front end framework and back end services that let developers make apps quickly without sacrificing quality.



Want to learn more about Google Technologies like Firebase & Flutter? Hoping to attend a DevFest or Google Developer Groups (GDG)? Find a GDG hosting a DevFest near you here.

#WeArePlay | Discover what inspired 4 game creators around the world

Posted by Leticia Lago, Developer Marketing

From exploring the great outdoors to getting your first computer - a seemingly random moment in your life might one day be the very thing which inspires you to go out there and follow your dreams. That’s what happened to four game studio founders featured in our latest release of #WeArePlay stories. Find out what inspired them to create games which are entertaining millions around the globe.

Born and raised in Salvador, Brazil, Filipe was so inspired by the city’s cultural heritage that he studied History before becoming a teacher. One day, he realised games could be a powerful medium to share Brazilian history and culture with the world. So he founded Aoca Game Lab, and their first title, ÁRIDA: Backland’s Awakening, is a survival game based in the historic town of Canudos. Aoca Game Lab took part in the Indie Games Accelerator and have also been selected to receive the Indie Games Fund. With help from these Google Play programs, they will take the game and studio to the next level.

#WeArePlay Marko Peaskel Nis, Serbia

Next, Marko from Serbia. As a chemistry student, he was never really interested in tech - then he received his first computer and everything changed. He quit his degree to focus on his new passion and now owns his successful studio Peaksel with over 480 million downloads. One of their most popular titles is 100 Doors Games: School Escape, with over 100 levels to challenge the minds of even the most experienced players.

#WeArePlay Liene Roadgames Riga Latvia

And now onto Liene from Latvia. She often braves the big outdoors and discovers what nature has to offer - so much so that she organizes team-building, orienteering based games for the team at work. Seeing their joy as they explore the world around them inspired her to create Roadgames. It guides players through adventurous scavenger hunts, discovering new terrain.

#WeArePlay Xin Savy Soda Melbourne, Australia

And lastly, Xin from Australia. After years working in corporate tech, he gave it all up to pursue his dream of making mobile games inspired by the 90’s video games he played as a child. Now he owns his studio Savy Soda, and despite all the success of titles like Pixel Starships, with millions of downloads, his five-year-old child gives him plenty of feedback.

Check out all the stories now at g.co/play/weareplay and stay tuned for even more coming soon.




Open Source Pass Converter for Mobile Wallets

Posted by Stephen McDonald, Developer Programs Engineer, and Nick Alteen, Technical Writer, Engineering, Wallet

Each mobile wallet app implements its own technical specification for passes that can be saved to the wallet. Pass structure and configuration vary by both the wallet application and the specific type of pass, meaning developers have to build and maintain code bases for each platform.

As part of Developer Relations for Google Wallet, our goal is to make life easier for those who want to integrate passes into their mobile or web applications. Today, we're excited to release the open-source Pass Converter project. The Pass Converter lets you take existing passes for one wallet application, convert them, and make them available in your mobile or web application for another wallet platform.

Moving image of Pass Converter successfully converting an external pkpass file to a Google Wallet pass

The Pass Converter launches with support for the Google Wallet and Apple Wallet apps, with plans to add support for others in the future. For example, if you build an event ticket pass for one wallet, you can use the converter to automatically create a pass for the other. The following pass types are supported on their respective platforms:

  • Event tickets
  • Generic passes
  • Loyalty/Store cards
  • Offers/Coupons
  • Flight/Boarding passes
  • Other transit passes

We designed the Pass Converter with flexibility in mind. The following features provide additional customization to your needs.

  • A hints.json file can be provided to the Pass Converter to map Google Wallet pass properties to custom properties in other passes.
  • For pass types that require certificate signatures, you can simply generate the pass structure and hand it off to your existing signing process.
  • Since images in Google Wallet passes are referenced by URLs, the Pass Converter can host the images itself, store them in Google Cloud Storage, or send them to another image host you manage.

If you want to quickly test converting different passes, the Pass Converter includes a demo mode where you can load a simple webpage to test converting passes. Later, you can run the tool via the command line to convert existing passes you manage. When you’re ready to automate pass conversion, the tool can be run as a web service within your environment.

The following command provides a demo web page on http://localhost:3000 to test converting passes.

node app.js demo

The next command converts passes locally. If the output path is omitted, the Pass Converter will output JSON to the terminal (for PKPass files, this will be the contents of pass.json).

node app.js <pass input path> <pass output path>

Lastly, the following command runs the Pass Converter as a web service. This service accepts POST requests to the root URL (e.g. http://localhost:3000/) with multipart/form-data encoding. The request body should include a single pass file.

node app.js
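If you prefer to script the web service rather than use curl, the multipart/form-data body can be built with the Python standard library alone. This is an illustrative sketch; the form field name ("pass") is a guess, so check the repository for the exact field the service expects:

```python
import io
import uuid

def build_multipart_body(field_name: str, filename: str, payload: bytes):
    """Encode a single file as a multipart/form-data body.
    Returns (body_bytes, content_type_header)."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    buf.write(f'--{boundary}\r\n'.encode())
    buf.write(f'Content-Disposition: form-data; name="{field_name}"; '
              f'filename="{filename}"\r\n'.encode())
    buf.write(b'Content-Type: application/octet-stream\r\n\r\n')
    buf.write(payload)
    buf.write(f'\r\n--{boundary}--\r\n'.encode())
    return buf.getvalue(), f'multipart/form-data; boundary={boundary}'

# To POST it (shown commented out so the sketch stays self-contained):
# import urllib.request
# body, content_type = build_multipart_body('pass', 'ticket.pkpass', pkpass_bytes)
# req = urllib.request.Request('http://localhost:3000/', data=body,
#                              headers={'Content-Type': content_type})
# print(urllib.request.urlopen(req).read())
```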


Ready to get started? Check out the GitHub repository where you can try converting your own passes. We welcome contributions back to the project as well!

Machine Learning Communities: Q3 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the third quarter of the year! We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!


TensorFlow/Keras

Load-testing TensorFlow Serving’s REST Interface

Load-testing TensorFlow Serving’s REST Interface by ML GDE Sayak Paul (India) and Chansung Park (Korea) shares the lessons and findings they learned from conducting load tests for an image classification model across numerous deployment configurations.

TFUG Taipei hosted events (Python + Hugging Face Translation + tf.keras.losses, Python + Object detection, Python + Hugging Face Token Classification + tf.keras.initializers) in September and helped community members learn how to use TF and Hugging Face to implement machine learning models to solve problems.

Neural Machine Translation with Bahdanau’s Attention Using TensorFlow and Keras and the related video by ML GDE Aritra Roy Gosthipaty (India) explains the mathematical intuition behind neural machine translation.

Serving a TensorFlow image classification model as RESTful and gRPC based services with TFServing, Docker, and Kubernetes

Automated Deployment of TensorFlow Models with TensorFlow Serving and GitHub Actions by ML GDE Chansung Park (Korea) and Sayak Paul (India) explains how to automate TensorFlow model serving on Kubernetes with TensorFlow Serving and GitHub Actions.

Deploying 🤗 ViT on Kubernetes with TF Serving by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to scale the deployment of a ViT model from 🤗 Transformers using Docker and Kubernetes.

Screenshot of the TensorFlow Forum in the Chinese Language run by the tf.wiki team

Long-term TensorFlow Guidance on tf.wiki Forum by ML GDE Xihan Li (China) provides TensorFlow guidance by answering the questions from Chinese developers on the forum.

photo of a phone with the Hindi letter 'Ohm' drawn on the top half of the screen. Hindi Character Recognition shows the letter Ohm as the Predicted Result below.

Hindi Character Recognition on Android using TensorFlow Lite by ML GDE Nitin Tiwari (India) shares an end-to-end tutorial on training a custom computer vision model to recognize Hindi characters. At a TFUG Pune event, he also gave a presentation titled Building Computer Vision Model using TensorFlow: Part 1.

Using TFlite Model Maker to Complete a Custom Audio Classification App by ML GDE Xiaoxing Wang (China) shows how to use TFLite Model Maker to build a custom audio classification model based on YAMNet and how to import and use the YAMNet-based custom models in Android projects.

SoTA semantic segmentation in TF with 🤗 by ML GDE Sayak Paul (India) and Chansung Park (Korea). The SegFormer model was not previously available in TensorFlow.

Text Augmentation in Keras NLP by ML GDE Xiaoquan Kong (China) explains what text augmentation is and how the text augmentation feature in Keras NLP is designed.

The largest vision model checkpoint (public) in TF (10 billion params) through 🤗 Transformers by ML GDE Sayak Paul (India) and Aritra Roy Gosthipaty (India). The underlying model is RegNet, known for its ability to scale.

A simple TensorFlow implementation of a DCGAN to generate CryptoPunks

CryptoGANs open-source repository by ML GDE Dimitre Oliveira (Brazil) shows simple model implementations following TensorFlow best practices that can be extended to more complex use-cases. It connects the usage of TensorFlow with other relevant frameworks, like HuggingFace, Gradio, and Streamlit, building an end-to-end solution.


TFX

TFX Machine Learning Pipeline from data ingestion in TFRecord to pushing out to Vertex AI

MLOps for Vision Models from 🤗 with TFX by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a vision model (TensorFlow) from 🤗 Transformers using the TF ecosystem.

First release of TFX Addons Package by ML GDE Hannes Hapke (United States). The package has been downloaded a few thousand times (source). Google and other developers maintain it through bi-weekly meetings. Google’s Open Source Peer Award has recognized the work.

TFUG São Paulo hosted TFX T1 | E4 & TFX T1 | E5, where ML GDE Vinicius Caridá (Brazil) shared how to train a model in a TFX pipeline. The fifth episode talks about Pusher: publishing your models with TFX.

Semantic Segmentation model within ML pipeline by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a semantic segmentation task with TFX and various GCP products such as Vertex Pipelines, Training, and Endpoints.


JAX/Flax

Screenshot of Tutorial 2 (JAX): Introduction to JAX+Flax, with GitHub repo and codelab, via the University of Amsterdam

JAX Tutorial by ML GDE Phillip Lippe (Netherlands) is meant to briefly introduce JAX, including writing and training neural networks with Flax.


TFUG Malaysia hosted Introduction to JAX for Machine Learning (video) and Leong Lai Fong gave a talk. The attendees learned what JAX is and its fundamental yet unique features, which make it efficient to use when executing deep learning workloads. After that, they started training their first JAX-powered deep learning model.

TFUG Taipei hosted Python + JAX + Image classification and helped people learn JAX. They shared knowledge about the difference between JAX and NumPy, the advantages of JAX, and how to use it in Colab.

Introduction to JAX by ML GDE João Araújo (Brazil) shared the basics of JAX at Deep Learning Indaba 2022.

A comparison of the performance and overview of issues resulting from changing from NumPy to JAX

Should I change from NumPy to JAX? by ML GDE Gad Benram (Portugal) compares performance and gives an overview of the issues that may result from changing from NumPy to JAX.

Introduction to JAX: efficient and reproducible ML framework by ML GDE Seunghyun Lee (Korea) introduced JAX/Flax and their key features using practical examples. He explained the pure function and PRNG, which make JAX explicit and reproducible, and XLA and mapping functions which make JAX fast and easily parallelized.

Data2Vec Style pre-training in JAX by ML GDE Vasudev Gupta (India) shares a tutorial for demonstrating how to pre-train Data2Vec using the Jax/Flax version of HuggingFace Transformers.

Distributed Machine Learning with JAX by ML GDE David Cardozo (Canada) explained what makes JAX different from TensorFlow.

Image classification with JAX & Flax by ML GDE Derrick Mwiti (Kenya) explains how to build convolutional neural networks with JAX/Flax. And he wrote several articles about JAX/Flax: What is JAX?, How to load datasets in JAX with TensorFlow, Optimizers in JAX and Flax, Flax vs. TensorFlow, etc.


Kaggle

DDPMs - Part 1 by ML GDE Aakash Nain (India) and cait-tf by ML GDE Sayak Paul (India) were announced as Kaggle ML Research Spotlight Winners.

Forward process in DDPMs from Timestep 0 to 100

Fresher on Random Variables, All you need to know about Gaussian distribution, and A deep dive into DDPMs by ML GDE Aakash Nain (India) explain the fundamentals of diffusion models.

In Grandmasters Journey on Kaggle + The Kaggle Book, ML GDE Luca Massaron (Italy) explained how Kaggle helps people in the data science industry and which skills you must focus on apart from the core technical skills.


Cloud AI

How Cohere is accelerating language model training with Google Cloud TPUs by ML GDE Joanna Yoo (Canada) explains what Cohere engineers have done to solve scaling challenges in large language models (LLMs).

ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google

In Using machine learning to transform finance with Google Cloud and Digits, ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google, about how Digits leverages Google Cloud’s machine learning tools to empower accountants and business owners with near-zero latency.

A tour of Vertex AI by TFUG Chennai was for ML, cloud, and DevOps engineers working in MLOps. This session introduced Vertex AI and covered handling datasets and models in Vertex AI, deployment & prediction, and MLOps.

TFUG Abidjan hosted two events with GDG Cloud Abidjan for students and professional developers who want to prepare for a Google Cloud certification: Introduction session to certifications and Q&A, Certification Study Group.

Flow chart showing shows how to deploy a ViT B/16 model on Vertex AI

Deploying ViT on Vertex AI by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to deploy a ViT B/16 model on Vertex AI. They cover some critical aspects of a deployment such as auto-scaling, authentication, endpoint consumption, and load-testing.

Photo collage of AI generated images

TFUG Singapore hosted The World of Diffusion - DALL-E 2, IMAGEN & Stable Diffusion. ML GDE Martin Andrews (Singapore) and Sam Witteveen (Singapore) gave talks named “How Diffusion Works” and “Investigating Prompt Engineering on Diffusion Models” to bring people up-to-date with what has been going on in the world of image generation.

ML GDE Martin Andrews (Singapore) has completed three projects: GCP VM with Nvidia set-up and Convenience Scripts, Containers within a GCP host server with Nvidia pass-through, and Installing MineRL using Containers - with linked code.

Jupyter Services on Google Cloud by ML GDE Gad Benram (Portugal) explains the differences between Vertex AI Workbench, Colab, and Deep Learning VMs.

Google Cloud's Two Towers Recommender and TensorFlow

Train and Deploy Google Cloud's Two Towers Recommender by ML GDE Rubens de Almeida Zimbres (Brazil) explains how to implement the model and deploy it in Vertex AI.


Research & Ecosystem

Poster for #MLPaperReadingClubs, the Machine Learning paper reading club (“Read, Learn and Share the knowledge”) by Nathaly Alarcón and Women in Data Science La Paz (@WIDS_LaPaz)

ML GDE Nathaly Alarcon Torrico (Bolivia) and Women in Data Science La Paz hosted the first session of #MLPaperReadingClubs (video). Nathaly led the session, and the community members participated in reading the ML paper “Zero-shot learning through cross-modal transfer.”

In #MLPaperReadingClubs (video) by TFUG Lesotho, Arnold Raphael volunteered to lead the first session “Zero-shot learning through cross-modal transfer.”

Screenshot of a screenshare of Zero-shot learning through cross-modal transfer to 7 participants in a virtual call

ML Paper Reading Clubs #1: Zero Shot Learning Paper (video) by TFUG Agadir introduced a model that can recognize objects in images even if no training data is available for the objects. TFUG Agadir prepared this event to make people interested in machine learning research and provide them with a broader vision of differentiating good contributions from great ones.

Opening of the Machine Learning Paper Reading Club (video) by TFUG Dhaka introduced ML Paper Reading Club and the group’s plan.

In EDA on SpaceX Falcon 9 launches dataset (Kaggle) (video), TFUG Mysuru & TFUG Chandigarh organizer Aashi Dutt walked through exploratory data analysis on the SpaceX Falcon 9 launches dataset from Kaggle.

Screenshot of ML GDE Qinghua Duan (China) showing how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Introduction to MRC-style dialogue summaries based on BERT by ML GDE Qinghua Duan (China) shows how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Plant disease classification using Deep learning model by ML GDE Yannick Serge Obam Akou (Cameroon) covered plant disease classification with a deep learning model: an end-to-end Android app (open source project) that diagnoses plant diseases.

TensorFlow/Keras implementation of Nystromformer

The Nystromformer GitHub repository by Rishit Dagli provides a TensorFlow/Keras implementation of Nystromformer, a transformer variant that uses the Nyström method to approximate standard self-attention with O(n) complexity, which allows for better scalability.

From a personal notebook to 100k YouTube subscriptions: How Carlos Azaustre turned his notes into a YouTube channel

Posted by Kevin Hernandez, Developer Relations Community Manager

Carlos Azaustre, smiling while holding his Silver Button Creator Award from YouTube
Carlos Azaustre with his Silver Button Creator Award from YouTube
When Carlos Azaustre, Web Technologies GDE, finished university, he started a blog to share his personal notes and learnings, teaching others about Angular and JavaScript. These personal notes later evolved into tutorials that in turn grew into a blossoming YouTube channel with 105k subscribers at the time of this writing. With his 10 years of experience as a Telecommunications Engineer focused on front-end development, he has a breadth of knowledge that he shares with his viewers amid a sea of competing content on YouTube. Carlos has successfully created a channel focused on technical topics related to JavaScript and has some valuable advice for those looking to educate on the platform.

How he got started with his channel

Carlos started his blog with the primary mission of using it as a personal notebook he could reference in the future. As he wrote more, he noticed that people were discovering his notes and sharing them with others. This inspired him to record tutorials based on the topics of his blog posts, and as he began recording them, a secondary mission came to fruition: he wanted to make technical content accessible to the Spanish-speaking community. He reflects, “In the Spanish community, English is difficult for some people, so I started to create content in Spanish to eliminate barriers for people who are interested in learning new technologies. Learning new things is hard, but it’s easier when it’s in your natural language.”

At the beginning of his YouTube journey, he used the platform for side projects and posted irregularly. Then, 2 years ago, he started putting more effort into creating new content, posting one video a week while promoting on social media. This change sparked more comments, and his views and total subscribers increased in tandem.


Tips and tricks he’s applied to his channel

Carlos leverages analytics data to adjust his strategy. He explains, “YouTube provides a lot of analytics tools to see if people are engaging and when they leave the video. So you can adjust your content and the timing (video length) because the timing is important.” The data taught Carlos that longer videos generally don’t do as well. He learned the ideal length for lecture videos, where he’s primarily speaking, is about 6-8 minutes. But when it comes to tutorials, videos that are about 40-60 minutes in length tend to get more views.

Carlos has also taken advantage of YouTube Shorts, a short-form video-sharing platform. “I started to see that Shorts are great to increase your reach because the algorithm pushes your content to people who aren’t subscribed to your channel,” he pointed out. He recommends using YouTube Shorts as an effective way of getting started. When asked about other resources, Carlos mentioned that he primarily draws from his own experience but also turns to books and blogs to help with his channel and to stay up to date with technology.


Choosing video topics

Creating fresh weekly content can be a challenge. To address this, Carlos keeps a notebook of ideas and inspiration for his next videos. For example, he may come across a problem that lacks a clear solution at work and will jot this down. He also keeps track of articles or other tutorials that he feels can either be explained in a more straightforward way or can be translated into Spanish.

Carlos also draws inspiration from the comment section of his videos. He engages with his audience to show there is a real person behind the videos who can guide them. He adds, “This is one of the parts I like the most. They propose new ideas for content that I might’ve missed.”


Advice for starting a channel on technical topics

Carlos’ advice for people looking to start a channel based on technical content is simple: just get started. “If you’re creating great content, people will eventually reach you,” he comments. When he first started his channel, Carlos wasn’t preoccupied with the number of views, comments, or subscriptions. He started his content with himself in mind and would ask himself what kind of content he would want to see. He says, “As long as you’re engaged with the community, you’ll have a great channel. If you try to optimize the content for the algorithm, you’re going to go crazy.” He recommends new content creators start with YouTube Shorts, and once they gain an audience they can create more detailed videos.

It’s also necessary to spark conversation in the comments, and one way you can achieve this is through the title and description of your video. A great title that catches the attention of the viewer, sparks conversation, and implements keywords is essential. A simple way to do this is by asking a question in the title. For example, one of his videos is titled, “How do Promises and Async / Await function in JavaScript?” and also asks a question in the description. This video alone has 250+ comments with viewers answering the question posed by the title and the description. He’s also mindful of what keywords he’s including in his title and finds these keywords by looking at the most popular content with similar topics.

When asked about gear and equipment recommendations, he states that the most important piece of equipment is your microphone, since your voice can be more important than the image, especially if you’re filming a tutorial video. He goes on, “With time, you can update your setup. Maybe your camera is next and then the lighting. Start with your phone or your regular laptop - just start!”

So remember to just get started, and maybe in time, you’ll become the next big content creator for Machine Learning, Google Cloud, Android, or Web Technologies.


You can check out Carlos’ YouTube Channel, find him live on Twitch, or follow him on Twitter or Instagram.

The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

Extending support for App Engine bundled services (Module 17)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Background

App Engine initially launched in 2008, providing a suite of bundled services that made it convenient for applications to access a database (Datastore), a caching service (Memcache), independent task execution (Task Queue), Google Sign-In authentication (Users), large "blob" storage (Blobstore), and other companion services. However, apps leveraging those services can run only on App Engine.

To increase app portability and help Google move toward its goal of having the most open cloud on the market, App Engine launched its 2nd-generation service in 2018, initially without those legacy services. The newer platform allows developers to upgrade apps to the latest language runtimes, such as moving from Python 2 to 3 or Java 8 to 11 (and today, Java 17). One of the major drawbacks of the 1st-generation runtimes is that they're customized, proprietary, and restrictive in what you can and can't use.

Instead, the 2nd-generation platform uses open source runtimes, meaning developers can follow standard development practices, use common/known idioms, and face fewer restrictions on 3rd-party libraries, obviating the need to copy or "vendor" them with your code. Unfortunately, using these newer runtimes required migrating away from App Engine services: while you could upgrade language releases, there was no access to the bundled services, breaking apps or requiring complete rewrites, which made it a showstopper for many users.

Due to their popularity and the desire to ease the upgrade process for customers, the App Engine team restored access to most (but not all) of those services in Fall 2021. Today's Serverless Migration Station video demonstrates how to continue usage of bundled services available to Python 3 developers.

Showing App Engine users how to use bundled services on Python 3


Performing the upgrade

Modernizing the typical Python 2 App Engine app looks something like this:
  1. Migrate from the webapp2 framework (not available in Python 3)
  2. Port from Python 2 to 3, preserve use of bundled services
  3. Optional migration to Cloud standalone or similar 3rd-party services

The first step is to move to a standard Python web framework like Flask, Django, Pyramid, etc. Below is some pseudocode from Migration Module 1 demonstrating how to migrate from webapp2 to Flask:

codeblocks for porting Python 2 sample app from webapp2 to Flask
Step 1: Port Python 2 sample app from webapp2 to Flask

The key changes are bolded in the above code snippets. Notice how the App Engine NDB code [the Visit class definition plus the store_visit() and fetch_visits() functions] is unaffected by this web framework migration. The full webapp2 code sample can be found in the Module 0 repo folder, while the completed migration to Flask is located in the Module 1 repo folder.
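As a rough illustration of that port, the sketch below contrasts a webapp2-style handler with its Flask equivalent. This is a hypothetical reconstruction, not the actual Module 1 code: an in-memory list stands in for the NDB-backed Visit model, and the store_visit()/fetch_visits() names follow the article.

```python
from flask import Flask, render_template_string, request

app = Flask(__name__)

visits = []  # stand-in for the NDB-backed Visit entities (unchanged by the port)

def store_visit(remote_addr, user_agent):
    # In the real app this creates a Visit entity via App Engine NDB.
    visits.insert(0, {'visitor': '%s: %s' % (remote_addr, user_agent)})

def fetch_visits(limit):
    # In the real app this is an NDB query for the most recent visits.
    return visits[:limit]

TEMPLATE = "<ul>{% for v in visits %}<li>{{ v['visitor'] }}</li>{% endfor %}</ul>"

# webapp2 version (before):
#   class MainHandler(webapp2.RequestHandler):
#       def get(self):
#           store_visit(self.request.remote_addr, self.request.user_agent)
#           ...render a template with fetch_visits(10)...
#   app = webapp2.WSGIApplication([('/', MainHandler)])

# Flask version (after): the handler class becomes a routed function.
@app.route('/')
def root():
    store_visit(request.remote_addr, request.headers.get('User-Agent', ''))
    return render_template_string(TEMPLATE, visits=fetch_visits(10))
```

The point the article makes holds here too: only the routing/handler layer changes, while the data-access functions move over untouched.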

After your app has been ported to a new framework, you're free to upgrade to Python 3 while preserving access to the bundled services if your app uses any. Below is pseudocode demonstrating how to upgrade the same sample app to Python 3, along with the code changes needed to continue using App Engine NDB:

codeblocks for porting sample app to Python 3, preserving use of NDB bundled service
Step 2: Port sample app to Python 3, preserving use of NDB bundled service

The original app was designed to work under both Python 2 and 3 interpreters, so no language changes were required in this case. We added an import of the new App Engine SDK followed by the key update wrapping the WSGI object so the app can access the bundled services. As before, the key updates are bolded. Some updates to configuration are also required, and those are outlined in the documentation and the (Module 17) codelab.
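The "wrap the WSGI object" step can be pictured as ordinary WSGI middleware. The sketch below is a conceptual stand-in, not the SDK's implementation: per the documentation, the real change imports the App Engine Python 3 SDK and wraps the app (e.g. app.wsgi_app = wrap_wsgi_app(app.wsgi_app)) so each request can reach the bundled services.

```python
# Conceptual sketch only: a generic middleware showing the wrapping pattern.
# The actual SDK call (google.appengine.api.wrap_wsgi_app) performs real
# per-request setup so bundled services like NDB work; this stand-in just
# marks the environ (a hypothetical key) to make the delegation visible.

def wrap_wsgi_app(wsgi_app):
    """Return a WSGI callable that runs setup, then delegates to wsgi_app."""
    def middleware(environ, start_response):
        environ['example.bundled_services_ready'] = True  # hypothetical marker
        return wsgi_app(environ, start_response)
    return middleware

def inner_app(environ, start_response):
    # Minimal WSGI app that reports whether the setup step ran.
    ready = environ.get('example.bundled_services_ready', False)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('services ready: %s' % ready).encode()]

app = wrap_wsgi_app(inner_app)  # mirrors the one-line wrapping described above
```

Because the wrapper preserves the WSGI interface, the rest of the app (routes, NDB calls) is unaware anything changed, which is why the migration is nearly hands-free.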

The NDB code is also left untouched in this migration. Not all of the bundled services feature such a hands-free migration, and we hope to cover some of the more complex ones ahead in Module 22. Java, PHP, and Go users have it even better, requiring fewer or no code changes at all. The Python 2 Flask sample is located in the Module 1 repo folder, and the resulting Python 3 app can be found in the Module 1b repo folder.

The immediate benefit of step two is the ability to upgrade to a more current version of language runtime. This leaves the third step of migrating off the bundled services as optional, especially if you plan on staying on App Engine for the long-term.


Additional options

If you decide to migrate off the bundled services, you can do so on your own timeline. It's worth considering if you ever want to move to modern serverless platforms such as Cloud Functions or Cloud Run, or to lower-level platforms that offer more control, like GKE, our managed Kubernetes service, or Compute Engine VMs.

Step three is also where the rest of the Serverless Migration Station content may be useful:

*code samples and codelabs available; videos forthcoming

As far as moving to modern serverless platforms, if you want to break apart a large App Engine app into multiple microservices, consider Cloud Functions. If your organization has added containerization to its software development workflow, consider Cloud Run. It's a natural fit if you're familiar with containers and Docker, but even if you or your team don't have that experience, Cloud Buildpacks can do the heavy lifting for you. Here are the relevant migration modules to explore:


    Wrap-up

    Early App Engine users appreciate the convenience of the platform's bundled services, and after listening to user feedback, adding them back to 2nd-generation runtimes is another way we can help developers modernize their apps. Whether upgrading to newer language runtimes to stay on App Engine and continue to use its bundled services, migrating to Cloud standalone products, or shifting to other serverless platforms, the Google Cloud team aims to provide the tools to help streamline your modernization efforts.

    All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. Today's video features a special guest to provide a teaser of what to expect for Java. For additional video content, check out the broader Serverless Expeditions series.

    Introducing Developer Journey: November 2022

    Posted by Lyanne Alfaro, DevRel Program Manager, Google Developer Studio

    Developer Journey is a new monthly series to spotlight diverse and global developers sharing relatable challenges, opportunities, and wins in their journey. Every month, we will spotlight developers around the world, the Google tools they leverage, and the kind of products they are building.

    We are kicking off #DevJourney in November to give members of our community the chance to share their stories through our social platforms. This month, it’s our pleasure to feature four members spanning products including Google Developer Expert, Android, and Cloud. Enjoy reading through their entries below and be on the lookout on social media platforms, where we will also showcase their work.

    Headshot of Sierra OBryan smiling
    Sierra OBryan, Google Developer Expert, Android

    Sierra OBryan

    Google Developer Expert, Android
    Cincinnati, OH
    Twitter and Instagram: @_sierraOBryan

    What Google tools have you used?

    As an Android developer, I use many Google tools every day like Jetpack Compose and other Android libraries, Android Studio, and Material Design. I also like to explore some of the other Google tools in personal projects. I’ve built a Flutter app, poked around in Firebase, and trained my own ML model using the model maker.

    Which tool has been your favorite to use? Why?

    It’s hard to choose one but I’m really excited about Jetpack Compose! It’s really exciting to be able to work with a new and evolving framework with so much energy and input coming from the developer community. Compose makes it easier to quickly build things that previously could be quite complex like animations and custom layouts, and has some very cool tooling in Android Studio like Live Edit and recomposition counts; all of which improve developer efficiency and app quality. One of my favorite things about Compose in general is that I think it will make Android development more accessible to more people because it is more intuitive and easier to get started and so we’ll see the Android community continue to grow with new perspectives and backgrounds bringing in new ideas.

    Google also provides a lot of really helpful tools for building more accessible mobile apps and I’m really glad these important tools also exist! The Accessibility Scanner is available on Google Play and can identify some common accessibility pitfalls in your app with tips about how to fix them and why it’s important. The “Accessibility in Jetpack Compose” code lab is a great starting place for learning more about these concepts.

    Please share with us about something you’ve built in the past using Google tools.

    A favorite personal project is a (very) simple flower-identifying app built using ML Kit’s Image Labeling API and Android. After the 2020 ML-focused Android Developer Challenge, I was very curious about ML Kit but also still quite intimidated by the idea of machine learning. It was surprisingly easy to follow the documentation to build and tinker with a custom model and then add it to an Android app. I just recently migrated the app to Jetpack Compose.

    What advice would you give someone starting in their developer journey?

    Find a community! Like most things, developing is more fun with friends.


    Photo of Harun Wangereka smiling
    Harun Wangereka, Google Developer Expert, Android

    Harun Wangereka

    Google Developer Expert, Android

    What Google tools have you used?

    I'm an Android Engineer by profession. The tools I use on a day-to-day basis are Android as the framework, Android Studio as the IDE, and some of the Jetpack Libraries from the Android Team at Google.

    Which tool has been your favorite to use? Why?

    Jetpack libraries. I love these libraries because they solve most of the common pain points we, as Android developers, faced before they came along. They also concisely solve them and provide best practices for Android developers to follow.

    Please share with us about something you've built in the past using Google tools.

    At my workplace, Apollo Agriculture, I collaborate with cross-functional teams to define, design and ship new features for the agent's and agro-dealer’s Android apps, which are entirely written in Kotlin. We have Apollo for Agents, an app for agents to perform farmer-related tasks and Apollo Checkout, which helps farmers check out various Apollo products. With these two apps, I'm assisting Apollo Agriculture to make financing for small-scale farmers accessible to everyone.

    What advice would you give someone starting in their developer journey?

    Be nice to yourself as you learn. The journey can be quite hard at times but remember to give yourself time. You can never know all the things at once, so try to learn one thing at a time. Do it consistently and it will pay off in the very end. Remember also to join existing developer communities in your area. They help a lot!


    Selfie of Richard Knowles at the beach
    Richard Knowles, Android Developer

    Richard Knowles

    Android Developer
    Los Angeles, CA

    What Google tools have you used?

    I’ve been building Android apps since 2011, when I was in graduate school studying for my Master’s Degree in Computer Engineering. I built my first Android app using Eclipse which seemed to be a great tool at the time, at least until Google’s Android Studio was released for the first time in 2014. Android Studio is such a powerful and phenomenal IDE! I’ve been using it to build apps for Android phones, tablets, smartwatches, and TV. It is amazing how the Android Accessibility Test Framework integrates with Android Studio to help us catch accessibility issues in our layouts early on.

    Which tool has been your favorite to use? Why?

    My favorite tool by far is the Accessibility Scanner. As a developer with a hearing disability, accessibility is very important to me. I was born with a sensorineural hearing loss, and wore hearing aids up until I was 18 when I decided to get a cochlear implant. I am a heavy closed-captioning user and I rely on accessibility every single day. When I was younger, before the smartphone era, even through the beginning of the smartphone era, it was challenging for me to fully enjoy TV or videos that didn’t have captions. I’m so glad that the world is starting to adapt to those with disabilities and the awareness of accessibility has increased. In fact, I chose the software engineering field because I wanted to create software or apps that would improve other people’s lives, the same way that technology has made my life easier. Making sure the apps I build are accessible has always been my top priority. This is why the Accessibility Scanner is one of my favorite tools: It allows me to efficiently test how accessible my user-facing changes are, especially for those with visual disabilities.

    Please share with us about something you’ve built in the past using Google tools.

    As an Android engineer on Twitter’s Accessibility Experience Team, one of our initiatives is to improve the experience of image descriptions and the use of alt text. Did you know that when you put images in your Tweets on Twitter, you can add descriptions to make them accessible to people who can’t see images? If yes, that is great! But do you always remember to do it? Don’t worry if not - you’re not alone. Many people including myself forget to add image descriptions. So, we implemented Alt Text reminders which allow users to opt in to be notified when they tweet images without descriptions. We also have been working to expose alt text for all images and GIFs. What that means is, we are now displaying an “ALT” badge on images that have associated alternative text or image descriptions. In general, alt text is primarily used for Talkback users but we wanted to allow users not using a screen reader to know which images have alternative text, and of course allow them to view the image description by selecting the “ALT” badge. This feature helped achieve two things: 1) Users that may have low-vision or other disabilities that would benefit from available alternative text can now access that text; 2) Users can know which images have alternative text before retweeting those images. I personally love this feature because it increases the awareness of Alt text.

    What advice would you give someone starting in their developer journey?

    What an exciting time to start! I have three tips I'd love to share:

    1) Don’t start coding without reviewing the specifications and designs carefully. Draw and map out the architecture and technical design of your work before you jump into the code. In other words, work smarter, not harder.

    2) Take the time to read through the developer documentation and the source code. You will become an expert more quickly if you understand what is happening behind the scenes. When you call a function from a library or SDK, get in the habit of looking at the source code and implementation of that function so that you can not only learn as you code, but also find opportunities to improve performance.

    3) Learn about accessibility as early as possible, preferably at the same time as learning everything else, so that it becomes a habit and not something you have to force later on.


    Headshot of Lynn Langit smiling
    Lynn Langit, GDE/Cloud

    Lynn Langit

    GDE/Cloud
    Minnesota
    Twitter: @lynnlangit

    What Google tools have you used?

    So many! My favorite Google Cloud services are Cloud Run, BigQuery, and Dataproc. My favorite tools are the Cloud Shell Editor, SSH-in-browser for Compute Engine, and BigQuery Execution Details.

    Which tool has been your favorite to use? Why?

    I love to use the open source Variant Transforms tool for VCF [or genomic] data files. This tool gets bioinformaticians working with BigQuery quickly. Researchers use Variant Transforms to validate and load VCF files into BigQuery. Variant Transforms supports genome-scale data analysis workloads, which can contain hundreds of thousands of files, millions of genomic samples, and billions of input records.

    Please share with us about something you’ve built in the past using Google tools.

    I have been working with teams around the world to build, scale, and deploy multiple genomic-scale data pipelines for human health. Recent use cases are data analysis in support of Covid or cancer drug development.

    What advice would you give someone starting in their developer journey?

    Expect to spend 20-25% of your professional time learning for the duration of your career. All public cloud services, including Google Cloud, evolve constantly. Building effectively requires knowing both cloud patterns and services at a deep level.

    Paul Kinlan shares his passion for web development and how to get involved at DevFest

    Posted by Komal Sandhu - Global Program Manager, Google Developer Groups

    “The pace of technology is changing so quickly that it’s impossible sometimes to know where to start and how. What are the things I need to focus on? It’s just too hard to work out. I’m motivated to give developers a clear direction that cuts through a lot of this challenge.”

    Learn Chrome tools and tips from Chrome Lead, Paul Kinlan, and hear from him first-hand on how to get involved.


    Among the many inspiring experts in the Chrome developer community is Paul Kinlan, a Googler who leads the Chrome & Web Platform Developer Relations team. Read on to see Paul’s outlook on his favorite Chrome tools and the Chrome developers that inspire him.

    Tell us about yourself:

    My name is Paul Kinlan, and I lead the Chrome & Web Platform Developer Relations team. I’m in a very lucky position, in that I get to work with a huge range of people who are passionate about the web and put their whole careers into continuing to help the web thrive for decades to come. If you are interested, you can follow my site: paul.kinlan.me

    What is your origin story?

    “I grew up on the Wirral in the UK, a peninsula located in North West England and part of Wales. I’ve been surrounded by computers since my earliest childhood memories, like watching my dad fix computers in the house (it's hard to count how many warnings I got to not touch the capacitor at the back of the monitor... but it looked fun).

    I also was going to computer clubs and watching the demo and cracking scenes (I might have “loaned” some games from people) and was keen on finding friends in school who were just like me and liked games & computers.”

    How did you get started in this space? Why did you get into Web technology specifically?

    “When I was a kid, my dad tried to get me to program, but I just didn’t get it. Then, when I was about 12 years old and first saw the Street Fighter arcade game, it clicked. I got the concept of loops, reading joysticks, and getting things on the screen.

    At the same time, my grandad was struggling to pick his lottery numbers, and I thought I could help him with some software. I fired up QBasic, read the manual and got started. I almost quit though, when I didn’t realize the US had a different spelling for colour... (I do wonder how life would have been different if I’d stopped there).

    Jump forward a couple of years, and the web came about, and I was just tinkering, and I realized that I could build simple sites and applications with a bit of Perl and HTML. I was hooked, started a business, and went from there. Now I’m here, on the Chrome team, hoping that I can offer the same opportunities to developers that I had.”

    What are some challenges that you have observed developers being faced with?

    “Information overload. The pace of technology is changing so quickly that it’s impossible sometimes to know where to start and how. What are the things I need to focus on? It’s just too hard to work out. I’m motivated to give developers a clear direction that cuts through a lot of this challenge.”

    What do you think is the most interesting or useful learning resource for learning more about Chrome & Web? Is there a particular library or codelab that everyone should learn?

    “I’m biased, but https://web.dev/learn is a great resource that covers some core fundamentals of web development, and we’re always improving it with the latest guidance on how to do good web development.

    I know most people aren’t like me, but I found engrossing myself in programming reference materials (combined with a lot of tinkering) was a great way to start, and if you combine MDN (Mozilla Developer Network) with sites like glitch.com or GitHub, you have the ability to quickly learn and test ideas without having to have any installed software. It’s a really incredible time to be a developer.”

    What are some most surprising or inspiring ways developers and technologists are building together using Chrome and Web?

    "Oh – amazing question! 

    Right now, the intersection of Web and ML is incredibly exciting. People are building sites and apps that do things that we never thought were possible and are then able to give people access to it via a simple URL." 

    "I was watching the folks over at Corridor Crew (Visual effects technologists), and they had this challenge to rotoscope a person out of a video, replace the background with a different video, and then put the person back on top - the fastest solution was built in the browser using ML.

    At the same time, I also love that people are bringing apps to the web that we never thought would be possible there, such as Photoshop and Audacity. People are now building full-blown video editors on the web, enabling anyone with a browser to become a video producer. It’s amazing.

    The web enables so much, and so much that I never thought possible, just at the click of a link. Every day, I see something that excites me, and that’s why I love it.”

    What’s a specific use case of Chrome / Web technology that excites you?

    “I’m personally very passionate about the Fugu (deep hardware) set of APIs because they enable entire classes of businesses to come to the web for the first time.

    I’m also very excited about the new range of CSS and UI related APIs because they make once complex things incredibly simple. The Web is primarily a visual medium; however, the perception of quality has lagged what people get on other platforms (such as Android and iOS apps), and these new primitives and concepts will enable richer and more fluid user interfaces, with less work needed from the developer or designer.”

    How can developers be successful building on Chrome & Web?

    “It all depends on the stage you're at - if you’re an established site, then I would look to improve the user experience with things like Core Web Vitals.

    If you are just starting, just start - there are so many tools that now let you prototype in the browser and get something that people can use incredibly quickly. In the past, you had to worry about the full stack (hosting to front end); now that is much less of an issue.”

    What’s next for Chrome & Web Community? What might the future look like?

    “Whatever I say will be wrong - but I like these questions, so I hope people will humor me. It looks like it takes about 3-5 years for a feature launched in one browser to become available across Blink, WebKit and Gecko, so with that in mind, the near future probably looks a lot like right now, but more evenly spread (in terms of compatibility) - projects like Interop 202X are making it easier to build sites that work everywhere.

    The further future, though? I gave a talk years ago about the concept of “The Headless Web” - where I see a lot of opportunities for services or assistants like Siri or Google Assistant to make more sense of a web page and let you interact with it (and not just read it back).

    At the same time, there are heaps of other platforms that are changing the definition of what the web means. Facebook, WeChat, and others are browsers and platforms in their own right, with hooks back into their own ecosystems. When I look at the billions of people that have come online in the last couple of years, as the world went mobile (and the billions more still to come online) - will they use the browser as we know it? Or will they use these ‘alternative browser’ platforms?

    All I know is that we need to keep making the experience of the web better for everyone.”

    What is the focus for Web & Chrome currently and why?

    “Chrome is still focused on the principles that it set out at its launch: a web that is speedy, simple and secure. When you look through that lens, so much of our work has been in service of these. Take, for example, Core Web Vitals - we worked out a set of metrics that could be used to determine whether your site offers a great user experience, and I believe it’s fundamentally changed the web. Or, on another axis, look at technologies like WASM, which enable native code (e.g., C/C++) to run safely in a sandbox in the browser, at speeds that are getting close to what you would expect an installed application to reach.”

    How do Web & Chrome help expand the impact of developers?

    “Universal access. The link enables this, and we need to fight to keep it open and accessible to all.”

    Anything else you would like to share with the community of Google developers around the world?

    “There is a lot of turmoil right now in the world; spend time listening to people, supporting them, and raising them up. When I got started, the community around me was so supportive and helped me more than I could help it - I use my time now to give people from all backgrounds the opportunities that I was fortunate to have access to. I hope that others can do the same.”


    Want to learn more about Google Web Technologies and Google Chrome? Hoping to attend a DevFest or Google Developer Groups (GDG)? Find a GDG hosting a DevFest near you here.