Machine Learning Communities: Q3 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore highlights and accomplishments of the vast Google Machine Learning communities over the third quarter of the year! We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!


TensorFlow/Keras

Load-testing TensorFlow Serving’s REST Interface

Load-testing TensorFlow Serving’s REST Interface by ML GDE Sayak Paul (India) and Chansung Park (Korea) shares lessons and findings from load-testing an image classification model across numerous deployment configurations.
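
As a rough illustration of what such a load test involves, here is a minimal, standard-library-only sketch that fires concurrent requests at a hypothetical TF Serving REST endpoint and summarizes latency percentiles. The endpoint URL, model name, and payload shape are placeholders, and this is not the tooling used in the article:

```python
import json
import threading
import time
import urllib.request

# Hypothetical TF Serving REST endpoint; adjust the model name and
# payload to match your own deployment.
ENDPOINT = "http://localhost:8501/v1/models/resnet:predict"

def send_request(payload: bytes, latencies: list, lock: threading.Lock) -> None:
    """Send one predict request and record its wall-clock latency."""
    start = time.perf_counter()
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        urllib.request.urlopen(req, timeout=10).read()
    except OSError:
        return  # count only successful requests
    with lock:
        latencies.append(time.perf_counter() - start)

def percentile(latencies: list, pct: float) -> float:
    """Nearest-rank percentile of the recorded latencies."""
    ordered = sorted(latencies)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

def run_load_test(num_requests: int = 100) -> dict:
    """Fire num_requests concurrent requests and report p50/p99 latency."""
    payload = json.dumps({"instances": [[0.0] * 224]}).encode()
    latencies, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=send_request, args=(payload, latencies, lock))
        for _ in range(num_requests)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {"p50": percentile(latencies, 50), "p99": percentile(latencies, 99)}
```

In practice you would sweep this over the deployment configurations under test (CPU vs. GPU nodes, replica counts, batching settings) and compare the percentile summaries.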

TFUG Taipei hosted events (Python + Hugging Face Translation + tf.keras.losses, Python + Object Detection, Python + Hugging Face Token Classification + tf.keras.initializers) in September, helping community members learn how to use TF and Hugging Face to implement machine learning models that solve problems.

Neural Machine Translation with Bahdanau’s Attention Using TensorFlow and Keras and the related video by ML GDE Aritra Roy Gosthipaty (India) explains the mathematical intuition behind neural machine translation.
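
The core of Bahdanau-style (additive) attention fits in a few lines. The sketch below is a plain-Python toy, with scalar loops instead of tensor ops and hypothetical weight matrices `w1`, `w2` and vector `v`, that computes additive scores, softmax weights, and a context vector:

```python
import math

def additive_scores(query, keys, w1, w2, v):
    """Bahdanau attention: score(q, k) = v . tanh(W1 q + W2 k)."""
    scores = []
    for k in keys:
        hidden = [
            math.tanh(sum(w1[i][j] * query[j] for j in range(len(query)))
                      + sum(w2[i][j] * k[j] for j in range(len(k))))
            for i in range(len(v))
        ]
        scores.append(sum(v[i] * hidden[i] for i in range(len(v))))
    return scores

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def context_vector(query, keys, values, w1, w2, v):
    """Weighted sum of encoder states, weighted by attention."""
    weights = softmax(additive_scores(query, keys, w1, w2, v))
    dim = len(values[0])
    return [sum(w * val[d] for w, val in zip(weights, values)) for d in range(dim)]
```

The decoder query attends over all encoder states, and the resulting context vector is what gets fed into the next decoding step.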

Serving a TensorFlow image classification model as RESTful and gRPC based services with TFServing, Docker, and Kubernetes

Automated Deployment of TensorFlow Models with TensorFlow Serving and GitHub Actions by ML GDE Chansung Park (Korea) and Sayak Paul (India) explains how to automate TensorFlow model serving on Kubernetes with TensorFlow Serving and GitHub Action.

Deploying 🤗 ViT on Kubernetes with TF Serving by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to scale the deployment of a ViT model from 🤗 Transformers using Docker and Kubernetes.

Screenshot of the TensorFlow Forum in the Chinese Language run by the tf.wiki team

Long-term TensorFlow Guidance on tf.wiki Forum by ML GDE Xihan Li (China) provides TensorFlow guidance by answering the questions from Chinese developers on the forum.

photo of a phone with the Hindi letter 'Ohm' drawn on the top half of the screen. Hindi Character Recognition shows the letter Ohm as the predicted result below.

Hindi Character Recognition on Android using TensorFlow Lite by ML GDE Nitin Tiwari (India) shares an end-to-end tutorial on training a custom computer vision model to recognize Hindi characters. In TFUG Pune event, he also gave a presentation titled Building Computer Vision Model using TensorFlow: Part 1.

Using TFlite Model Maker to Complete a Custom Audio Classification App by ML GDE Xiaoxing Wang (China) shows how to use TFLite Model Maker to build a custom audio classification model based on YAMNet and how to import and use the YAMNet-based custom models in Android projects.

SoTA semantic segmentation in TF with 🤗 by ML GDE Sayak Paul (India) and Chansung Park (Korea) brings the SegFormer model, which was previously unavailable in TensorFlow, to the TF ecosystem.

Text Augmentation in Keras NLP by ML GDE Xiaoquan Kong (China) explains what text augmentation is and how the text augmentation feature in Keras NLP is designed.
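
For intuition, text-augmentation ops of the kind such libraries provide can be sketched in plain Python. The functions below (random word swap and random deletion) are illustrative toys, not the Keras NLP API:

```python
import random

def random_swap(words, n, rng):
    """Swap n random pairs of words (a common text-augmentation op)."""
    words = list(words)
    for _ in range(n):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p, rng):
    """Drop each word with probability p, keeping at least one word."""
    kept = [w for w in words if rng.random() > p]
    return kept or [rng.choice(words)]
```

Applying such label-preserving perturbations during training enlarges the effective dataset, which is the same idea the Keras NLP feature builds on, just at the tensor level.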

The largest vision model checkpoint (public) in TF (10 Billion params) through 🤗 Transformers by ML GDE Sayak Paul (India) and Aritra Roy Gosthipaty (India). The underlying model is RegNet, known for its ability to scale.

A simple TensorFlow implementation of a DCGAN to generate CryptoPunks

CryptoGANs open-source repository by ML GDE Dimitre Oliveira (Brazil) shows simple model implementations following TensorFlow best practices that can be extended to more complex use-cases. It connects the usage of TensorFlow with other relevant frameworks, like HuggingFace, Gradio, and Streamlit, building an end-to-end solution.


TFX

TFX machine learning pipeline from data ingestion in TFRecord to pushing out to Vertex AI

MLOps for Vision Models from 🤗 with TFX by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a vision model (TensorFlow) from 🤗 Transformers using the TF ecosystem.

First release of TFX Addons Package by ML GDE Hannes Hapke (United States). The package has been downloaded a few thousand times (source). Google and other developers maintain it through bi-weekly meetings. Google’s Open Source Peer Award has recognized the work.

TFUG São Paulo hosted TFX T1 | E4 & TFX T1 | E5, where ML GDE Vinicius Caridá (Brazil) shared how to train a model in a TFX pipeline. The fifth episode covers Pusher: publishing your models with TFX.

Semantic Segmentation model within ML pipeline by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a semantic segmentation task with TFX and various GCP products such as Vertex Pipelines, Training, and Endpoints.


JAX/Flax

Screenshot of Tutorial 2 (JAX): Introduction to JAX+Flax, with GitHub repo and Codelab, via the University of Amsterdam

JAX Tutorial by ML GDE Phillip Lippe (Netherlands) is meant to briefly introduce JAX, including writing and training neural networks with Flax.


TFUG Malaysia hosted Introduction to JAX for Machine Learning (video) and Leong Lai Fong gave a talk. The attendees learned what JAX is and its fundamental yet unique features, which make it efficient to use when executing deep learning workloads. After that, they started training their first JAX-powered deep learning model.

TFUG Taipei hosted Python + JAX + Image classification to help people learn JAX and how to use it in Colab. They covered the differences between JAX and NumPy and the advantages of JAX.

Introduction to JAX by ML GDE João Araújo (Brazil) shared the basics of JAX at Deep Learning Indaba 2022.

A comparison of the performance and overview of issues resulting from changing from NumPy to JAX

Should I change from NumPy to JAX? by ML GDE Gad Benram (Portugal) compares performance and gives an overview of the issues that may arise when migrating from NumPy to JAX.

Introduction to JAX: efficient and reproducible ML framework by ML GDE Seunghyun Lee (Korea) introduced JAX/Flax and their key features using practical examples. He explained pure functions and PRNGs, which make JAX explicit and reproducible, and XLA and mapping functions, which make JAX fast and easy to parallelize.
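
The explicit-PRNG idea is worth a tiny illustration. The sketch below mimics JAX's key-splitting style in plain Python, using a hash in place of JAX's counter-based PRNG purely for illustration: the same key always yields the same draw, and splitting a key yields independent sub-keys, so randomness is fully reproducible:

```python
import hashlib

def split(key, n=2):
    """Derive n new independent keys from one key (JAX-style splitting sketch)."""
    return tuple(
        int.from_bytes(hashlib.sha256(f"{key}:{i}".encode()).digest()[:8], "big")
        for i in range(n)
    )

def uniform(key):
    """Map a key deterministically to a float in [0, 1)."""
    return (key % 10**9) / 10**9
```

Because there is no hidden global state, re-running a program with the same root key reproduces every random draw, which is exactly what makes JAX experiments reproducible by construction.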

Data2Vec Style pre-training in JAX by ML GDE Vasudev Gupta (India) shares a tutorial demonstrating how to pre-train Data2Vec using the JAX/Flax version of HuggingFace Transformers.

Distributed Machine Learning with JAX by ML GDE David Cardozo (Canada) explained what makes JAX different from TensorFlow.

Image classification with JAX & Flax by ML GDE Derrick Mwiti (Kenya) explains how to build convolutional neural networks with JAX/Flax. He also wrote several articles about JAX/Flax: What is JAX?, How to load datasets in JAX with TensorFlow, Optimizers in JAX and Flax, Flax vs. TensorFlow, and more.


Kaggle

DDPMs - Part 1 by ML GDE Aakash Nain (India) and cait-tf by ML GDE Sayak Paul (India) were announced as Kaggle ML Research Spotlight Winners.

Forward process in DDPMs from Timestep 0 to 100

Fresher on Random Variables, All you need to know about Gaussian distribution, and A deep dive into DDPMs by ML GDE Aakash Nain (India) explain the fundamentals of diffusion models.
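
The forward (noising) process these posts describe has a convenient closed form: x_t can be sampled directly from x_0 as x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1 − β). A minimal plain-Python sketch (illustrative only, not the authors' code):

```python
import math
import random

def alpha_bar(t, betas):
    """Cumulative product of (1 - beta) up to and including step t."""
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    a = alpha_bar(t, betas)
    return [
        math.sqrt(a) * x + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
        for x in x0
    ]
```

As t grows, ᾱ_t shrinks toward zero, so the sample loses its signal and approaches pure Gaussian noise, which is the behavior the "Timestep 0 to 100" figure above visualizes.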

In Grandmasters Journey on Kaggle + The Kaggle Book, ML GDE Luca Massaron (Italy) explained how Kaggle helps people in the data science industry and which skills you must focus on apart from the core technical skills.


Cloud AI

How Cohere is accelerating language model training with Google Cloud TPUs by ML GDE Joanna Yoo (Canada) explains what Cohere engineers have done to solve scaling challenges in large language models (LLMs).

ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google

In Using machine learning to transform finance with Google Cloud and Digits, ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google, about how Digits leverages Google Cloud’s machine learning tools to empower accountants and business owners with near-zero latency.

TFUG Chennai hosted A tour of Vertex AI for ML, cloud, and DevOps engineers working in MLOps. The session introduced Vertex AI and covered handling datasets and models in Vertex AI, deployment & prediction, and MLOps.

TFUG Abidjan hosted two events with GDG Cloud Abidjan for students and professional developers who want to prepare for a Google Cloud certification: Introduction session to certifications and Q&A, Certification Study Group.

Flow chart showing how to deploy a ViT B/16 model on Vertex AI

Deploying 🤗 ViT on Vertex AI by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to deploy a ViT B/16 model on Vertex AI. They cover some critical aspects of a deployment such as auto-scaling, authentication, endpoint consumption, and load-testing.

Photo collage of AI generated images

TFUG Singapore hosted The World of Diffusion - DALL-E 2, IMAGEN & Stable Diffusion. ML GDE Martin Andrews (Singapore) and Sam Witteveen (Singapore) gave talks titled “How Diffusion Works” and “Investigating Prompt Engineering on Diffusion Models” to bring people up to date with what has been going on in the world of image generation.

ML GDE Martin Andrews (Singapore) has completed three projects: GCP VM with Nvidia set-up and Convenience Scripts; Containers within a GCP host server, with Nvidia pass-through; and Installing MineRL using Containers, with linked code.

Jupyter Services on Google Cloud by ML GDE Gad Benram (Portugal) explains the differences between Vertex AI Workbench, Colab, and Deep Learning VMs.

Google Cloud's Two Towers Recommender and TensorFlow

Train and Deploy Google Cloud's Two Towers Recommender by ML GDE Rubens de Almeida Zimbres (Brazil) explains how to implement the model and deploy it in Vertex AI.
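
At retrieval time, a two-tower model reduces to scoring the user tower's output embedding against every item embedding. A plain-Python toy sketch of that serving step (real systems replace the exhaustive scan with approximate nearest-neighbour search; the item names here are hypothetical):

```python
def dot(u, v):
    """Inner product between two embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def top_k(user_embedding, item_embeddings, k=2):
    """Score every item against the user tower's output and return the
    k highest-scoring item ids."""
    scored = sorted(
        item_embeddings.items(),
        key=lambda kv: dot(user_embedding, kv[1]),
        reverse=True,
    )
    return [item_id for item_id, _ in scored[:k]]
```

The two towers are trained so that a user's embedding lands near the embeddings of items that user engaged with, which is what makes this simple dot-product ranking work.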


Research & Ecosystem

Women in Data Science La Paz: Machine Learning paper reading club. Read, Learn and Share the knowledge. #MLPaperReadingClubs, Nathaly Alarcón, @WIDS_LaPaz

The first session of #MLPaperReadingClubs (video) by ML GDE Nathaly Alarcon Torrico (Bolivia) and Women in Data Science La Paz. Nathaly led the session, and the community members participated in reading the ML paper “Zero-shot learning through cross-modal transfer.”

In #MLPaperReadingClubs (video) by TFUG Lesotho, Arnold Raphael volunteered to lead the first session “Zero-shot learning through cross-modal transfer.”

Screenshot of a screenshare of Zero-shot learning through cross-modal transfer to 7 participants in a virtual call

ML Paper Reading Clubs #1: Zero Shot Learning Paper (video) by TFUG Agadir introduced a model that can recognize objects in images even if no training data is available for the objects. TFUG Agadir prepared this event to make people interested in machine learning research and provide them with a broader vision of differentiating good contributions from great ones.

Opening of the Machine Learning Paper Reading Club (video) by TFUG Dhaka introduced ML Paper Reading Club and the group’s plan.

EDA on SpaceX Falcon 9 launches dataset (Kaggle) (video) by TFUG Mysuru & TFUG Chandigarh organizer Aashi Dutt (presenter) walked through exploratory data analysis on the SpaceX Falcon 9 launches dataset from Kaggle.

Screenshot of ML GDE Qinghua Duan (China) showing how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Introduction to MRC-style dialogue summaries based on BERT by ML GDE Qinghua Duan (China) shows how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.

Plant disease classification using a deep learning model by ML GDE Yannick Serge Obam Akou (Cameroon) presents an end-to-end Android app (open source project) that diagnoses plant diseases.

TensorFlow/Keras implementation of Nystromformer

Nystromformer GitHub repository by Rishit Dagli provides a TensorFlow/Keras implementation of Nystromformer, a transformer variant that uses the Nyström method to approximate standard self-attention with O(n) complexity, allowing better scalability.
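
To see where the O(n) comes from, the sketch below routes attention through a small set of m landmark keys instead of all n keys, so each factor costs O(n·m) rather than O(n²). It is deliberately simplified plain Python (the real Nystromformer inserts a pseudoinverse correction between the two softmax factors, which is omitted here):

```python
import math

def softmax_rows(mat):
    """Row-wise numerically stable softmax over a list-of-lists matrix."""
    out = []
    for row in mat:
        m = max(row)
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def matmul(a, b):
    """Plain-Python matrix multiply of list-of-lists matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def transpose(a):
    return [list(col) for col in zip(*a)]

def nystrom_attention(q, k, v, num_landmarks):
    """Simplified Nyström-style attention: attend through m landmark keys
    so the cost is O(n * m) instead of O(n^2)."""
    stride = max(1, len(k) // num_landmarks)
    landmarks = k[::stride][:num_landmarks]
    # n x m attention from queries to landmarks
    q_to_l = softmax_rows(matmul(q, transpose(landmarks)))
    # m x n attention from landmarks to all keys, aggregated over values: m x d
    l_summary = matmul(softmax_rows(matmul(landmarks, transpose(k))), v)
    return matmul(q_to_l, l_summary)
```

With m fixed, both factors grow linearly in the sequence length n, which is the scalability win the repository's implementation realizes in TensorFlow/Keras.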

Why startups are thriving in South East Asia and Pakistan

In the past few years, startups throughout South East Asia and Pakistan have been steadily growing and taking on the regions’ most pressing challenges. From agriculture to healthcare, these startups are building digital solutions to tackle their area of focus.

Recently, the digital audiences in these regions have expanded significantly. In South East Asia alone, 80 million new users have come online since March 2020, boosting activity for startups developing digital products and services across a variety of industries. And we’ve seen that growth as venture funding reached new heights in both South East Asia and Pakistan. In South East Asia, deal activity hit a record US$11.5 billion in the first half of 2021. Meanwhile in Pakistan, startups raised US$350 million in funding in 2021, five times the amount in 2020.

One explanation for this acceleration is that Pakistan and South East Asia both have a thriving youth population. More than half of the population of South East Asia is under 30 years old. In Pakistan too, the median age is only 22. These young people tend to be tech-savvy, have an interest in entrepreneurship, and are more in tune with global trends. With that mindset, they’re often more inclined to use emerging technologies like artificial intelligence and blockchain to solve problems and build digital products.

Government support across the regions has certainly helped as well, as local governments have recognised the critical role of startups in their economies — specifically in digital transformation and creating job opportunities. Government-driven initiatives like Thailand 4.0, Indonesia’s 1000 startups, Singapore’s Startup SG Founders, as well as Pakistan’s Prime Minister’s Youth Program, will continue to help aspiring founders get their startups off the ground.

Google is excited to nurture this next wave of tech startup founders with the Google for Startups Accelerator (South East Asia and Pakistan), particularly those that are focused on e-commerce, finance, healthcare, SME-focused B2B solutions, education, agriculture and logistics.

We’re looking for 10 to 15 startups based in Indonesia, Malaysia, Pakistan, Philippines, Singapore, Thailand or Vietnam, that are in the seed or Series A stage. The Accelerator will support these startups by providing the best of Google’s resources: Googler mentors, a network of new contacts to help them on their journeys, and cutting-edge technology.

Interested startups are encouraged to apply by Oct 7, 2022.

How Annabel turned her app idea into a growing business

One day, Annabel Angwenyi was running errands in Nairobi, Kenya when her car refused to start. She called her usual mechanic, but he was busy helping another customer on the opposite side of town. She knew there must be another mechanic close by, but because many local businesses don’t have an online presence, she had trouble finding and contacting someone else. Annabel was frustrated — but she also saw an opportunity to solve a problem.

After a lot of research, hard work and perseverance, she and her co-founder Patrick launched Ziada, an app that connects people across Kenya to local service providers. Today, Ziada has a team of seven people and over 60,000 downloads on Google Play.

Annabel is one of the founders featured in #WeArePlay, which spotlights the people behind Google Play’s apps and games. We chatted with Annabel to learn more about how she got Ziada up and running with no tech experience, and the impact it’s had on the local community.

How did you turn your idea into an app?

Patrick and I didn’t have any tech experience — we’re both business people. So in 2017, we partnered with a software developer who believed in our dream and helped us create the app. After a lot of hard work, we published the first version of Ziada on Google Play that same year. But it didn’t really take off. We weren’t sure if the Kenyan market was ready for something like this, so we took a break.

Then when the pandemic started in 2020, we noticed people wanted to access more things on demand and online, like food delivery and taxi services. So we rebranded the app, including improving the user interface to better reflect how we could help, and launched again. Now, our app has over 60,000 downloads on Google Play and is helping service providers across Kenya find new customers.

A person wearing a yellow short-sleeved shirt smiles and holds a phone showing the Ziada logo on the screen.

What impact has your app had on the community?

Kenya is an entrepreneurial nation, with people just like us wanting to build something for themselves. Having owned small businesses in the past, we knew the app had potential to help others grow their businesses. And it makes us so happy to see this actually happening. I’m also really proud of how we’re helping women — who make up 38% of service providers on Ziada — create their own income. I believe when you empower women, you empower the whole community. It’s something that’s really close to our hearts at Ziada. Most of our team are women, and many of us mentor young girls in the community. In fact, two of our mentees are joining Ziada as software developers.

A group of seven people sitting around a table, smiling and working on laptops.

Any advice for someone starting their own app or game business?

Just jump in. I think that initial leap of faith is the hardest one to make — it definitely was for me. The app or game will never be 100% perfect, and if you wait for that moment, the train may have already left (both in terms of user needs and market share).

If you have a working prototype or early version of your app, get it on Google Play and build hype around it. I was surprised at how patient our users were with Ziada in its early days, even with all its shortcomings. But that’s because they wanted it to work. If you’re providing a good solution to a problem, the adopters will come.

What’s next for Ziada?

We’re always working on new services, like helping contractors rent equipment and tools to complete jobs or providing coaching through our upcoming business advisory service. We also want to keep partnering with growing, local businesses and expand our user base — not just in Kenya, but across the African continent. There’s so much potential here, and we’re only just getting started.

Read more about Annabel and other app and game founders featured in #WeArePlay.

5 apps making their mark in Asia Pacific and beyond

Google Play turned 10 this year, and we’ve been keeping the celebrations going with local developer communities around the world. It’s an extra special occasion in Asia Pacific, which is home to one of the largest app developer populations (nearly a third of the 26.9 million app developers worldwide) and one of the most engaged audiences. In fact, people in Asia Pacific download and use mobile apps more than those in any other region.

Developers in Asia Pacific are reaching global audiences, with hundreds of millions of downloads outside the region. Some of these apps have become global names and inspired new trends on Play, like multiplayer gaming (Mobile Legends: Bang Bang), super apps (Grab), rapid delivery e-commerce (Coupang) and fintech solutions for the unbanked (Paytm).

Let’s take a closer look at some other emerging themes on Play — like mental health, news and music — where developers in Asia Pacific are making their mark globally.

Forest

Developer: Seekrtech, Taiwan

Listed on Play: August 2014

“The main goal of Forest is to encourage users to put down their phones and focus on the more important things around them,” says Shaokan Pi, CEO of Forest. Here’s how it works — you set a focus time period, whether you’re working at the office or at dinner with friends. Once you put down your phone, a virtual tree starts growing. If you stay focused (and don’t look at your phone), the sapling grows into a big tree. And you can earn virtual coins to grow more trees, and eventually a whole forest. There’s a real-world benefit, too — thanks to a partnership between Forest and Trees for the Future, you can spend your coins to plant real trees on Earth.

A group of seven people standing outside and holding a banner that says “Forest.”

The Forest team planting a tree in Kenya

SmartNews

Developer: SmartNews, Japan

Listed on Play: March 2013

SmartNews, which is also celebrating its 10th anniversary this year, uses artificial intelligence to collect and deliver a curated view of news from all over the world. But it’s not just an echo chamber — its News From All Sides feature shows people articles across a wide spectrum of political perspectives. SmartNews has also developed timely products like a COVID-19 dashboard and trackers for wildfires and hurricanes.

Evolve

Developer: Evolve, India

Listed on Play: July 2020

Evolve, a health-tech startup supporting the wellbeing of the LGBTQ+ community, landed on Google Play’s Best of 2021 list in India. The app offers educational content for members of the LGBTQ+ community, covering topics like embracing your sexuality and coming out to loved ones. “There is a need for more customized solutions for this community,” says Anshul Kamath, co-founder of Evolve. “We hope to provide a virtual safe space where members can work on themselves and specific challenges that impact their daily mental health.”

Four people smiling at the camera and holding a trophy

The Evolve team with their “Best of Play” trophy in 2021

Magic Tiles 3

Developer: Amanotes, Vietnam

Listed on Play: February 2017

This musical game app quickly found fans in the U.S., Japan, Brazil and Russia. Magic Tiles 3 is designed to let anyone — even those without a musical background — play instruments like the piano, guitar and drums on their smartphone. You can choose from over 1,000 songs across genres like pop, rap, jazz and electronic dance music, and compete in an interactive game with others around the world.

Mom Sitter

Developer: Mom Sitter, Korea

Listed on Play: September 2021

Mom Sitter, a platform connecting parents with babysitters, topped the Play Store’s childcare category in Korea last year. But it didn’t actually start as a mobile app. It was founded as a website to help parents find babysitters while they were at work or when daycare centers were too full. After attending the ChangGoo program, Google’s training program for developers and startups in Korea, the Mom Sitter team learned they could reach more people if they went mobile. Today, caretakers all over the world use their services. “Childcare issues concern not only working women but everyone who raises children, and it’s important that they can find support,” says Jeeyea Chung, founder of Mom Sitter.

Community in times of need: DevFest for Ukraine

Each year, Google Developer Groups (GDGs) come together for DevFest conferences around the world – not only to exchange knowledge and share experiences, but also to get inspired, celebrate the community and simply be together. It’s a cheerful gathering, focused both on technology and the people behind it.

GDGs in Ukraine organized the first DevFest in 2012. After 10 years of building a thriving community, 2022 turned out to be different for thousands of Ukrainian developers. Ever since the anti-aircraft sirens woke them up for the first time on February 24, many in the tech industry have been working non-stop for the sake of their country – helping refugees, providing medical assistance to those in need, and trying to work from bomb shelters. Luckily, they’re not alone.

Help from all sides

The developer community in Ukraine and abroad decided to use the DevFest conference to raise awareness and funds for those in need. "This time, because of the war in my country, DevFest Ukraine is happening for Ukraine," says Vitaliy Zasadnyy, co-founder of GDG Lviv. "It's a brilliant way to celebrate the future of technology, learn new things, connect with other tech experts and raise funds for a good cause."

Three people sitting at a table, speaking at a conference.

Fireside chat with Android team members in the London studio.

On July 14-15, DevFest for Ukraine gathered more than 20 industry-leading speakers over two days, featuring live streams from London and Lviv. From tech sessions and inspirational keynotes to networking and overviews of the latest developer tools, the event brought together people who shape the future of Android, Web and AI technologies.

Funds were raised for those in need by participants donating a sum of their choice to access the live stream and recordings after the event. Topics ranged from API design based on AndroidX libraries, to applied ML for Healthcare, to next-generation apps powered by machine learning with TensorFlow.js, and more. Check out the highlights video.

A woman at a laptop, sitting in a studio next to a large microphone.

Preparing the AI Stream livestream from the studio in Lviv, Ukraine.

Support the cause

All the funds raised during DevFest for Ukraine go to three NGOs that are supporting the country at this turbulent time. The goal was to provide humanitarian aid and direct assistance to affected families. The GDG Ukraine team carefully selected them to ensure efficient use of funds and transparent reporting.

And here’s the best part: DevFest for Ukraine raised over $130k for the cause so far, and counting! You can still access the recorded sessions to learn about the future of tech.

Prepare your app to support predictive back gestures

Posted by Jason Tang, Product Management, Diego Zuluaga, Developer Relations, and Michael Mauzy, Developer Documentation

Since we introduced gesture navigation in Android 10, users have signaled they want to understand where a back gesture will take them before they complete it.

As the first step to addressing this need, we've been developing a predictive back gesture. When a user starts their gesture by swiping back, we’ll show an animated preview of the destination UI, and the user can complete the gesture to navigate to that UI if they want – as shown in the following example.

Although the predictive back gesture won’t be visible to users in Android 13, we’re making an early version of the UI available as a developer option for testing starting in Beta 4. We plan to make the UI available to users in a future Android release, and we’d like all apps to be ready. We’re also working with partners to ensure it’s consistent across devices.

Read on for details on how to try out the new gesture and support it in your apps. Adding support for predictive back gesture is straightforward for most apps, and you can get started today.

We also encourage you to submit your feedback.

Try out the predictive back gesture in Beta 4

To try out the early version of the predictive back gesture available through the developer option, you’ll need to first update your app to support the predictive back gesture, and then enable the developer option.

Update your app to support predictive back gesture

To help make predictive back gesture helpful and consistent for users, we're moving to an ahead-of-time model for back event handling by adding new APIs and deprecating existing APIs.

The new platform APIs and updates to AndroidX Activity 1.6+ are designed to make your transition from unsupported APIs (KeyEvent#KEYCODE_BACK and OnBackPressed) to the predictive back gesture as smooth as possible.

The new platform APIs include OnBackInvokedCallback and OnBackInvokedDispatcher, which AndroidX Activity 1.6+ supports through the existing OnBackPressedCallback and OnBackPressedDispatcher APIs.

You can start testing this feature in two to four steps, depending on your existing implementation:


1. Upgrade to AndroidX Activity 1.6.0-alpha05. By upgrading your dependency on AndroidX Activity, APIs that are already using the OnBackPressedDispatcher APIs such as Fragments and the Navigation Component will seamlessly work when you opt-in for the predictive back gesture. 

// In your build.gradle file:
dependencies {

  // Add this in addition to your other dependencies
  implementation "androidx.activity:activity:1.6.0-alpha05"
}


2. Opt-in for the predictive back gesture. Opt-in your app by setting the android:enableOnBackInvokedCallback attribute to true at the application level in AndroidManifest.xml.

<application
    ...
    android:enableOnBackInvokedCallback="true"
    ... >
    ...
</application>


If your app doesn’t intercept the back event, you're done at this step.

Note: Opt-in is optional in Android 13, and it will be ignored after this version.

3. Create a callback to intercept the system Back button/event. If possible, we recommend using the AndroidX APIs as shown below. For non-AndroidX use cases, check the platform APIs mentioned above.

This snippet implements handleOnBackPressed and adds the OnBackPressedCallback to the OnBackPressedDispatcher at the activity level.

val onBackPressedCallback = object : OnBackPressedCallback(true) {
    override fun handleOnBackPressed() {
        // Your business logic to handle the back pressed event
    }
}

requireActivity().onBackPressedDispatcher
    .addCallback(onBackPressedCallback)


4. When your app is ready to stop intercepting the system Back event, disable the onBackPressedCallback callback.
 

onBackPressedCallback.isEnabled = webView.canGoBack()



Note: Your app may require using the platform APIs (OnBackInvokedCallback and OnBackInvokedDispatcher) to implement the predictive back gesture. Read our documentation for details.

Enable the developer option to test the predictive back gesture

Once you’ve updated your app to support the predictive back gesture, you can enable a developer option (supported in Android 13 Beta 4 and higher) to see it for yourself.

To test this animation, complete the following steps:
  1. On your device, go to Settings > System > Developer options.
  2. Select Predictive back animations.
  3. Launch your updated app, and use the back gesture to see it in action.

Learn more

In addition to our detailed documentation, try out our predictive back gesture codelab, which walks through an actual implementation.

If you need a refresher on system back and predictive back gesture on Android, we recommend watching Basics for System Back.


Thank you again for all the feedback and for being a part of the Android community. We love collaborating with you to provide the best experience for our users.

Machine Learning Communities: Q2 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore highlights and accomplishments of the vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!

TensorFlow/Keras

TFUG Agadir hosted the #MLReady phase as a part of #30DaysOfML. #MLReady aimed to equip attendees with the knowledge required to understand the different types of problems deep learning can solve, and to prepare them for the TensorFlow Certificate.

TFUG Taipei hosted basic Python and TensorFlow courses named From Python to TensorFlow. The aim of these events is to help everyone learn the basics of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared every week via a YouTube playlist.

TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk covered volume rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. At the event, ML GDE Aritra Roy Gosthipaty (India) gave a talk focused on breaking the concepts of the academic paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis into simpler, more digestible snippets.

TFUG Turkey, GDG Edirne and GDG Mersin organized TensorFlow Bootcamp 22, where ML GDE M. Yusuf Sarıgöz (Turkey) participated as a speaker with the session TensorFlow Ecosystem: Get most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow and showed how variables, tensors and operations interact with each other, and how auxiliary packages are built upon this skeleton.

TFUG Mumbai hosted its June Meetup, where 110 folks gathered. ML GDE Sayak Paul (India) and TFUG mentor Darshan Despande shared knowledge through their sessions, and in the accompanying ML workshops for beginners, participants built machine learning models without writing a single line of code.

ML GDE Hugo Zanini (Brazil) wrote Realtime SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.

ML GDE Gad Benram (Portugal) wrote Can a couple TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.
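
The underlying idea is simple: randomly perturb each training image so the model never sees exactly the same input twice. As an illustrative sketch of the technique in plain NumPy (not the code from Gad's post, and `augment` is a hypothetical helper name):

```python
import numpy as np

def augment(image, rng):
    """Randomly flip and rotate an HxWxC image array."""
    if rng.random() < 0.5:                              # random horizontal flip
        image = image[:, ::-1, :]
    image = np.rot90(image, k=int(rng.integers(0, 4)))  # rotate by 0/90/180/270 degrees
    return image

rng = np.random.default_rng(0)
img = np.arange(12, dtype=np.float32).reshape(2, 2, 3)
aug = augment(img, rng)
```

Each call produces a differently transformed view of the same image, which effectively enlarges the training set and curbs overfitting on the validation set.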

ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android mobile devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), explaining in five steps how to verify that the model focuses on the right tokens when classifying text. He also wrote about how to fine-tune a HuggingFace model for text classification using Tensorflow 2.0.
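
The intuition behind gradient explanations is that the inputs whose small perturbations change the class score the most are the ones the model relies on. A framework-free toy sketch of that idea using finite differences on a hypothetical linear-sigmoid "classifier" (this is an illustration of the principle, not Victor's TensorFlow implementation):

```python
import numpy as np

def model_score(x, w):
    # Toy "classifier": a linear score passed through a sigmoid.
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def saliency(x, w, eps=1e-5):
    """Approximate |d score / d x_i| with central finite differences."""
    grads = np.zeros_like(x)
    for i in range(x.size):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        grads[i] = (model_score(hi, w) - model_score(lo, w)) / (2 * eps)
    return np.abs(grads)

w = np.array([2.0, 0.0, -1.0])  # feature 0 matters most, feature 1 not at all
x = np.array([0.1, 0.5, 0.2])
s = saliency(x, w)
```

In a real text classifier the same gradient is taken with respect to token embeddings (via automatic differentiation rather than finite differences), and large-magnitude gradients flag the tokens driving the prediction.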

ML GDE Karthic Rao (India) released a new series ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.

ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, and distilling ViT models through attention. (code | pretrained model | tutorial)

ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They also provided tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)
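
One probing tool common in this line of work is attention rollout, which aggregates per-layer attention maps while accounting for residual connections. A minimal NumPy sketch of the computation (the shapes and the head-averaging choice here are illustrative assumptions, not taken from their code):

```python
import numpy as np

def attention_rollout(attentions):
    """Recursively multiply per-layer attention maps (averaged over heads),
    adding the identity to account for residual connections."""
    n = attentions[0].shape[-1]
    rollout = np.eye(n)
    for attn in attentions:                    # attn: (heads, tokens, tokens)
        a = attn.mean(axis=0)                  # average over heads
        a = a + np.eye(n)                      # residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = a @ rollout
    return rollout

rng = np.random.default_rng(0)
# 3 layers, each with 2 heads over 4 tokens (rows are valid attention distributions)
layers = [rng.dirichlet(np.ones(4), size=(2, 4)) for _ in range(3)]
rollout = attention_rollout(layers)
```

Row i of the result approximates how much each input token contributes to token i after all layers, which is what the rollout plots visualize.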

JAX/Flax

ML GDE Aakash Nain (India) gave a special talk, Introduction to JAX, for ML GDEs, TFUG organizers and ML community network organizers. He covered the fundamentals of JAX/Flax so that more people will try out JAX in the near future.

ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.
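
Of the three solutions, knowledge distillation is the easiest to sketch: the student is trained against a blend of the hard labels and the teacher's temperature-softened predictions. A minimal NumPy illustration of the classic distillation loss (the hyperparameters `t` and `alpha` are illustrative, not values from the cookbook):

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, t=4.0, alpha=0.1):
    """alpha * hard-label cross-entropy + (1 - alpha) * t^2 * KL(teacher || student)."""
    p_s = softmax(student_logits)
    hard = -np.log(p_s[np.arange(len(labels)), labels]).mean()
    p_t = softmax(teacher_logits, t)
    kl = (p_t * (np.log(p_t) - np.log(softmax(student_logits, t)))).sum(axis=-1).mean()
    return alpha * hard + (1 - alpha) * t**2 * kl

student = np.array([[2.0, 0.5, -1.0]])
teacher = np.array([[2.2, 0.4, -1.1]])
labels = np.array([0])
loss = distillation_loss(student, teacher, labels)
```

The t² factor keeps the soft-target gradients on the same scale as the hard-label ones, a convention from the original distillation formulation.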

ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the differences between JAX and TensorFlow.

ML GDE Martin Andrews (Singapore) shared a video, Practical JAX : Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)

ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempts to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.
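
At the heart of such volumetric rendering is the compositing rule: each sample along a ray contributes with weight w_i = T_i(1 - exp(-σ_i·δ_i)), where T_i is the transmittance accumulated before the sample. A minimal NumPy sketch of just this step (the densities and colors are made-up values, and this is not Soumik's JAX code):

```python
import numpy as np

def render_weights(sigmas, deltas):
    """Per-sample compositing weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the transmittance accumulated along the ray so far."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return trans * alphas

sigmas = np.array([0.1, 5.0, 0.1, 0.1])  # one dense sample along the ray
deltas = np.full(4, 0.25)                # spacing between samples
w = render_weights(sigmas, deltas)
# Pixel color = weighted sum of the per-sample colors.
color = w @ np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [0.5, 0.5, 0.5]])
```

The dense sample dominates the weights, which is exactly how a NeRF ray "stops" at a surface.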

Kaggle

ML GDEs’ Kaggle notebooks were announced as winners of the Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul and Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations and Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May; and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.

ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. The book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. At an online event, Luca and his co-author talked about how to compete on Kaggle.

ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up, covering the bias-variance tradeoff, validation sets, and cross-validation approaches. In the second post of the series, he showed more techniques using analogies and case studies.
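
The cross-validation approach he covers boils down to rotating a held-out fold through the data so every example is validated exactly once. A minimal NumPy sketch of k-fold splitting (the helper is illustrative, not code from the handbook):

```python
import numpy as np

def k_fold_indices(n, k, seed=0):
    """Yield (train_idx, val_idx) pairs for shuffled k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

splits = list(k_fold_indices(10, 5))
```

Averaging a model's score across the k validation folds gives a far more stable estimate than a single split, which is what protects against a leaderboard shake-up.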

TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for the interested participants. More than 60% of members were active during the whole program and many of them shared their completion certificates.

TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook which contains the implementation of the research paper: UNETR - Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.

TFX

ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and GitHub Actions, using two promising approaches: FastAPI (for REST) and TF Serving (for gRPC).

ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.

ML GDE Jun Jiang (China) had a talk introducing the concept of MLOps, the production-level end-to-end solutions of Google & TensorFlow, and how to use TFX to build the search and recommendation system & scientific research platform for large-scale machine learning training.

ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally, how to move a TFX pipeline from a local environment to Vertex AI, and provided code samples to adapt and get started with TFX.

TFUG São Paulo (Brazil) had a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put the models into production. They talked about the data structures in TFX and implementation of the first pipeline in TFX: ingesting and validating data.

TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX covering why, what and how you can effectively leverage MLOps best practices to scale ML efforts and had a look at how TFX can be used for designing and deploying ML pipelines.

Cloud AI

ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on GCP official blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in the MLOps system.

He also shared the GitHub repository, Continuous Adaptation with VertexAI's AutoML and Pipeline, which contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.

TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps and frameworks. Googler Andrew Ferlitsch had a talk about Google Cloud AI’s definition of the 8 stages of MLOps for enterprise scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.

Research

ML GDE Qinghua Duan (China) released a video introducing Google’s latest 540-billion-parameter model. He introduced the paper PaLM and described the basic training process and innovations.

ML GDE Rumei LI (China) wrote blog posts reviewing the papers on DeepMind's Flamingo and Google's PaLM.