

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager
Let’s explore highlights and accomplishments of vast Google Machine Learning communities over the third quarter of the year! We are enthusiastic and grateful about all the activities by the global network of ML communities. Here are the highlights!
Load-testing TensorFlow Serving’s REST Interface by ML GDE Sayak Paul (India) and Chansung Park (Korea) shares lessons learned from load-testing an image classification model across numerous deployment configurations.
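For context, TF Serving’s REST interface accepts JSON prediction requests like the one sketched below (a minimal illustration; the server URL and model name are hypothetical):

```python
import json

# TF Serving exposes predictions at /v1/models/<name>:predict and
# accepts a JSON body in "row" format under the "instances" key.
# The URL and model name below are hypothetical.
SERVER_URL = "http://localhost:8501/v1/models/classifier:predict"

def build_predict_request(batch):
    """Serialize a batch of inputs into TF Serving's row-format JSON."""
    return json.dumps({"instances": batch})

# One fake 2x2 single-channel "image".
payload = build_predict_request([[[0.1, 0.2], [0.3, 0.4]]])
print(json.loads(payload)["instances"][0][0])  # [0.1, 0.2]
```

A load test would then POST this payload repeatedly (e.g. with `requests.post(SERVER_URL, data=payload)`) while varying concurrency and deployment configuration.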
TFUG Taipei hosted events (Python + Hugging Face Translation + tf.keras.losses, Python + Object Detection, and Python + Hugging Face Token Classification + tf.keras.initializers) in September, helping community members learn how to use TensorFlow and Hugging Face to implement machine learning models that solve real problems.
Neural Machine Translation with Bahdanau’s Attention Using TensorFlow and Keras and the related video by ML GDE Aritra Roy Gosthipaty (India) explains the mathematical intuition behind neural machine translation.
Automated Deployment of TensorFlow Models with TensorFlow Serving and GitHub Actions by ML GDE Chansung Park (Korea) and Sayak Paul (India) explains how to automate TensorFlow model serving on Kubernetes with TensorFlow Serving and GitHub Actions.
Deploying 🤗 ViT on Kubernetes with TF Serving by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to scale the deployment of a ViT model from 🤗 Transformers using Docker and Kubernetes.
Long-term TensorFlow Guidance on tf.wiki Forum by ML GDE Xihan Li (China) provides TensorFlow guidance by answering the questions from Chinese developers on the forum.
Hindi Character Recognition on Android using TensorFlow Lite by ML GDE Nitin Tiwari (India) shares an end-to-end tutorial on training a custom computer vision model to recognize Hindi characters. At a TFUG Pune event, he also gave a presentation titled Building Computer Vision Model using TensorFlow: Part 1.
Using TFLite Model Maker to Complete a Custom Audio Classification App by ML GDE Xiaoxing Wang (China) shows how to use TFLite Model Maker to build a custom audio classification model based on YAMNet, and how to import and use YAMNet-based custom models in Android projects.
SoTA semantic segmentation in TF with 🤗 by ML GDE Sayak Paul (India) and Chansung Park (Korea) brings the SegFormer model, which was previously unavailable in TensorFlow, to the TF ecosystem.
Text Augmentation in Keras NLP by ML GDE Xiaoquan Kong (China) explains what text augmentation is and how the text augmentation feature in Keras NLP is designed.
The largest vision model checkpoint (public) in TF (10 Billion params) through 🤗 transformers by ML GDE Sayak Paul (India) and Aritra Roy Gosthipaty (India). The underlying model is RegNet, known for its ability to scale.
CryptoGANs open-source repository by ML GDE Dimitre Oliveira (Brazil) shows simple model implementations following TensorFlow best practices that can be extended to more complex use-cases. It connects the usage of TensorFlow with other relevant frameworks, like HuggingFace, Gradio, and Streamlit, building an end-to-end solution.
MLOps for Vision Models from 🤗 with TFX by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a vision model (TensorFlow) from 🤗 Transformers using the TF ecosystem.
First release of TFX Addons Package by ML GDE Hannes Hapke (United States). The package has been downloaded a few thousand times (source). Google and other developers maintain it through bi-weekly meetings. Google’s Open Source Peer Award has recognized the work.
TFUG São Paulo hosted TFX T1 | E4 & TFX T1 | E5, where ML GDE Vinicius Caridá (Brazil) shared how to train a model in a TFX pipeline. The fifth episode covers Pusher: publishing your models with TFX.
Semantic Segmentation model within ML pipeline by ML GDE Chansung Park (Korea) and Sayak Paul (India) shows how to build a machine learning pipeline for a semantic segmentation task with TFX and various GCP products such as Vertex AI Pipelines, Training, and Endpoints.
JAX Tutorial by ML GDE Phillip Lippe (Netherlands) is meant to briefly introduce JAX, including writing and training neural networks with Flax.
TFUG Malaysia hosted Introduction to JAX for Machine Learning (video) and Leong Lai Fong gave a talk. The attendees learned what JAX is and its fundamental yet unique features, which make it efficient to use when executing deep learning workloads. After that, they started training their first JAX-powered deep learning model.
TFUG Taipei hosted Python + JAX + Image classification, covering the differences between JAX and NumPy, the advantages of JAX, and how to use it in Colab.
Introduction to JAX by ML GDE João Araújo (Brazil) covered the basics of JAX at Deep Learning Indaba 2022.
Should I change from NumPy to JAX? by ML GDE Gad Benram (Portugal) compares the performance of the two libraries and gives an overview of the issues that may arise when switching from NumPy to JAX.
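For a flavor of what such a switch involves, here is a minimal sketch contrasting the two APIs (illustrative only: jax.numpy mirrors most of NumPy, but arrays are immutable and pure functions can be jit-compiled):

```python
import numpy as np
import jax.numpy as jnp
from jax import jit

# jax.numpy is largely a drop-in replacement for numpy.
x_np = np.arange(4.0)
x_jx = jnp.arange(4.0)

# One key difference: JAX arrays are immutable, so "in-place" updates
# use the functional .at[...] syntax instead of item assignment.
x_np[0] = 10.0
x_jx = x_jx.at[0].set(10.0)

# jit compiles a pure function with XLA -- a common reason to switch.
@jit
def sum_of_squares(v):
    return (v ** 2).sum()

print(float(sum_of_squares(x_jx)))  # 10^2 + 1 + 4 + 9 = 114.0
```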
Introduction to JAX: efficient and reproducible ML framework by ML GDE Seunghyun Lee (Korea) introduced JAX/Flax and their key features using practical examples. He explained pure functions and PRNG handling, which make JAX explicit and reproducible, and XLA and mapping functions, which make JAX fast and easily parallelized.
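The explicit PRNG handling covered in such introductions is easy to see in a few lines (a minimal sketch; the seed and shapes are arbitrary):

```python
import jax
import jax.numpy as jnp

# JAX random functions are pure: the same key always yields the same
# numbers, which makes experiments reproducible by construction.
key = jax.random.PRNGKey(42)
a = jax.random.normal(key, (3,))
b = jax.random.normal(key, (3,))
print(bool(jnp.all(a == b)))  # True: same key -> identical samples

# To get fresh randomness, split the key explicitly.
key, subkey = jax.random.split(key)
c = jax.random.normal(subkey, (3,))
print(bool(jnp.all(a == c)))  # almost surely False: different key
```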
Data2Vec Style pre-training in JAX by ML GDE Vasudev Gupta (India) shares a tutorial demonstrating how to pre-train Data2Vec using the JAX/Flax version of HuggingFace Transformers.
Distributed Machine Learning with JAX by ML GDE David Cardozo (Canada) explained what makes JAX different from TensorFlow.
Image classification with JAX & Flax by ML GDE Derrick Mwiti (Kenya) explains how to build convolutional neural networks with JAX/Flax. He also wrote several articles about JAX/Flax: What is JAX?, How to load datasets in JAX with TensorFlow, Optimizers in JAX and Flax, Flax vs. TensorFlow, and more.
DDPMs - Part 1 by ML GDE Aakash Nain (India) and cait-tf by ML GDE Sayak Paul (India) were announced as Kaggle ML Research Spotlight Winners.
Fresher on Random Variables, All you need to know about Gaussian distribution, and A deep dive into DDPMs by ML GDE Aakash Nain (India) explain the fundamentals of diffusion models.
In Grandmasters Journey on Kaggle + The Kaggle Book, ML GDE Luca Massaron (Italy) explained how Kaggle helps people in the data science industry and which skills you must focus on apart from the core technical skills.
How Cohere is accelerating language model training with Google Cloud TPUs by ML GDE Joanna Yoo (Canada) explains what Cohere engineers have done to solve scaling challenges in large language models (LLMs).
In Using machine learning to transform finance with Google Cloud and Digits, ML GDE Hannes Hapke (United States) chats with Fillipo Mandella, Customer Engineering Manager at Google, about how Digits leverages Google Cloud’s machine learning tools to empower accountants and business owners with near-zero latency.
TFUG Chennai hosted A tour of Vertex AI for ML, cloud, and DevOps engineers working in MLOps. The session introduced Vertex AI and covered handling datasets and models in Vertex AI, deployment & prediction, and MLOps.
TFUG Abidjan hosted two events with GDG Cloud Abidjan for students and professional developers who want to prepare for a Google Cloud certification: Introduction session to certifications and Q&A, Certification Study Group.
Deploying 🤗 ViT on Vertex AI by ML GDE Sayak Paul (India) and Chansung Park (Korea) shows how to deploy a ViT B/16 model on Vertex AI. They cover some critical aspects of a deployment such as auto-scaling, authentication, endpoint consumption, and load-testing.
TFUG Singapore hosted The World of Diffusion - DALL-E 2, IMAGEN & Stable Diffusion. ML GDE Martin Andrews (Singapore) and Sam Witteveen (Singapore) gave talks named “How Diffusion Works” and “Investigating Prompt Engineering on Diffusion Models” to bring people up-to-date with what has been going on in the world of image generation.
ML GDE Martin Andrews (Singapore) has completed three projects: GCP VM with Nvidia set-up and Convenience Scripts; Containers within a GCP host server, with Nvidia pass-through; and Installing MineRL using Containers, with linked code.
Jupyter Services on Google Cloud by ML GDE Gad Benram (Portugal) explains the differences between Vertex AI Workbench, Colab, and Deep Learning VMs.
Train and Deploy Google Cloud's Two Towers Recommender by ML GDE Rubens de Almeida Zimbres (Brazil) explains how to implement the model and deploy it in Vertex AI.
The first session of #MLPaperReadingClubs (video) by ML GDE Nathaly Alarcon Torrico (Bolivia) and Women in Data Science La Paz. Nathaly led the session, and the community members participated in reading the ML paper “Zero-shot learning through cross-modal transfer.”
In #MLPaperReadingClubs (video) by TFUG Lesotho, Arnold Raphael volunteered to lead the first session “Zero-shot learning through cross-modal transfer.”
ML Paper Reading Clubs #1: Zero Shot Learning Paper (video) by TFUG Agadir introduced a model that can recognize objects in images even if no training data is available for the objects. TFUG Agadir prepared this event to make people interested in machine learning research and provide them with a broader vision of differentiating good contributions from great ones.
Opening of the Machine Learning Paper Reading Club (video) by TFUG Dhaka introduced ML Paper Reading Club and the group’s plan.
In EDA on SpaceX Falcon 9 launches dataset (Kaggle) (video), TFUG Mysuru & TFUG Chandigarh organizer Aashi Dutt presented a walkthrough of exploratory data analysis on the SpaceX Falcon 9 launches dataset from Kaggle.
Introduction to MRC-style dialogue summaries based on BERT by ML GDE Qinghua Duan (China) shows how to apply the MRC paradigm and BERT to solve the dialogue summarization problem.
Plant disease classification using Deep learning model by ML GDE Yannick Serge Obam Akou (Cameroon) presents an end-to-end Android app (open source project) that diagnoses plant diseases with a deep learning model.
Nystromformer Github repository by Rishit Dagli provides a TensorFlow/Keras implementation of Nystromformer, a transformer variant that uses the Nyström method to approximate standard self-attention with O(n) complexity, allowing for better scalability.
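The Nyström idea can be sketched in a few lines of NumPy (a rough illustration using segment-mean landmarks and an exact pseudo-inverse, not the repository’s actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=8):
    """Nystrom-approximated self-attention: O(n*m) memory vs O(n^2).
    Landmarks are segment means of Q and K (n must be divisible by m)."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    Q_l = Q.reshape(m, n // m, d).mean(axis=1)   # (m, d) landmark queries
    K_l = K.reshape(m, n // m, d).mean(axis=1)   # (m, d) landmark keys
    F = softmax(Q @ K_l.T * scale)               # (n, m)
    A = softmax(Q_l @ K_l.T * scale)             # (m, m)
    B = softmax(Q_l @ K.T * scale)               # (m, n)
    # softmax(QK^T) ~= F @ pinv(A) @ B; the (n, n) matrix is never formed.
    return F @ np.linalg.pinv(A) @ (B @ V)

rng = np.random.default_rng(0)
n, d = 64, 16
Q, K, V = rng.normal(size=(3, n, d))
out = nystrom_attention(Q, K, V)
print(out.shape)  # (64, 16)
```

The paper replaces the exact pseudo-inverse with a cheap iterative approximation; exact `pinv` is used here only to keep the sketch short.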
In the past few years, startups throughout South East Asia and Pakistan have been steadily growing and taking on the regions’ most pressing challenges. From agriculture to healthcare, these startups are building digital solutions to tackle their area of focus.
Recently, the digital audiences in these regions have expanded significantly. In South East Asia alone, 80 million new users have come online since March 2020, boosting activity for startups developing digital products and services across a variety of industries. And we’ve seen that growth as venture funding reached new heights in both South East Asia and Pakistan. In South East Asia, deal activity hit a record US$11.5 billion in the first half of 2021. Meanwhile in Pakistan, startups raised US$350 million in funding in 2021, five times the amount in 2020.
One explanation for this acceleration is that Pakistan and South East Asia both have a thriving youth population. More than half of the population of South East Asia is under 30 years old. In Pakistan too, the median age is only 22. These young people tend to be tech-savvy, have an interest in entrepreneurship, and are more in tune with global trends. With that mindset, they’re often more inclined to use emerging technologies like artificial intelligence and blockchain to solve problems and build digital products.
Government support across the regions has certainly helped as well, as local governments have recognised the critical role of startups in their economies — specifically in digital transformation and creating job opportunities. Government-driven initiatives like Thailand 4.0, Indonesia’s 1000 startups, Singapore’s Startup SG Founders, as well as Pakistan’s Prime Minister’s Youth Program, will continue to help aspiring founders get their startups off the ground.
Google is excited to nurture this next wave of tech startup founders with the Google for Startups Accelerator (South East Asia and Pakistan), particularly those that are focused on e-commerce, finance, healthcare, SME-focused B2B solutions, education, agriculture and logistics.
We’re looking for 10 to 15 startups based in Indonesia, Malaysia, Pakistan, Philippines, Singapore, Thailand or Vietnam, that are in the seed or Series A stage. The Accelerator will support these startups by providing the best of Google’s resources: Googler mentors, a network of new contacts to help them on their journeys, and cutting-edge technology.
Interested startups are encouraged to apply by Oct 7, 2022.
One day, Annabel Angwenyi was running errands in Nairobi, Kenya when her car refused to start. She called her usual mechanic, but he was busy helping another customer on the opposite side of town. She knew there must be another mechanic close by, but because many local businesses don’t have an online presence, she had trouble finding and contacting someone else. Annabel was frustrated — but she also saw an opportunity to solve a problem.
After a lot of research, hard work and perseverance, she and her co-founder Patrick launched Ziada, an app that connects people across Kenya to local service providers. Today, Ziada has a team of seven people and over 60,000 downloads on Google Play.
Annabel is one of the founders featured in #WeArePlay, which spotlights the people behind Google Play’s apps and games. We chatted with Annabel to learn more about how she got Ziada up and running with no tech experience, and the impact it’s had on the local community.
How did you turn your idea into an app?
Patrick and I didn’t have any tech experience — we’re both business people. So in 2017, we partnered with a software developer who believed in our dream and helped us create the app. After a lot of hard work, we published the first version of Ziada on Google Play that same year. But it didn’t really take off. We weren’t sure if the Kenyan market was ready for something like this, so we took a break.
Then when the pandemic started in 2020, we noticed people wanted to access more things on demand and online, like food delivery and taxi services. So we rebranded the app, including improving the user interface to better reflect how we could help, and launched again. Now, our app has over 60,000 downloads on Google Play and is helping service providers across Kenya find new customers.
What impact has your app had on the community?
Kenya is an entrepreneurial nation, with people just like us wanting to build something for themselves. Having owned small businesses in the past, we knew the app had potential to help others grow their businesses. And it makes us so happy to see this actually happening. I’m also really proud of how we’re helping women — who make up 38% of service providers on Ziada — create their own income. I believe when you empower women, you empower the whole community. It’s something that’s really close to our hearts at Ziada. Most of our team are women, and many of us mentor young girls in the community. In fact, two of our mentees are joining Ziada as software developers.
Any advice for someone starting their own app or game business?
Just jump in. I think that initial leap of faith is the hardest one to make — it definitely was for me. The app or game will never be 100% perfect, and if you wait for that moment, the train may have already left (both in terms of user needs and market share).
If you have a working prototype or early version of your app, get it on Google Play and build hype around it. I was surprised at how patient our users were with Ziada in its early days, even with all its shortcomings. But that’s because they wanted it to work. If you’re providing a good solution to a problem, the adopters will come.
What’s next for Ziada?
We’re always working on new services, like helping contractors rent equipment and tools to complete jobs or providing coaching through our upcoming business advisory service. We also want to keep partnering with growing, local businesses and expand our user base — not just in Kenya, but across the African continent. There’s so much potential here, and we’re only just getting started.
Read more about Annabel and other app and game founders featured in #WeArePlay.
Google Play turned 10 this year, and we’ve been keeping the celebrations going with local developer communities around the world. It’s an extra special occasion in Asia Pacific, which is home to one of the largest app developer populations (nearly a third of the 26.9 million app developers worldwide) and one of the most engaged audiences. In fact, people in Asia Pacific download and use mobile apps more than any other region.
Developers in Asia Pacific are reaching global audiences, with hundreds of millions of downloads outside the region. Some of these apps have become global names and inspired new trends on Play, like multiplayer gaming (Mobile Legends: Bang Bang), super apps (Grab), rapid delivery e-commerce (Coupang) and fintech solutions for the unbanked (Paytm).
Let’s take a closer look at some other emerging themes on Play — like mental health, news and music — where developers in Asia Pacific are making their mark globally.
Developer: Seekrtech, Taiwan
Listed on Play: August 2014
“The main goal of Forest is to encourage users to put down their phones and focus on the more important things around them,” says Shaokan Pi, CEO of Forest. Here’s how it works — you set a focus time period, whether you’re working at the office or at dinner with friends. Once you put down your phone, a virtual tree starts growing. If you stay focused (and don’t look at your phone), the sapling grows into a big tree. And you can earn virtual coins to grow more trees, and eventually a whole forest. There’s a real-world benefit, too — thanks to a partnership between Forest and Trees for the Future, you can spend your coins to plant real trees on Earth.
The Forest team planting a tree in Kenya
Developer: SmartNews, Japan
Listed on Play: March 2013
SmartNews, which is also celebrating its 10th anniversary this year, uses artificial intelligence to collect and deliver a curated view of news from all over the world. But it’s not just an echo chamber — its News From All Sides feature shows people articles across a wide spectrum of political perspectives. SmartNews has also developed timely products like a COVID-19 dashboard and trackers for wildfires and hurricanes.
Developer: Evolve, India
Listed on Play: July 2020
Evolve, a health-tech startup supporting the wellbeing of the LGBTQ+ community, landed on Google Play’s Best of 2021 list in India. The app offers educational content for members of the LGBTQ+ community, covering topics like embracing your sexuality and coming out to loved ones. “There is a need for more customized solutions for this community,” says Anshul Kamath, co-founder of Evolve. “We hope to provide a virtual safe space where members can work on themselves and specific challenges that impact their daily mental health.”
The Evolve team with their “Best of Play” trophy in 2021
Developer: Amanotes, Vietnam
Listed on Play: February 2017
This musical game app quickly found fans in the U.S., Japan, Brazil and Russia. Magic Tiles 3 is designed to let anyone — even those without a musical background — play instruments like the piano, guitar and drums on their smartphone. You can choose from over 1,000 songs across genres like pop, rap, jazz and electronic dance music, and compete in an interactive game with others around the world.
Developer: Mom Sitter, Korea
Listed on Play: September 2021
Mom Sitter, a platform connecting parents with babysitters, topped the Play Store’s childcare category in Korea last year. But it didn’t actually start as a mobile app. It was founded as a website to help parents find babysitters while they were at work or when daycare centers were too full. After attending the ChangGoo program, Google’s training program for developers and startups in Korea, the Mom Sitter team learned they could reach more people if they went mobile. Today, caretakers all over the world use their services. “Childcare issues concern not only working women but everyone who raises children, and it’s important that they can find support,” says Jeeyea Chung, founder of Mom Sitter.
Each year, Google Developer Groups (GDGs) come together for DevFest conferences around the world – not only to exchange knowledge and share experiences, but also to get inspired, celebrate the community and simply be together. It’s a cheerful gathering, focused both on technology and the people behind it.
GDGs in Ukraine organized the first DevFest in 2012. After 10 years of building a thriving community, 2022 turned out to be different for thousands of Ukrainian developers. Ever since the anti-aircraft sirens woke them up for the first time on February 24, many in the tech industry have been working non-stop for the sake of their country – helping refugees, providing medical assistance to those in need, and trying to work from bomb shelters. Luckily, they’re not alone.
The developer community in Ukraine and abroad decided to use the DevFest conference to raise awareness and funds for those in need. "This time, because of the war in my country, DevFest Ukraine is happening for Ukraine," says Vitaliy Zasadnyy, co-founder of GDG Lviv. "It's a brilliant way to celebrate the future of technology, learn new things, connect with other tech experts and raise funds for a good cause."
Fireside chat with Android team members in the London studio.
On July 14-15, DevFest for Ukraine gathered more than 20 industry-leading speakers over two days, featuring live streams from London and Lviv. From tech sessions and inspirational keynotes to networking and overviews of the latest developer tools, the event brought together people who shape the future of Android, Web and AI technologies.
Funds were raised for those in need by participants donating a sum of their choice to access the live stream and recordings after the event. Topics ranged from API design based on AndroidX libraries, to applied ML for Healthcare, to next-generation apps powered by machine learning with TensorFlow.js, and more. Check out the highlights video.
Preparing the AI Stream livestream from the studio in Lviv, Ukraine.
All the funds raised during DevFest for Ukraine go to three NGOs that are supporting the country at this turbulent time. The goal was to provide humanitarian aid and direct assistance to affected families. The GDG Ukraine team carefully selected them to ensure efficient use of funds and transparent reporting.
And here’s the best part: DevFest for Ukraine raised over $130k for the cause so far, and counting! You can still access the recorded sessions to learn about the future of tech.
Posted by Jason Tang, Product Management, Diego Zuluaga, Developer Relations, and Michael Mauzy, Developer Documentation
Since we introduced gesture navigation in Android 10, users have signaled they want to understand where a back gesture will take them before they complete it.
As the first step to addressing this need, we've been developing a predictive back gesture. When a user starts their gesture by swiping back, we’ll show an animated preview of the destination UI, and the user can complete the gesture to navigate to that UI if they want – as shown in the following example.
Although the predictive back gesture won’t be visible to users in Android 13, we’re making an early version of the UI available as a developer option for testing starting in Beta 4. We plan to make the UI available to users in a future Android release, and we’d like all apps to be ready. We’re also working with partners to ensure it’s consistent across devices.
Read on for details on how to try out the new gesture and support it in your apps. Adding support for predictive back gesture is straightforward for most apps, and you can get started today.
We also encourage you to submit your feedback.
To try out the early version of the predictive back gesture available through the developer option, you’ll need to first update your app to support the predictive back gesture, and then enable the developer option.
To help make predictive back gesture helpful and consistent for users, we're moving to an ahead-of-time model for back event handling by adding new APIs and deprecating existing APIs.
The new platform APIs and updates to AndroidX Activity 1.6+ are designed to make your transition from unsupported APIs (KeyEvent#KEYCODE_BACK and OnBackPressed) to the predictive back gesture as smooth as possible.
The new platform APIs include OnBackInvokedCallback and OnBackInvokedDispatcher, which AndroidX Activity 1.6+ supports through the existing OnBackPressedCallback and OnBackPressedDispatcher APIs.
You can start testing this feature in two to four steps, depending on your existing implementation.
To begin testing this feature:
1. Upgrade to AndroidX Activity 1.6.0-alpha05. By upgrading your dependency on AndroidX Activity, APIs that are already using the OnBackPressedDispatcher APIs such as Fragments and the Navigation Component will seamlessly work when you opt-in for the predictive back gesture.
2. Opt-in for the predictive back gesture. Opt-in your app by setting the android:enableOnBackInvokedCallback flag to true at the application level in AndroidManifest.xml.
If your app doesn’t intercept the back event, you're done at this step.
Note: Opt-in is optional in Android 13, and it will be ignored after this version.
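The opt-in from step 2 is a single manifest attribute; a minimal sketch:

```xml
<!-- AndroidManifest.xml: opt in to the predictive back gesture -->
<application
    android:enableOnBackInvokedCallback="true">
    <!-- activities, providers, etc. -->
</application>
```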
3. Create a callback to intercept the system Back button/event. If possible, we recommend using the AndroidX APIs as shown below. For non-AndroidX use cases, check the platform API mentioned above.
This snippet implements handleOnBackPressed and adds the OnBackPressedCallback to the OnBackPressedDispatcher at the activity level.
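A minimal sketch of what that snippet might look like with AndroidX Activity 1.6+ (the handling logic is app-specific):

```kotlin
// In your Activity's onCreate: register a callback that intercepts
// the system Back gesture while it is enabled.
val onBackPressedCallback = object : OnBackPressedCallback(true) {
    override fun handleOnBackPressed() {
        // Custom back handling, e.g. show a confirmation dialog.
    }
}
onBackPressedDispatcher.addCallback(this, onBackPressedCallback)

// Later, to stop intercepting Back (see step 4):
// onBackPressedCallback.isEnabled = false
```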
4. When your app is ready to stop intercepting the system Back event, disable the onBackPressedCallback callback.
Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager
Let’s explore highlights and accomplishments of vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic and grateful about all the activities by the global network of ML communities. Here are the highlights!
TFUG Agadir hosted #MLReady phase as a part of #30DaysOfML. #MLReady aimed to prepare the attendees with the knowledge required to understand the different types of problems which deep learning can solve, and helped attendees be prepared for the TensorFlow Certificate.
TFUG Taipei hosted basic Python and TensorFlow courses named From Python to TensorFlow. These events aim to help everyone learn the basics of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared every week via a YouTube playlist.
TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk included Volume Rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. In the event, ML GDE Aritra Roy Gosthipaty (India) had a talk focusing on breaking the concepts of the academic paper, NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis into simpler and more ingestible snippets.
TFUG Turkey, GDG Edirne and GDG Mersin organized a TensorFlow Bootcamp 22 and ML GDE M. Yusuf Sarıgöz (Turkey) participated as a speaker, TensorFlow Ecosystem: Get most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow, how variables, tensors and operations interact with each other, and how auxiliary packages are built upon this skeleton.
TFUG Mumbai hosted its June Meetup, which gathered 110 attendees. ML GDE Sayak Paul (India) and TFUG mentor Darshan Despande shared knowledge through their sessions, and in the beginner-oriented ML workshops, participants built machine learning models without writing a single line of code.
ML GDE Hugo Zanini (Brazil) wrote Realtime SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.
ML GDE Gad Benram (Portugal) wrote Can a couple TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.
ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), which explains in five steps how to verify that the model focuses on the right tokens when classifying text. He also wrote about how to fine-tune a HuggingFace model for text classification using Tensorflow 2.0.
ML GDE Karthic Rao (India) released a new series ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.
ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, distilling ViT models through attention. (code | pretrained model | tutorial)
ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They provide tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)
ML GDE Aakash Nain (India) had a special talk, Introduction to JAX for ML GDEs, TFUG organizers and ML community network organizers. He covered the fundamentals of JAX/Flax so that more and more people try out JAX in the near future.
ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.
ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the difference between JAX and Tensorflow.
ML GDE Martin Andrews (Singapore) shared a video, Practical JAX : Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)
ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempts to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.
ML GDEs’ Kaggle notebooks were announced as the winner of Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul & Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations; Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.
ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. This book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. And in the online event, Luca and the co-author talked about how to compete on Kaggle.
ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up, covering the bias-variance tradeoff, validation sets, and cross-validation approaches. In the second post of the series, he showed more techniques using analogies and case studies.
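Cross-validation is the key defense against a shake-up: instead of trusting one validation split, each sample gets exactly one turn in a held-out fold. A minimal, dependency-free sketch of the k-fold split (in practice one would shuffle first and use e.g. scikit-learn's `KFold`, possibly stratified):

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k disjoint validation folds.

    Fold sizes differ by at most one when n_samples is not
    divisible by k, so every sample is validated exactly once.
    """
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        # The first `remainder` folds take one extra sample each.
        size = fold_size + (1 if i < remainder else 0)
        folds.append(indices[start:start + size])
        start += size
    return folds

folds = k_fold_indices(10, k=3)
print([len(f) for f in folds])  # → [4, 3, 3]
```

Averaging the score over all k held-out folds gives a much more stable estimate of public-to-private leaderboard generalization than a single split.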
TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for interested participants. More than 60% of members were active during the whole program, and many of them shared their completion certificates.
TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook implementing the research paper UNETR: Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.
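Segmentation models like UNETR are commonly evaluated with the Dice coefficient, which measures overlap between the predicted and ground-truth masks. A minimal sketch of the metric on flattened binary masks (the toy masks here are invented, not from the competition data):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice score between two binary masks (flattened lists of 0/1).

    Dice = 2|P ∩ T| / (|P| + |T|); the small eps keeps the score
    defined when both masks are empty. 1.0 means perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 0, 1]
print(round(dice_coefficient(pred, target), 2))  # → 0.8
```

The same expression, made differentiable over soft probabilities, is also widely used as a training loss for medical segmentation.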
ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and GitHub Actions, using two promising approaches: FastAPI (for REST) and TF Serving (for gRPC).
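On the REST side, TF Serving also exposes an HTTP predict endpoint: a POST to `/v1/models/<name>:predict` with a JSON body of the form `{"instances": [...]}`. A minimal sketch of constructing such a request (the model name and server address are hypothetical placeholders; the sketch only builds the request, it does not send it):

```python
import json

MODEL_NAME = "my_model"             # hypothetical model name
SERVER = "http://localhost:8501"    # TF Serving's default REST port

def build_predict_request(batch):
    """Build the URL and JSON body for TF Serving's REST predict API.

    `batch` is a list of input instances, e.g. one feature vector
    per element; TF Serving wraps results in a "predictions" key.
    """
    url = f"{SERVER}/v1/models/{MODEL_NAME}:predict"
    body = json.dumps({"instances": batch})
    return url, body

url, body = build_predict_request([[1.0, 2.0, 3.0]])
print(url.endswith(":predict"))  # → True
```

In their articles the gRPC surface is preferred for high-throughput serving, while a REST front end like this (or FastAPI) is simpler to call from arbitrary clients.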
ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.
ML GDE Jun Jiang (China) gave a talk introducing the concept of MLOps, the production-level end-to-end solutions from Google and TensorFlow, and how to use TFX to build search and recommendation systems and a scientific research platform for large-scale machine learning training.
ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally and how to move a TFX pipeline from a local environment to Vertex AI, and provided code samples for getting started with TFX.
TFUG São Paulo (Brazil) held a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put models into production, discussing the data structures in TFX and the implementation of a first TFX pipeline: ingesting and validating data.
TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX, covering why, what, and how to effectively leverage MLOps best practices to scale ML efforts, and looked at how TFX can be used for designing and deploying ML pipelines.
ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on the official GCP blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in an MLOps system.
He also shared the GitHub repository Continuous Adaptation with VertexAI's AutoML and Pipeline, which contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.
TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps, and frameworks. Googler Andrew Ferlitsch gave a talk about Google Cloud AI’s definition of the eight stages of MLOps for enterprise-scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.
ML GDE Qinghua Duan (China) released a video introducing Google’s latest 540-billion-parameter model. He introduced the PaLM paper and described its basic training process and innovations.
ML GDE Rumei Li (China) wrote blog posts reviewing two papers, DeepMind's Flamingo and Google's PaLM.
Posted by Maru Ahues Bouza, Director, Android Developer Relations
We’re just a few weeks away from the official release of Android 13! As we put the finishing touches on the next version of Android, today we’re bringing you Beta 4, a final update for your testing and development. Now is the time to make sure your apps are ready!
There’s a lot to explore in Android 13, from privacy features like the new notification permission and photo picker, to productivity features like themed app icons and per-app language support, as well as modern standards like HDR video, Bluetooth LE Audio, and MIDI 2.0 over USB. We’ve also extended the updates we made in 12L, giving you better tools to take advantage of tablet and large screen devices.
You can try Beta 4 today on your Pixel device by enrolling here for over-the-air updates. If you previously enrolled, you’ll automatically get today’s update. You can also get Android 13 Beta on select devices from several of our partners. Visit the Android 13 developer site for details.
Watch for more information on the official Android 13 release coming soon!
Today’s update includes a release candidate build of Android 13 for Pixel devices and the Android Emulator. We reached Platform Stability at Beta 3, so all app-facing surfaces are final, including SDK and NDK APIs, app-facing system behaviors, and restrictions on non-SDK interfaces. With these and the latest fixes and optimizations, Beta 4 gives you everything you need to complete your testing.
With the official Android 13 release just ahead, we’re asking all app and game developers to complete your final compatibility testing and publish your compatibility updates ahead of the final release. For SDK, library, tools, and game engine developers, it’s important to release your compatible updates as soon as possible, since your downstream app and game developers may be blocked until they receive your updates.
To test your app for compatibility, just install it on a device running Android 13 Beta 4 and work through the app flows, looking for any functional or UI issues. Review the Android 13 behavior changes for all apps to focus on areas where your app could be affected. Here are some of the top changes to test:
Remember to test the libraries and SDKs in your app for compatibility. If you find any SDK issues, try updating to the latest version of the SDK or reaching out to the developer for help.
Once you’ve published the compatible version of your current app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 13 and use the compatibility framework to help detect issues quickly.
Android 13 builds on the tablet optimizations introduced in 12L, so as part of your testing, make sure your apps look their best on tablets and other large-screen devices. You can test large-screen features by setting up an Android emulator in Android Studio, or you can use a large screen device from our Android 13 Beta partners. Here are some areas to watch for:
You can read more about the tablet features in Android 13 and what to test here.
Today’s Beta 4 release has everything you need to test your app and try the Android 13 features. Just enroll your Pixel device to get the update over-the-air. To get started, set up the Android 13 SDK.
You can also test your app with Android 13 Beta on devices from several of our partners. Visit android.com/beta to see the full list of partners, with links to their sites for details on their supported devices and Beta builds, starting with Beta 1. Each partner will handle their own enrollments and support, and provide the Beta updates to you directly. For even broader testing, you can try Beta 4 on Android GSI images, and if you don’t have a device, you can test on the Android Emulator. For complete details on Android 13, visit the Android 13 developer site.
Watch for information on the official Android 13 launch coming in the weeks ahead! Until then, feel free to continue sharing your feedback through our hotlists for platform issues, app compatibility issues, and third-party SDK issues.
A huge thank you to our developer community for helping shape the Android 13 release! You’ve given us thousands of bug reports and shared insights that have helped us optimize APIs, improve features, fix significant bugs, and in general make the platform better for users and developers.
We’re looking forward to seeing your apps on Android 13!
Annalisa Arcella, a scientist based in London, has spent her career in data science, using her technical expertise to collaborate with people in different sectors and from different backgrounds. But a project in March 2021 led her in a new direction: working with cloud technology and ultimately becoming a Women Techmakers Ambassador.
She started working on a public sector project in London whose goal was to analyze thousands of responses to public policy consultations. Because it was her first time using the technology, she collaborated on the project with a customer engineer from the Google Cloud Platform team. After the two worked together, he suggested she apply to the Women Developers Academy, an intensive program that helps women in tech develop their public speaking skills and confidence.
Annalisa was accepted to the Women Developers Academy a few months later. “For two months, they taught us how to contribute to the community via technical public speaking and writing blog posts, as well as how to prepare technical video content,” she says. After graduating, Annalisa was inspired to do more for the community of women in tech around her. “Joining the program motivated me to share my experience and inspire like-minded young women who want to pursue careers in tech,” she says. “I worked and lived in different countries between Europe and the U.S. and women are still a minority group in tech, especially in non-European countries.”
In December 2021, Annalisa became a Women Techmakers Ambassador, joining a global group of leaders around the world passionate about impacting their communities and building a world where all women can thrive in tech. Additionally, her first experience using Google Cloud Platform sparked her interest in new areas of technology and led her to a new role as machine learning engineering manager for PricewaterhouseCoopers. “I have always worked closely with engineers in high-performance computing environments, on the boundary of science and tech,” she says. “Now, I’m excited to move to machine learning engineering, spending most of my time on the cloud.”
Since becoming an Ambassador and getting a new job, Annalisa has kept quite busy. As an Ambassador, she’s been able to share her knowledge about Google Cloud, and is planning learning sessions on MLOps and TensorFlow. “I am meeting really inspiring people from all over the world,” she says. This role has also allowed her to mentor other women who are interested in getting into the tech industry, as well as participate in training sessions to help grow her own skills.
In June — almost a year after she first participated in the Women Developers Academy — she took center stage to give a talk about the advantages of using fully managed Google Cloud services at the DataLift conference in Berlin. “The talk attracted many people, and the room was full,” she says. “I was talking about the challenges of a small team in machine learning operations, which is one of the hottest topics in data science right now.”