Dev Channel Update for ChromeOS

The Dev channel is being updated to 105.0.5195.5 (Platform version: 14989.11.0) for most ChromeOS devices. This build contains a number of bug fixes and security updates.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Matt Nelson,
Google ChromeOS

Advancing transparency for buyers and publishers

Our advertising partners often ask me, “how can I better understand where my dollars are being spent?” This question isn’t new. But as buying and selling digital ads has become more complex, tracking where the money goes has become more difficult. On average 15% of advertiser spend is unattributable, according to some industry estimates.

One of my biggest concerns about this trend is its impact on marketer confidence in digital advertising. How can we all provide greater visibility into the investments of agencies and advertisers to properly inform future media spend? While we can’t speak for the many other companies in this space, our platforms do not take hidden fees. Working with others in the industry, we’re committed to investing in solutions that bring greater trust to programmatic buying and advance a more transparent ecosystem.

Introducing Confirming Gross Revenue

Today we’re announcing Confirming Gross Revenue, a new solution that gives buyers and publishers a privacy-safe way to verify that no hidden fees are taken from digital advertising transactions when using Google Ad Manager.

The publisher can use the new Revenue Verification Report to see the aggregate gross revenue received from a specific buyer. Then the buyer and the publisher can verify that the media cost from the buyer’s reporting matches the gross revenue the publisher received. If the numbers match, the buyer can confirm that their full media spend reached the publisher and no hidden fees were taken.
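The reconciliation described above is simple arithmetic on two aggregate figures. A minimal sketch, with hypothetical names (this is not part of any Google Ad Manager API) and a small tolerance for currency rounding as assumptions:

```python
# Hypothetical sketch of the verification step: compare the buyer's reported
# aggregate media cost with the publisher's aggregate gross revenue.
# Names and tolerance are illustrative, not the actual report schema.

def verify_no_hidden_fees(buyer_media_cost: float,
                          publisher_gross_revenue: float,
                          tolerance: float = 0.01) -> bool:
    """True when the two aggregate figures match within the tolerance."""
    return abs(buyer_media_cost - publisher_gross_revenue) <= tolerance

# If the buyer's reporting shows $125,000.00 spent and the publisher's
# Revenue Verification Report shows $125,000.00 received, the figures
# reconcile and no hidden fees were taken.
print(verify_no_hidden_fees(125_000.00, 125_000.00))  # True
```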

Illustration shows process for buyers and publishers to verify no hidden fees with Confirming Gross Revenue

As we build out this feature, Display & Video 360 is on board as an early tester. We’re also collaborating with other demand-side platforms, sell-side platforms, publishers and agencies, who will test the feature and provide feedback to improve it. As we onboard more partners, we’ve already started to gather some of this feedback.


"OMG prides ourselves in being industry leading advocates for full supply chain transparency. We believe this feature will be a great first step toward confirming that there are no hidden fees in programmatic buying, and having a seat at the table gives us the best opportunity to affect positive change for advertisers."

- Philip Pollock, Chief Operating Officer, Omnicom Media Group Australia


"Transparency and trust go hand in hand and giving us additional access and insight into media costs is a step in the right direction. We look forward to being early adopters of the solution and partnering with Google to provide feedback on how to make improvements.”

- Eric Hochberger, CEO, Mediavine


"Greater transparency in the digital advertising supply chain through solutions like Confirming Gross Revenue is sorely needed. That’s why we’ve made it a priority to invest in creating industry standards like ads.txt, sellers.json, DemandChain Object and buyers.json to help everyone raise the bar on trust in programmatic buying. We look forward to working with Google on this privacy-forward solution and potentially incorporating these concepts into IAB Tech Lab’s standards portfolio."

- Anthony Katsur, CEO, IAB Tech Lab


Transparency without compromising privacy

As we continue to invest in solutions to bring more transparency into media buying, we’re also protecting people’s privacy and the contractual confidentiality of our partners. Transparency and privacy do not need to be at odds, which is why Confirming Gross Revenue only uses the data needed to confirm no hidden fees have been taken. To reduce the risk of user identification, the feature relies on aggregate gross revenue amounts, rather than combining granular log-level data.

Implementing industry standards

This solution builds on years of work to increase transparency in programmatic advertising, including steps we’ve taken to simplify our platforms and explain our own fee structure. In recent years, we've also participated in industry transparency standards across our buy-side and sell-side businesses, like ads.txt / app-ads.txt, sellers.json and SupplyChain Object. For example, we recently brought SupplyChain Object data into Ads Data Hub to help marketers using Display & Video 360 see the steps their impressions took before arriving on a publisher’s site.

Together, these initiatives give partners greater visibility into digital advertising. This can help inform buying decisions, improve bid transparency and strengthen fraud detection. Still, we recognize that there is more work to do.

A continued commitment

Confirming Gross Revenue is one part of our efforts to address concerns over lack of transparency that we have heard from publishers, agencies, advertisers and regulators. Over the next few months, we’ll continue to work with the industry on shaping this new solution and, more broadly, initiatives to instill more confidence in online advertising. Bringing greater transparency to advertisers, agencies and publishers is core to our approach. We welcome participation from others who want to work together to advance an ad-supported internet that works for everyone.


Three new Maps updates to help plan your next adventure

Who’s got those summertime feelings? If the warmer months have you feeling extra inspired — and excited — to get outside and explore with friends, Google Maps can help you transform the way you coordinate plans and stay connected this summer and beyond. Whether you’re checking out top landmarks in a new city, planning to hop on your bike, or hanging out with friends around town, these updates have you covered.

Experience global landmarks in a whole new way

The summer travel season is in full swing, and people are turning to Google Maps to plan their trips and find helpful information about places they plan to visit — like what time a place is open and how crowded it is. To help you with the trip-planning process, we’re bringing photorealistic aerial views of nearly 100 of the world’s most popular landmarks in cities like Barcelona, London, New York, San Francisco and Tokyo right to Google Maps. This is the first step toward launching immersive view — an experience that pairs AI with billions of high-definition Street View, satellite and aerial images.

Say you’re planning a trip to New York. With this update, you can get a sense of what the Empire State Building is like up close so you can decide whether or not you want to add it to your trip itinerary. To see aerial views wherever they’re available, search for a landmark in Google Maps and head to the Photos section.

GIF of aerial landmark views on Google Maps

See photorealistic aerial views of iconic landmarks, right from Google Maps

Get ready for your ride with new cycling route information

More people are hopping on their bikes! Over the past few months, cycling has increased by more than 40% worldwide – which is no surprise given that the warmer weather and high gas prices have people opting for more sustainable transportation choices. Google Maps has provided cycling directions for over 12 years thanks to AI paired with data from cities, trusted cartographic partners and feedback from the Google Maps community. With our new cycling route information, you will soon be able to easily compare bike routes and see even more granular details (when this data is available) to prepare for the ride ahead.

Just get cycling directions to any destination. In addition to seeing the elevation along your route, you’ll also know if you’ll encounter heavy car traffic, stairs or steep hills. You’ll also be able to get a highly detailed breakdown of the route itself so you can know at a glance what type of road you’ll be biking on – like a major road vs. a local street. Gone are the days of unknowingly pedaling up a strenuous hill or on a route with more car traffic than you’re comfortable with.

Be prepared for your ride with more detailed cycling route information

Stay connected and safer on the go

More social outings mean more time juggling meetups with friends and family. With new location sharing notifications, you can see when a loved one has arrived at or left a place so you can more easily coordinate schedules and have peace of mind. Say you’re headed to a concert with a group of friends. If they’ve already chosen to share their location with you, you can set a notification for the concert venue’s address so you can see when they’ve arrived and meet up quickly. You could also set a notification to see when they’ve left the venue — just in case you get split up. One way I plan to use this feature this summer is on my solo hiking trip: asking my sister to set a notification for me so she can see when I’ve returned to the trailhead parking lot gives me peace of mind that someone will know I made it back safely.

We’ve built this feature with privacy at the forefront. Notifications can only be set for someone who has already chosen to share their location with you. The person who’s shared their location with you will receive multiple reminders to let them know — including both a push notification in the Maps app and an email, along with recurring monthly emails. As always, you’re in control: you can choose to stop sharing your location or block someone from setting notifications altogether.

GIF of Location Sharing notifications UI

Stay coordinated and have peace of mind with new location sharing notifications

GIF of Location Sharing notifications controls

Frequent reminders and strong controls: block notifications or stop sharing altogether

Aerial views of landmarks and location sharing notifications are rolling out now globally on Google Maps on Android and iOS, with cycling route information launching in the coming weeks in the hundreds of cities where cycling directions are available.

ML-Enhanced Code Completion Improves Developer Productivity

The increasing complexity of code poses a key challenge to productivity in software engineering. Code completion has been an essential tool that has helped mitigate this complexity in integrated development environments (IDEs). Conventionally, code completion suggestions are implemented with rule-based semantic engines (SEs), which typically have access to the full repository and understand its semantic structure. Recent research has demonstrated that large language models (e.g., Codex and PaLM) enable longer and more complex code suggestions, and as a result, useful products have emerged (e.g., Copilot). However, the question of how code completion powered by machine learning (ML) impacts developer productivity, beyond perceived productivity and accepted suggestions, remains open.

Today we describe how we combined ML and SE to develop a novel Transformer-based hybrid semantic ML code completion, now available to internal Google developers. We discuss how ML and SEs can be combined by (1) re-ranking SE single token suggestions using ML, (2) applying single and multi-line completions using ML and checking for correctness with the SE, or (3) using single and multi-line continuation by ML of single token semantic suggestions. We compare the hybrid semantic ML code completion of 10k+ Googlers (over three months across eight programming languages) to a control group and see a 6% reduction in coding iteration time (time between builds and tests) and a 7% reduction in context switches (i.e., leaving the IDE) when exposed to single-line ML completion. These results demonstrate that the combination of ML and SEs can improve developer productivity. Currently, 3% of new code (measured in characters) is now generated from accepting ML completion suggestions.

Transformers for Completion
A common approach to code completion is to train transformer models, which use a self-attention mechanism for language understanding, to enable code understanding and completion predictions. We treat code similarly to language, represented with sub-word tokens and a SentencePiece vocabulary, and use encoder-decoder transformer models running on TPUs to make completion predictions. The input is the code surrounding the cursor (~1000-2000 tokens) and the output is a set of suggestions to complete the current or multiple lines. Sequences are generated with a beam search (or tree exploration) on the decoder.

During training on Google’s monorepo, we mask out the remainder of a line and some follow-up lines, to mimic code that is being actively developed. We train a single model on eight languages (C++, Java, Python, Go, Typescript, Proto, Kotlin, and Dart) and observe improved or equal performance across all languages, removing the need for dedicated models. Moreover, we find that a model size of ~0.5B parameters gives a good tradeoff for high prediction accuracy with low latency and resource cost. The model strongly benefits from the quality of the monorepo, which is enforced by guidelines and reviews. For multi-line suggestions, we iteratively apply the single-line model with learned thresholds for deciding whether to start predicting completions for the following line.
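The masking scheme can be illustrated with a toy sketch. This is purely illustrative: the real pipeline operates on sub-word tokens over Google’s monorepo, and the function and parameter names below are assumptions.

```python
# Illustrative sketch of training-time masking: given a file and a cursor
# position, the remainder of the line (plus some follow-up lines) becomes
# the target the model must predict from the surrounding context.

def make_training_example(code: str, line_idx: int, col: int,
                          follow_up_lines: int = 1):
    lines = code.splitlines()
    # Everything before the cursor is visible context.
    prefix = "\n".join(lines[:line_idx]) + ("\n" if line_idx else "") \
             + lines[line_idx][:col]
    # The masked-out span: rest of the current line + follow-up lines.
    target_end = min(len(lines), line_idx + 1 + follow_up_lines)
    target = lines[line_idx][col:] + "".join(
        "\n" + l for l in lines[line_idx + 1:target_end])
    # Code after the masked span (may also be fed to the model).
    suffix = "\n".join(lines[target_end:])
    return prefix, target, suffix

code = "def add(a, b):\n    return a + b\n\nprint(add(1, 2))"
prefix, target, suffix = make_training_example(code, line_idx=1, col=4)
# The model sees `prefix` (and possibly `suffix`) and learns to emit `target`.
```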

Encoder-decoder transformer models are used to predict the remainder of the line or lines of code.

Re-rank Single Token Suggestions with ML
While a user is typing in the IDE, code completions are interactively requested from the ML model and the SE simultaneously in the backend. The SE typically only predicts a single token. The ML models we use predict multiple tokens until the end of the line, but we only consider the first token to match predictions from the SE. We identify the top three ML suggestions that are also contained in the SE suggestions and boost their rank to the top. The re-ranked results are then shown as suggestions for the user in the IDE.
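The boosting step above can be sketched in a few lines. Names and the whitespace-tokenized suggestion format are illustrative assumptions, not Google’s internal API:

```python
# Minimal sketch of re-ranking: the SE returns ranked single tokens, the ML
# model returns full-line predictions. Up to three ML suggestions whose first
# token also appears in the SE list are boosted to the top of the dropdown.

def rerank(se_tokens, ml_suggestions, boost_top_k=3):
    # First token of each ML suggestion, in ML rank order.
    ml_first = [s.split()[0] for s in ml_suggestions if s.split()]
    boosted = []
    for tok in ml_first:
        if tok in se_tokens and tok not in boosted:
            boosted.append(tok)
        if len(boosted) == boost_top_k:
            break
    # Remaining SE suggestions keep their original relative order.
    rest = [t for t in se_tokens if t not in boosted]
    return boosted + rest

se = ["append", "add", "apply", "assign"]
ml = ["add ( item )", "append ( item )"]
print(rerank(se, ml))  # ['add', 'append', 'apply', 'assign']
```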

In practice, our SEs are running in the cloud, providing language services (e.g., semantic completion, diagnostics, etc.) with which developers are familiar, and so we collocated the SEs in the same locations as the TPUs performing ML inference. The SEs are based on an internal library that offers compiler-like features with low latencies. Due to the design setup, where requests are done in parallel and ML is typically faster to serve (~40 ms median), we do not add any latency to completions. We observe a significant quality improvement in real usage. For 28% of accepted completions, the rank of the completion is higher due to boosting, and in 0.4% of cases it is worse. Additionally, we find that users type >10% fewer characters before accepting a completion suggestion.

Check Single / Multi-line ML Completions for Semantic Correctness
At inference time, ML models are typically unaware of code outside of their input window, and code seen during training might miss recent additions needed for completions in actively changing repositories. This leads to a common drawback of ML-powered code completion whereby the model may suggest code that looks correct, but doesn’t compile. Based on internal user experience research, this issue can lead to the erosion of user trust over time while reducing productivity gains.

We use SEs to perform fast semantic correctness checks within a given latency budget (<100ms for end-to-end completion) and use cached abstract syntax trees to enable a “full” structural understanding. Typical semantic checks include reference resolution (i.e., does this object exist), method invocation checks (e.g., confirming the method was called with a correct number of parameters), and assignability checks (to confirm the type is as expected).
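The SE checks above are compiler-grade; as a rough stand-in, the sketch below filters out ML suggestions that do not even parse when spliced into the surrounding code. This is only an analogy using Python’s stdlib `ast` module; Google’s checks go further (reference resolution, arity checks, assignability) and run against cached ASTs within the latency budget.

```python
# Illustrative stand-in for the semantic-correctness filter: reject any ML
# suggestion that produces a syntax error when inserted at the cursor.
import ast

def filter_suggestions(prefix, suggestions, suffix=""):
    ok = []
    for s in suggestions:
        try:
            # Splice the suggestion into the surrounding code and parse it.
            ast.parse(prefix + s + suffix)
            ok.append(s)
        except SyntaxError:
            pass  # Uncompilable suggestion: never shown to the user.
    return ok

prefix = "x = [1, 2, 3]\ny = "
print(filter_suggestions(prefix, ["sum(x)", "sum(x", "len(x)"]))
# ['sum(x)', 'len(x)']
```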

For example, for the coding language Go, ~8% of suggestions contain compilation errors before semantic checks. However, the application of semantic checks filtered out 80% of uncompilable suggestions. The acceptance rate for single-line completions improved by 1.9x over the first six weeks of incorporating the feature, presumably due to increased user trust. As a comparison, for languages where we did not add semantic checking, we only saw a 1.3x increase in acceptance.

Language servers with access to source code and the ML backend are collocated on the cloud. They both perform semantic checking of ML completion suggestions.

Results
With 10k+ Google-internal developers using the completion setup in their IDE, we measured a user acceptance rate of 25-34%. We determined that the transformer-based hybrid semantic ML code completion completes >3% of code, while reducing the coding iteration time for Googlers by 6% (at a 90% confidence level). The size of the shift corresponds to typical effects observed for transformational features (e.g., key framework) that typically affect only a subpopulation, whereas ML has the potential to generalize for most major languages and engineers.

Key metrics for single-line code completion, measured in production for 10k+ Google-internal developers using it in their daily development across eight languages:

  • Fraction of all code added by ML: 2.6%
  • Reduction in coding iteration duration: 6%
  • Reduction in number of context switches: 7%
  • Acceptance rate (for suggestions visible for >750ms): 25%
  • Average characters per accept: 21

Key metrics for multi-line code completion, measured in production for 5k+ Google-internal developers using it in their daily development across eight languages:

  • Fraction of all code added by ML (with >1 line in suggestion): 0.6%
  • Average characters per accept: 73
  • Acceptance rate (for suggestions visible for >750ms): 34%

Providing Long Completions while Exploring APIs
We also tightly integrated the semantic completion with full line completion. When the dropdown with semantic single token completions appears, we display inline the single-line completions returned from the ML model. The latter represent a continuation of the item that is the focus of the dropdown. For example, if a user looks at possible methods of an API, the inline full line completions show the full method invocation also containing all parameters of the invocation.
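Selecting which ML completion to display inline can be sketched simply: show the first full-line prediction that continues the dropdown item currently in focus. Function and variable names are illustrative assumptions:

```python
# Sketch of pairing the focused dropdown item with an inline ML continuation:
# display the first ML full-line completion that starts with the semantic
# token the user has highlighted.

def inline_continuation(focused_token, ml_full_lines):
    for line in ml_full_lines:
        if line.startswith(focused_token):
            return line
    return None  # No matching continuation: show only the dropdown item.

ml = ["connect(host, port, timeout=30)", "close()"]
print(inline_continuation("connect", ml))  # connect(host, port, timeout=30)
```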

Integrated full line completions by ML continuing the semantic dropdown completion that is in focus.
Suggestions of multiple line completions by ML.

Conclusion and Future Work
We demonstrate how the combination of rule-based semantic engines and large language models can be used to significantly improve developer productivity with better code completion. As a next step, we want to utilize SEs further, by providing extra information to ML models at inference time. One example can be for long predictions to go back and forth between the ML and the SE, where the SE iteratively checks correctness and offers all possible continuations to the ML model. When adding new features powered by ML, we want to be mindful to go beyond just “smart” results, but ensure a positive impact on productivity.

Acknowledgements
This research is the outcome of a two-year collaboration between Google Core and Google Research, Brain Team. Special thanks to Marc Rasi, Yurun Shen, Vlad Pchelin, Charles Sutton, Varun Godbole, Jacob Austin, Danny Tarlow, Benjamin Lee, Satish Chandra, Ksenia Korovina, Stanislav Pyatykh, Cristopher Claeys, Petros Maniatis, Evgeny Gryaznov, Pavel Sychev, Chris Gorgolewski, Kristof Molnar, Alberto Elizondo, Ambar Murillo, Dominik Schulz, David Tattersall, Rishabh Singh, Manzil Zaheer, Ted Ying, Juanjo Carin, Alexander Froemmgen and Marcus Revaj for their contributions.


Source: Google AI Blog


Meet The YouTube Music Foundry Class of 2022

Today, YouTube Music is glad to introduce the Foundry Class of 2022, with 30 new artists joining our global artist development program.



Since its beginnings – in 2015 as a workshop series and in 2017 as an incubator dedicated to independent music – it’s been Foundry’s mission to help artists get to “the next level,” with the resources to navigate a rapidly evolving music business. Over the years, we’ve seen that this progress takes no single path. To thrive, independent artists must constantly take on new challenges, shift course, and reinforce their sense of self-belief.



The journeys of independent artists may be winding, but they don’t have to be lonely. Turning an audience into a real community of fans is an enormous feat, requiring courage and allies as much as talent. Foundry artists receive seed funding invested into the development of their content and dedicated partner support from YouTube as they grow on their own terms.



“The advice I received as part of Foundry helped me build a super strong foundation on which to build the next 5-10 years of my visual output and career,” said UK pioneer Shygirl, after graduating from Foundry’s 2021 class. When artists have stability, culture everywhere wins. “Foundry helped corridos tumbados to be more known on a global level and helped me take the Mexican flag to the most important stages all over the world,” said 2020 alum Natanael Cano.




To date, Foundry’s annual artist development classes and ongoing release support campaigns have supported more than 250 independent artists, with alumni including Arlo Parks, beabadoobee, Dave, Dua Lipa, Clairo, ENNY, Eladio Carrion, girl in red, Gunna, Japanese Breakfast, Kenny Beats, Natanael Cano, Omar Apollo, Rema, Rina Sawayama, ROSALÍA, Saba, Snail Mail, Tems, Tokischa, Tenille Arts and more.



This year, we saw a 4X increase in Foundry applications, and the Class of 2022 is the program’s largest to date, with 30 artists representing 15 countries. Working with devoted teams and across genres, each artist in the class uniquely represents the spirit of independence. For YouTube’s global music team, it’s an honor to champion their work and potential, every step of the way.



Meet the Foundry Class of 2022 below and check out a playlist of songs from the class here.

Posted by Naomi Zeichner, Artist Partnerships Lead




Use the Cloud Search Query API to set Suggest Filters to enhance Cloud Search results

What’s changing 

We’re introducing Suggest Filters for Cloud Search. Using the Cloud Search Query API, admins can specify a filter condition that will be pre-applied to keyword suggestions as a user types a query. This will surface more relevant suggestions, helping reduce the time users spend searching. 



Who’s impacted 

Admins, developers and end users 



Why it’s important 

With Suggest Filters, admins can tailor suggestions to the use cases of a given search application, reducing irrelevant suggestions. For example, admins can add a filter condition, such as a country, so that suggestions are drawn only from documents that match the filter. 


Getting started 

  • Admins: See our developer documentation here and here for more information about creating a suggestion filter.
  • End users: There is no end user action required; you will automatically see more relevant suggestions as you type a query. 
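The pre-applied filter condition can be pictured as part of a suggest request body. The field names below follow the publicly documented shape of the Cloud Search Query API’s suggest method, but the data source ID, operator name and value are illustrative assumptions for the country example above; check the API reference before relying on them.

```python
# Hedged sketch of a Cloud Search suggest request with a pre-applied filter.
# The source name, operatorName and value are placeholders, not real IDs.

suggest_request = {
    "query": "quarterly rep",  # partial text the user has typed so far
    "dataSourceRestrictions": [{
        "source": {"name": "datasources/example-datasource-id"},
        "filterOptions": [{
            "filter": {
                "valueFilter": {
                    "operatorName": "country",        # schema operator
                    "value": {"stringValue": "US"},   # filter value
                }
            }
        }],
    }],
}
# This body would be POSTed to the Query API's suggest endpoint, e.g. via
# the Google API Python client's cloudsearch service.
```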

Rollout pace 

  • This feature is available now for all users. 

Availability 

  • Available to Google Cloud Search Customers 

Resources 

Working Location enabled by default

Quick summary

Earlier this year, we announced an improved user interface for sharing your working location in Google Calendar. Starting today, you will be able to set your working location without having to first enable this feature in your Calendar settings. 


Getting started 

  • Admins: Working Location is ON by default and can be disabled at the domain or OU level. Current settings for your domain will remain the same unless updated in the Admin console. Visit the Help Center to learn more about turning working location on or off for your organization
  • End users: Unless disabled by your admin, this feature will be ON by default. There will be no major visual changes to your calendar unless you set your working location. Visit the Help Center to learn more about setting your working location

Rollout pace 

  • Rapid Release domains: Gradual rollout (up to 15 days for feature visibility) starting on July 26, 2022 
  • Scheduled Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on August 10, 2022 

Availability 

  • Available to Google Workspace Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Standard, Education Plus, the Teaching and Learning Upgrade, and Nonprofits, as well as legacy G Suite Business customers
  • Not available to Google Workspace Essentials, Business Starter, Enterprise Essentials, and Frontline, as well as legacy G Suite Basic customers 
  • Not available to users with personal Google Accounts 

Resources 

Migrate unmanaged accounts to your domain using new “UserInvitation” API functionality

What’s changing 

We’re introducing new API functionality which allows you to automate the process of finding conflicting accounts and inviting them to join your organization. 


Who’s impacted 

Admins, end users, and developers 


Why you’d use it 

When employees create a Google account using one of your organization’s domains to access Google services, this is known as an unmanaged account. Unmanaged accounts are not ideal for managing users and keeping their work data secure. 


Additionally, should an admin try to create a managed account with the same name, this conflict will prevent the managed account from being created. Using the UserInvitation API functionality, you can send a request asking the user to convert their personal account to a Google Workspace account. 


While the same action can be manually performed with the Transfer Tool, the API allows conflicting accounts to be identified and remediated programmatically, using logic that best suits your needs.
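The programmatic flow above maps onto the Cloud Identity API’s UserInvitation resources. The sketch below only builds the endpoint URLs to show the shape of the flow; the customer ID and email are placeholders, and the exact endpoints and versions should be checked against the current API reference before use.

```python
# Hedged sketch of the UserInvitation flow: list unmanaged (conflicting)
# accounts, then send an invitation for a specific one. Placeholders only.

BASE = "https://cloudidentity.googleapis.com/v1beta1"

def list_invitations_url(customer_id: str) -> str:
    # Lists UserInvitation resources: unmanaged accounts on your domains
    # that can be invited to join the organization.
    return f"{BASE}/customers/{customer_id}/userinvitations"

def send_invitation_url(customer_id: str, email: str) -> str:
    # Sends the invitation asking the user to convert their personal
    # account into a managed Google Workspace account.
    return f"{BASE}/customers/{customer_id}/userinvitations/{email}:send"

print(send_invitation_url("C0123abcd", "jane@example.com"))
```

In practice these calls would be made with an authorized client (e.g. the Google API Python client with admin credentials), with your own remediation logic deciding which invitations to send.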


Getting started 

  • Admins and Developers: 
  • End users: 
    • If you accept the request from your admin to transfer your account, your admin will be granted access to your data and the ability to manage your account. 
    • If you don’t accept the invitation, you will have to rename your account. Your administrator can create a new, managed account for you. 

Rollout pace 


Availability 

  • Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Standard, Enterprise, Cloud Identity Premium and Cloud Identity Free customers 
  • Not available to Google Workspace Essentials, Enterprise Essentials, Education Standard, Enterprise Plus, Education Fundamentals, Education Plus, Frontline, and Nonprofits, as well as legacy G Suite Basic and Business customers 

Resources 

Machine Learning Communities: Q2 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic about and grateful for all the activities of the global network of ML communities. Here are the highlights!

TensorFlow/Keras

TFUG Agadir hosted the #MLReady phase as part of #30DaysOfML. #MLReady aimed to equip attendees with the knowledge required to understand the different types of problems that deep learning can solve, and helped attendees prepare for the TensorFlow Certificate.

TFUG Taipei hosted basic Python and TensorFlow courses named From Python to TensorFlow. The aim of these events is to help everyone learn the basics of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared weekly via a YouTube playlist.

TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk included volume rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. At the event, ML GDE Aritra Roy Gosthipaty (India) gave a talk focused on breaking down the concepts of the academic paper, NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, into simpler, more digestible snippets.

TFUG Turkey, GDG Edirne, and GDG Mersin organized TensorFlow Bootcamp 22, where ML GDE M. Yusuf Sarıgöz (Turkey) spoke on TensorFlow Ecosystem: Get most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow, how variables, tensors, and operations interact with each other, and how auxiliary packages are built upon this skeleton.

TFUG Mumbai hosted its June Meetup, which gathered 110 attendees. ML GDE Sayak Paul (India) and TFUG mentor Darshan Despande shared their knowledge through sessions, and ML workshops for beginners let participants build machine learning models without writing a single line of code.

ML GDE Hugo Zanini (Brazil) wrote Realtime SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.

ML GDE Gad Benram (Portugal) wrote Can a couple TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.

ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android mobile devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), explaining in five steps how to verify that a model is focusing on the right tokens to classify text. He also wrote about how to fine-tune a HuggingFace model for text classification using Tensorflow 2.0.

ML GDE Karthic Rao (India) released a new series ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.

ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, and distilling ViT models through attention. (code | pretrained model | tutorial)

ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They provided tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)

JAX/Flax

ML GDE Aakash Nain (India) gave a special talk, Introduction to JAX, for ML GDEs, TFUG organizers, and ML community network organizers. He covered the fundamentals of JAX/Flax so that more people will try out JAX in the near future.

ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.

ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the difference between JAX and Tensorflow.

ML GDE Martin Andrews (Singapore) shared a video, Practical JAX : Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)

ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempted to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.

Kaggle

ML GDEs’ Kaggle notebooks were announced as winners of the Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul & Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations and Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May; and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.

ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. This book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. And in the online event, Luca and the co-author talked about how to compete on Kaggle.

ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up, covering the bias-variance tradeoff, validation sets, and cross-validation approaches. In the second post of the series, he showed more techniques using analogies and case studies.

TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for the interested participants. More than 60% of members were active during the whole program and many of them shared their completion certificates.

TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook which contains the implementation of the research paper: UNETR - Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.

TFX

ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and GitHub Actions, using two promising approaches: FastAPI (for REST) and TF Serving (for gRPC).

ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.

ML GDE Jun Jiang (China) had a talk introducing the concept of MLOps, the production-level end-to-end solutions of Google & TensorFlow, and how to use TFX to build the search and recommendation system & scientific research platform for large-scale machine learning training.

ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally, how to move a TFX pipeline from a local environment to Vertex AI, and provided code samples to adapt and get started with TFX.

TFUG São Paulo (Brazil) had a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put the models into production. They talked about the data structures in TFX and implementation of the first pipeline in TFX: ingesting and validating data.

TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX covering why, what and how you can effectively leverage MLOps best practices to scale ML efforts and had a look at how TFX can be used for designing and deploying ML pipelines.

Cloud AI

ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on the GCP official blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in an MLOps system.

He also shared the GitHub repository Continuous Adaptation with VertexAI's AutoML and Pipeline. It contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.

TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps and frameworks. Googler Andrew Ferlitsch had a talk about Google Cloud AI’s definition of the 8 stages of MLOps for enterprise scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.

Research

ML GDE Qinghua Duan (China) released a video introducing Google’s latest 540-billion-parameter model. He introduced the PaLM paper and described the basic training process and its innovations.

ML GDE Rumei Li (China) wrote blog posts reviewing the papers behind DeepMind's Flamingo and Google's PaLM.