
Google Dev Library Letters: 18th Edition

Posted by the Dev Library Team

In this newsletter, we’re highlighting the best projects developed with Google technologies that have been contributed to the Google Dev Library platform. We hope this will spark some inspiration for your next project!


Contributions of the month


Moving image showing SSImagePicker in different modes

[Android] SSImagePicker by Simform

See how to use a lightweight, easy-to-use image picker library with features such as cropping, compression, rotation, and support for video and Live Photos.



Moving image showing overview of coroutines

[Kotlin] Mastering Coroutines in Kotlin by Reyhaneh Ezatpanah

Dive into a comprehensive overview of coroutines including tips and best practices, along with a detailed explanation of the different types of coroutines available in Kotlin and how to use them effectively.

Read more on DevLibrary
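The core ideas behind coroutines (suspension, async/await, structured concurrency) are not unique to Kotlin. As a rough, language-agnostic illustration of the same pattern the article covers, here is a minimal sketch using Python's asyncio; the function names are hypothetical and not from the article:

```python
import asyncio

async def fetch(name, delay):
    # Simulate a suspending call, analogous to Kotlin's delay()
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Launch two coroutines concurrently and await both results,
    # similar to async/await inside a Kotlin coroutineScope
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

Both tasks suspend concurrently, so the total wait is roughly one delay rather than two.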


Flow Chart demonstrating Image to Image stable diffusion in Flax

[Machine Learning] Image2Image with Stable Diffusion in Flax by Bachir Chihani

Learn the uses of the Diffusion method, a technique used to improve the stability and performance of image-to-image translation models.

Read more on DevLibrary
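For context on the technique the article builds on: diffusion models gradually add Gaussian noise to an image over many steps and learn to reverse the process; image-to-image generation starts the reverse process from a partially noised input image instead of pure noise. A minimal NumPy sketch of the forward (noising) process, independent of the article's Flax implementation and assuming a standard DDPM-style linear beta schedule:

```python
import numpy as np

def forward_diffusion(x0, t, alphas_cumprod, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process."""
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

# Linear beta schedule over 1000 steps, as in the original DDPM setup
betas = np.linspace(1e-4, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = np.ones((4, 4))  # toy "image"
xt = forward_diffusion(x0, 999, alphas_cumprod, rng)

# Early steps keep almost all signal; by the final step it is mostly noise
print(alphas_cumprod[0], alphas_cumprod[999])
```

The strength parameter in image-to-image pipelines effectively picks the step t at which this noised image enters the reverse (denoising) loop.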


Android


Jetpack Compose state, deconstructed by Yves Kalume

Learn how state management in Jetpack Compose is implemented, how it can be used to build a responsive and dynamic UI, and how it compares to other solutions in Android development.


Dynamic environment switching on Android by Ashwini Kumar

Find out how to switch between different environments (such as development, staging, and production) in an Android app.


Migration to Jetpack Compose for a legacy application by Abhishek Saxena

Migrate an existing legacy Android application to Jetpack Compose, a modern UI toolkit for building native Android apps.



Machine Learning


Simple diffusion in TensorFlow by Bachir Chihani

Understand the benefits of using TensorFlow for image processing, including the ability to easily parallelize computations and utilize GPUs for faster processing.


Deep dive into stable diffusion by Bachir Chihani

Look into the Flax implementation of the Stable Diffusion model to better understand how it works.


Create-tf-app by Radi Cho

See the tool that allows you to quickly create a TensorFlow application by generating the necessary code and file structure.

 

Angular


NGX-Valdemort by Cédric Exbrayat

Dive into a set of pre-built validation rules and error messages for commonly encountered use cases, making it easy to quickly implement robust form validation for your application.


Passing configuration dynamically from one module to another using ModuleWithProviders by Madhusuthanan B

Learn how to pass configuration data dynamically between modules in an Angular application.


Flutter


Mastering Dart & Flutter DevTools by Ashita Prasad

Look at the first part of a series aimed at helping developers understand how to use the tools effectively to build applications with Dart and Flutter.


Server-driven UI in Flutter - an experiment on remote widgets by Akshat Vinaybhai Patel

Explore the insights, code snippets, and results of the experiment to better understand the concept of Server-Driven UI and its potential in Flutter app development.


Flutter Photo Manager by Alex Li

Explore an easy-to-use API for accessing the device's photo library that performs operations like retrieving images, videos, and albums, as well as deleting, creating, and updating files in the photo library.


Firebase


How to authenticate to Firebase using email and password in Jetpack Compose? by Alex Mamo

Here’s a simple solution for implementing Firebase Authentication with email and password, using a clean architecture with Jetpack Compose on Android.


Google Cloud


Google Firestore Data Source plugin for Grafana by Prasanna Kumar

Learn how this plugin allows users to query, aggregate, and visualize data from Firestore, making it a powerful tool for monitoring and analyzing real-time data in a variety of applications. The repository provides the plugin's source code and documentation on how to install and use it with Grafana.


Cluster cloner by Joshua Fox

See how this project aims to replicate clusters across different cloud environments and examine these varying infrastructure models.


Getting to know Cloud Firestore by Mustapha Adekunle

Learn the basic features and benefits of Cloud Firestore, and how this document database is a scalable and versatile NoSQL cloud database.


Google’s Mandar Chaphalkar has submitted Data Governance with Dataplex

Discover how Dataplex can be used to transform data to meet specific business requirements, and how it can integrate with other Google Cloud services like BigQuery for efficient data storage and analysis.

ML Olympiad 2023: Globally Distributed ML Competitions by Google ML Community

Posted by Hee Jung, DevRel Community Manager

What is the ML Olympiad?

The ML Olympiad is a series of associated Kaggle Community Competitions hosted by ML GDEs, TFUGs, and third-party ML communities, supported by Google Developers. The ML Developer Programs team and the communities successfully ran the first round of the campaign in 2022 and are now launching the second round. The goal of this campaign is to provide ML training opportunities for developers by leveraging Kaggle’s features.

ML Olympiad Community Competitions

17 ML Olympiad community competitions are currently open. Visit the ML Olympiad page to participate.

Into the Space
  • Predicting which spaceship passengers were transported by the anomaly using records recovered from the spaceship’s damaged computer system.
  • Host: MD Shahriyar Al Mustakim Mitul / TFUG Dhaka

Water Quality Prediction
  • Estimating the quality of water.
  • Hosts: Usha Rengaraju, Vijayabharathi Karuppasamy (TFUG Chennai), Samuel T (TFUG Mysuru)

Breast Cancer Diagnosis
  • Predicting medical diagnosis [breast cancer].
  • Host: Ankit Kumar Verma / TFUG Prayagraj

Book Recommendations
  • Providing personalized recommendations to users based on their reading history and preferences using various machine learning algorithms.
  • Hosts: Anushka Raj, Yugandhar Surya / TFUG Hajipur

Argania Tree Deforestation Detection
  • Using Sentinel-2 satellite imagery to detect and map areas of deforestation in the Argania region.
  • Host: Taha Bouhsine / TFUG Agadir

Multilingual Spell Correction
  • Reconstructing noisy sentences in European languages: English, French, German, Bulgarian, and Turkish.
  • Host: Radostin Cholakov (ML GDE)

CO2 Emissions Forecasting
  • Forecasting CO2 emissions based on deforestation in Côte d'Ivoire.
  • Hosts: Armel Yara, Kimana Misago, Jordan Erifried / TFUG Abidjan

Ensure Healthy Lives (in local language)
  • Using ML techniques to help achieve good health and well-being for all.
  • Hosts: Vinicius Fernandes Caridá (ML GDE), Pedro Gengo, Alex Fernandes Mansano / TFUG São Paulo

Predictive Maintenance
  • Predicting future engine failures.
  • Host: Daniel Pereda / TFUG Santiago

Firetrucks Are Red And Cars Are Blue
  • Creating a model that can accurately predict the correct class for each image, without overfitting.
  • Host: Prasoon Kottarathil / TFUG Thrissur

Dialect Recognition (in Arabic)
  • Recognizing dialects in order to improve user experience in AI applications.
  • Hosts: Ruqiya Bin Safi (ML GDE), Eyad Sibai, Hussain Alfayez / Saudi TFUG & Applied ML/AI group

Sentiment Analysis Of JUMIA Tunisia (in local language)
  • Using JUMIA customer reviews to determine the sentiment of content from text data.
  • Host: Boulbaba BEN AMMAR / TFUG Sfax

Kolkata Housing Prediction
  • Predicting Kolkata housing prices; results can be used to address related social and economic issues.
  • Host: Rishiraj Acharya / TFUG Kolkata

Can You Guess The Beer Style?
  • Classifying beer into 17 distinct styles based on key descriptors.
  • Host: Marvik

Detect ChatGPT Answers
  • Classifying ChatGPT answers vs. real human answers for a variety of questions.
  • Host: Elyes Manai (ML GDE) / IEEE ESSTHS + GDSC ISETSO + PyData Tunisia

MLAct Pose Detection
  • Raising awareness of basic yoga poses and encouraging community members to practice the basics of computer vision.
  • Host: Imen Masmoudi / MLAct ML Community

Hausa Sentiment Analysis 2.0 (in local language)
  • Classifying the sentiment of sentences in the Hausa language.
  • Hosts: Nuruddeen Sambo, Dattijo Murtala Makama / TFUG Bauchi

Navigating ML Olympiad

You can search “ML Olympiad” on the Kaggle Community Competitions page to see them all. And for further info, look for #MLOlympiad on social media.

Google Developers supports the hosts of each competition. Browse through the available competitions and participate in those that interest you!

Machine Learning Communities: Q4 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the last quarter of 2022. We are enthusiastic about and grateful for all the activities by the global network of ML communities. Here are the highlights!


ML at DevFest 2022

A group of ML Developers attending DevFest 2022

A large number of members of ML GDE, TFUG, and 3P ML communities participated in DevFests 2022 worldwide, covering various ML topics with Google products. Machine Learning with Jax: Zero to Hero (DevFest Conakry) by ML GDE Yannick Serge Obam Akou (Cameroon) and Easy ML on Google Cloud (DevFest Med) by ML GDE Nathaly Alarcon Torrico (Bolivia) were among the great sessions.

ML Community Summit 2022

A group of ML Developers attending ML Community Summit

ML Community Summit 2022 was hosted on Oct 22-23, 2022, in Bangkok, Thailand. The twenty-five most active community members (ML GDEs or TFUG organizers) were invited and shared their past activities and thoughts on Google’s ML products. A video sketch from the ML Developer Programs team and a blog post by ML GDE Margaret Maynard-Reid (United States) help us revisit the moments.

TensorFlow

MAXIM in TensorFlow by ML GDE Sayak Paul (India) shows his implementation of the MAXIM family of models in TensorFlow.

Diagram of gMLP block

gMLP: What it is and how to use it in practice with TensorFlow and Keras? by ML GDE Radostin Cholakov (Bulgaria) demonstrates state-of-the-art results on NLP and computer vision tasks using far fewer trainable parameters than corresponding Transformer models. He also wrote Differentiable discrete sampling in TensorFlow.
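The distinctive piece of gMLP is its Spatial Gating Unit, which mixes information across tokens with a plain linear projection instead of attention. A rough NumPy sketch of the idea (shapes and near-zero/ones initialization follow the paper's description; this is not code from the article):

```python
import numpy as np

def spatial_gating_unit(x, W_s, b_s):
    """gMLP Spatial Gating Unit: split channels in half, spatially
    (token-wise) project one half, and use it to gate the other half."""
    u, v = np.split(x, 2, axis=-1)  # each (n_tokens, d/2)
    v = W_s @ v + b_s               # spatial projection mixes tokens
    return u * v                    # elementwise gating

n, d = 8, 16                        # tokens, channels
rng = np.random.default_rng(0)
x = rng.standard_normal((n, d))

W_s = np.zeros((n, n))              # spatial weights initialized near zero
b_s = np.ones((n, 1))               # bias initialized to 1
out = spatial_gating_unit(x, W_s, b_s)
print(out.shape)                    # (8, 8)
```

With the weights at zero and the bias at one, the gate is initially the identity on half the channels, which the gMLP paper suggests for training stability.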

Building Computer Vision Model using TensorFlow: Part 2 by TFUG Pune is for developers who want to dive deep into training an object detection model on Google Colab, inspecting the TF Lite model, and deploying the model in an Android application. ML GDE Nitin Tiwari (India) covered detailed aspects of end-to-end training and deployment of object detection models.

Advent of Code 2022 in pure TensorFlow (days 1-5) by ML GDE Paolo Galeone (Italy) solves the Advent of Code (AoC) puzzles using only TensorFlow. The articles describe the solutions to puzzles 1-5 in pure TensorFlow.

tf.keras.metrics / tf.keras.optimizers by TFUG Taipei helped people learn the TF libraries. They shared basic concepts and how to use them using Colab.

Screenshot of TensorFlow Lite on Android Project Practical Course
A hands-on course on TensorFlow Lite projects on Android by ML GDE Xiaoxing Wang (China) is a book introducing the application of TensorFlow Lite in Android development, focusing on three typical ML applications.

Build tensorflow-lite-select-tf-ops.aar and tensorflow-lite.aar files with Colab by ML GDE George Soloupis (Greece) shows how you can shrink the final size of your Android application’s .apk by building tensorflow-lite-select-tf-ops.aar and tensorflow-lite.aar files without the need for Docker or a personal PC environment.

TensorFlow Lite and MediaPipe Application by ML GDE XuHua Hu (China) explains how to use TFLite to deploy an ML model into an application on devices. He shared his experience developing a motion-sensing game with MediaPipe, and how to solve problems you may commonly encounter.

Train and Deploy TensorFlow models in Go by ML GDE Paolo Galeone (Italy) delivered the basics of the TensorFlow Go bindings, their limitations, and how the tfgo library simplifies their usage.

Keras

Diagram of feature maps concatenated together and flattened

Complete Guide on Deep Learning Architectures, Chapter 1 on ConvNets by ML GDE Merve Noyan (France) brings you into the theory of ConvNets and shows how they work with Keras.

Hazy Image Restoration Using Keras by ML GDE Soumik Rakshit (India) provides an introduction to building an image restoration model using TensorFlow, Keras, and Weights & Biases. He also shared the article Improving Generative Images with Instructions: Prompt-to-Prompt Image Editing with Cross Attention Control.

Mixed precision in Keras based Stable Diffusion
Let’s Generate Images with Keras based Stable Diffusion by ML GDE Chansung Park (Korea) shows how to generate images from a given text and what stable diffusion is. He also talked about Keras-based stable diffusion, its basic building blocks, and the advantages of using it.

A Deep Dive into Transformers with TensorFlow and Keras: Part 1, Part 2, Part 3 by ML GDE Aritra Roy Gosthipaty (India) covered the journey from the intuition of attention to the formulation of multi-head self-attention. He also contributed the TensorFlow port of GroupViT to the Hugging Face 🤗 Transformers library.
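As a quick reference for the attention intuition that series starts from, scaled dot-product attention — softmax(QKᵀ/√d_k)V — can be sketched in a few lines of NumPy (illustrative only; the series itself uses TensorFlow and Keras):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights, weights @ V                     # weighted mix of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 queries, d_k = 8
K = rng.standard_normal((6, 8))  # 6 keys
V = rng.standard_normal((6, 8))  # 6 values
weights, out = scaled_dot_product_attention(Q, K, V)
print(out.shape)                 # (4, 8)
```

Multi-head attention simply runs several of these in parallel on learned projections of Q, K, and V, then concatenates the results.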

TFX

Digits + TFX banner

How startups can benefit from TFX by ML GDE Hannes Hapke (United States) explains how the San Francisco-based FinTech startup Digits has benefited from applying TFX early, how TFX helps Digits grow, and how other startups can benefit from TFX too.

Usha Rengaraju (India) shared TensorFlow Extended (TFX) Tutorials (Part 1, Part 2, Part 3) and the following TF projects: TensorFlow Decision Forests Tutorial and FT Transformer TensorFlow Implementation.

Hyperparameter Tuning and ML Pipeline by ML GDE Chansung Park (Korea) explained hyperparameter tuning and why it is important, introduced KerasTuner and its basic usage, showed how to visualize tuning results with TensorBoard, and covered integration within an ML pipeline with TFX.
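Libraries such as KerasTuner automate searches over a hyperparameter space. As a library-free illustration of the underlying idea, here is a random-search sketch in plain Python; the scoring function is made up to stand in for real model training, and all names are hypothetical:

```python
import random

def train_and_score(lr, units):
    # Stand-in for training a model and returning a validation score.
    # This toy surface peaks near lr=0.01, units=64 (purely illustrative).
    return -abs(lr - 0.01) * 100 - abs(units - 64) / 64

def random_search(n_trials, seed=0):
    """Sample hyperparameters at random and keep the best-scoring trial."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {
            "lr": 10 ** rng.uniform(-4, -1),        # log-uniform learning rate
            "units": rng.choice([16, 32, 64, 128]),  # discrete layer width
        }
        score = train_and_score(**params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

best_score, best_params = random_search(50)
print(best_params)
```

Sampling learning rates log-uniformly is the standard trick: it spreads trials evenly across orders of magnitude rather than clustering them near the top of the range.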

JAX/Flax

JAX High-performance ML Research by TFUG Taipei and ML GDE Jerry Wu (Taiwan) introduced JAX and how to start using it to solve machine learning problems.

[TensorFlow + TPU] GatedTabTransformer[W&B] and its JAX/Flax counterpart GatedTabTransformer-FLAX[W&B] by Usha Rengaraju (India) are tutorial series containing implementations of the GatedTabTransformer paper in both TensorFlow (TPU) and Flax.

Putting NeRF on a diet: Semantically consistent Few-Shot View Synthesis Implementation
JAX implementation of Diet NeRF by ML GDE Wan Hong Lau (Singapore) implements the paper “Putting NeRF on a Diet (DietNeRF)” in JAX/Flax. He also implemented a JAX-and-Flax training pipeline with the ResNet model in his Kaggle notebook, 🐳HappyWhale🔥Flax/JAX⚡TPU&GPU - ResNet Baseline.

Introduction to JAX with Flax (slides) by ML GDE Phillip Lippe (Netherlands) reviewed everything from the basic requirements of a DL framework to what JAX has to offer. He then focused on the powerful function-oriented view JAX offers and how Flax allows you to use it when training neural networks.

Screen grab of ML GDE David Cardozo and Cristian Garcia during a live coding session of a review of new features, specifically Shared Arrays, in the recent release of JAX
JAX Streams: Exploring JAX 0.4 by ML GDE David Cardozo (Canada) and Cristian Garcia (Colombia) reviewed new features (specifically Shared Arrays) in the recent release of JAX and demonstrated live coding.

[LiveCoding] Train ResNet/MNIST with JAX/Flax by ML GDE Qinghua Duan (China) demonstrated how to train ResNet using JAX by writing code live online.

Kaggle

Low-light Image Enhancement using MirNetv2 by ML GDE Soumik Rakshit (India) demonstrated the task of low-light image enhancement.

The Heart Disease Prediction and Diabetes Prediction competitions hosted by TFUG Chandigarh aimed to familiarize participants with ML problems and finding solutions using classification techniques.

TensorFlow User Group Bangalore Sentiment Analysis Kaggle Competition 1
TFUG Bangalore Kaggle Competition - Sentiment Analysis hosted by TFUG Bangalore challenged participants to find the best sentiment analysis algorithm. Participants were given a set of training data and asked to submit an ML/DL algorithm that could predict the sentiment of a text. The group also hosted Kaggle Challenge Finale + Vertex AI Session to support the participants and guide them in learning how to use Vertex AI in a workflow.

Cloud AI

Better Hardware Provisioning for ML Experiments on GCP by ML GDE Sayak Paul (India) discussed the pain points of provisioning hardware (especially for ML experiments) and how to provision it with code using Vertex AI Workbench instances and Terraform.

Jayesh Sharma, Platform Engineer, ZenML; MLOps workshop with TensorFlow and Vertex AI, November 12, 2022 | TensorFlow User Group Chennai
MLOps workshop with TensorFlow and Vertex AI by TFUG Chennai targeted beginners and intermediate-level practitioners to give hands-on experience with the E2E MLOps pipeline on GCP. In the workshop, they shared the various stages of an ML pipeline, the top tools to build a solution, and how to design a workflow using an open-source framework like ZenML.

10 Predictions on the Future of Cloud Computing by 2025: Insights from Google Next Conference by ML GDE Victor Dibia (United States) includes a recap of his notes reflecting on the top 10 cloud technology predictions discussed at the Google Cloud Next 2022 keynote.

Workflow of Google Virtual Career Center
O uso do Vertex AI Matching Engine no Virtual Career Center (VCC) do Google Cloud (Using Vertex AI Matching Engine in Google Cloud’s Virtual Career Center) by ML GDE Rubens Zimbres (Brazil) covers the use of Vertex AI Matching Engine as part of the Google Cloud Virtual Career Center solution.

More practical time-series model with BQML by ML GDE JeongMin Kwon (Korea) introduced BQML and time-series modeling and showed some practical applications with BQML ARIMA+ and Python implementations.

Vertex AI Forecast - Demand Forecasting with AutoML by ML GDE Rio Kurihara (Japan) presented a time series forecast overview, time series fusion transformers, and the benefits and desired features of AutoML.

Research & Ecosystem

AI in Healthcare by ML GDE Sara EL-ATEIF (Morocco) introduced AI applications in healthcare and the challenges facing AI adoption in the health system.

Women in AI APAC finished their journey at the ML Paper Reading Club. Over 10 weeks, participants gained knowledge of outstanding machine learning research, learned the latest techniques, and came to understand the notion of “ML research” among ML engineers. See their session here.

A Natural Language Understanding Model LaMDA for Dialogue Applications by ML GDE Jerry Wu (Taiwan) introduced the natural language understanding (NLU) concept and shared the operation mode of LaMDA, model fine-tuning, and measurement indicators.

Python library for Arabic NLP preprocessing (Ruqia) by ML GDE Ruqiya Bin (Saudi Arabia) is her first Python library, serving Arabic NLP.

Screengrab of ML GDEs Margaret Maynard-Reid and Akash Nain during Chat with ML GDE Akash
Chat with ML GDE Vikram & Chat with ML GDE Aakash by ML GDE Margaret Maynard-Reid (United States) shared the stories of ML GDEs, including how they became GDEs and how they proceeded with their ML projects.

Anatomy of Capstone ML Projects 🫀 by ML GDE Sayak Paul (India) discussed working on capstone ML projects that will stay with you throughout your career. He covered various topics ranging from problem selection to tightening up the technical gotchas to presentation. And in Improving as an ML Practitioner he shared what he has learned from experience working on several aspects of the field.

Screen grab of statement of objectives in MLOps Development Environment by ML GDE Vinicius Caridá
MLOps Development Environment by ML GDE Vinicius Caridá (Brazil) aims to build a full development environment where you can write your own pipelines connecting MLflow, Airflow, GCP and Streamlit, and build amazing MLOps pipelines to practice your skills.

Transcending Scaling Laws with 0.1% Extra Compute by ML GDE Grigory Sapunov (UK) reviewed a recent Google article on UL2R. His post Discovering faster matrix multiplication algorithms with reinforcement learning explained how AlphaTensor works and why it is important.

Back in Person - Prompting, Instructions and the Future of Large Language Models, hosted by TFUG Singapore with ML GDEs Sam Witteveen and Martin Andrews (Singapore), covered recent advances in the field of large language models (LLMs).

ML for Production: The art of MLOps in TensorFlow Ecosystem, held with GDG Casablanca by TFUG Agadir, discussed the motivation behind using MLOps and how it can help organizations automate many pain points in the ML production process. It also covered the tools used in the TensorFlow ecosystem.

Google Dev Library Letter: 17th Edition

Posted by the Dev Library Team

We are highlighting the best projects developed with Google technologies that have been shared on the Google Dev Library platform. We hope this will spark some inspiration for your next project.


Android - Content of the Month


Transformers by Daichi Furiya

See the Android transformation library providing a variety of image transformations for Coil, Glide, Picasso, and Fresco.


Camposer by Lucas Yuji Yoshimine

Learn about the camera library in Jetpack Compose that supports taking photos, recording videos, flash modes, zoom ratio, and more.

Read more on DevLibrary


ChatGPT Android by Jaewoong Eum

Integrate ChatGPT on Android with Stream Chat SDK for Compose.

Read more on DevLibrary





More Voices = More Bazel

Posted by Lyra Levin, Technical Writer, Software Engineering

Takeaways from the BazelCon DEI lunch panel


In front of a standing-room-only lunch panel, Minu Puranik asks us, “If there is one thing you want to change [about Bazel’s DEI culture], what would it be and why?”

We’d spent the last hour on three main themes: community culture, fostering trust, and growing our next generation of leaders. Moderated by Minu, the Strategy and Operations leader for DeveloperX & DevRel at Google, our panel brought together a slate of brilliant people from underrepresented genders and populations of color to give a platform to our experiences and ideas. Together with representatives and allies in the community, we explored methods of building inclusivity in our open source community and sought a better understanding of the institutional and systemic barriers to increasing diversity.

Culture defines how we act, which informs who feels welcome to contribute. Studies show that diverse contributor backgrounds yield more and better results, so how do we create a culture where everyone feels safe to share, ask questions, and contribute? Helen Altshuler, co-founder and CEO of EngFlow, relayed her experience: “Having people that can have your back is important to get past the initial push to submit something and feeling like it’s ok. You don’t need to respond to everything in one go. Last year, Cynthia Coah and I gave a talk on how to make contributions to the Bazel community. Best practices which we can apply as a Bazel community: better beginners’ documentation, classifying GitHub issues as "good first issue", and having Slack channels where code owners can play a more active role.” Diving further, we discussed the need to make sure new contributors get positive, actionable feedback to reward them with context and resources, and encourage them to take the risk of contributing to the codebase.

This encouragement of new contributors feeds directly into the next generation of technical influencers and leaders. Eva Howe, co-founder and Legal Counsel for Aspect, addressed the current lack of diversity in the community pipeline. “I’d like to see more trainings like the Bazel Community Day. Trainings serve 2 purposes:

1. People can blend in, start talking to someone in the background and form connections.
2. When someone goes through a bootcamp or CS course, Bazel is not mentioned. Nobody cares that the plumbing works until it doesn’t work. We need to educate people and give them that avenue and a good experience to move forward. I struggle with the emotional side of it - I count myself out before I get somewhere. It needs to be a safe space, which it hasn’t been in the past.”

In addition to industry trainings, the audience and panel brought up bootcamps and university classes as rich sources to find and promote diversity, though cautioned that it takes active, ongoing effort to maintain an environment that diverse candidates are willing to stay in. There are fewer opportunities to take risks as part of an underrepresented group, and the feeling that you have to succeed for everyone who looks like you creates a high-pressure environment that is worse for learning outcomes.

To bypass this pipeline problem, we can recruit promising candidates and sponsor them through getting the necessary experience on the job. Lyra Levin, Bazel’s internal technical writer at Google, spoke to this process of incentivizing and recognizing contributions outside the codebase, as a way to both encourage necessary glue work, and pull people into tech from parallel careers more hospitable to underrepresented candidates.

She said, “If someone gives you an introduction to another person, recognize that. Knowing a system of people is work. Knowing where to find answers is work. Saying I’m going to be available and responding to emails is work. If you see a conversation where someone is getting unhelpful pushback, jump in and moderate it. Reward those who contribute by creating a space that can be collaborative and supportive.”

Sophia Vargas, Program Manager in Google’s OSPO (Open Source Programs Office), chimed in, “Create ways to recognize non-code contributions. One example is a markdown file describing other forms of contribution, especially in cases that do not generate activity attached to a name on GitHub.”

An audience member agreed, “A positive experience for the first few PRs is very critical for building trust in the community.”

And indeed, open source is all about building trust. So how do we go about building trust? What should we do differently? Radhika Advani, Bazel’s product manager at Google, suggests that the key is to “have some amazing allies”. “Be kind and engage with empathy,” she continued, “Take your chances - there are lots of good people out there. You have to come from a place of vulnerability.”

Sophia added some ideas for how to be an “amazing ally” and sponsor the careers of those around you. “Create safe spaces to have these conversations. Not everyone is bold enough to speak up or to ask for support, as raising issues in a public forum can be intimidating. Make yourself accessible, or provide anonymous forms for suggestions or feedback — both can serve as opportunities to educate yourself and to increase awareness of diverging opinions.” An audience member added, “If you recognize that an action is alienating to a member of your group, even just acknowledging their experience or saying something to the room can be very powerful to create a sense of safety and belonging.” Another said, “If you’re in a leadership position, when you are forthright about the limits of your knowledge, it gives people the freedom to not know everything.”

                                    So to Minu’s question, what should we do to improve Bazel’s culture?

                                    Helen: Create a governance group on Slack to ensure posts are complying with the community code of conduct guidelines. Review how this is managed for other OSS communities.

                                    Sophia: Institutionalize mentorship; have someone else review what you’ve done and give you the confidence to push a change. Nurture people. We need to connect new and established members of the community.

                                    Lyra: Recruit people in parallel career paths with higher representation. Give them sponsorship to transition to tech.

                                    Radhika: Be more inclusive. All the jargon can get overwhelming, so let’s consider how we can make things simpler, including with non-technical metaphors.

                                    Eva: Consider what each of us can do to make the experience for people onboarding better.

                                    There are more ways to be a Bazel contributor than raising PRs. Being courageous, vulnerable and open contributes to the culture that creates the code. Maintainers — practice empathy and remember the human on the other side of the screen. Be a coach and a mentor, knowing that you are opening the door for more people to build the product you love, with you. Developers — be brave and see the opportunities to accept sponsorship into the space. Bazel is for everyone.

                                    Welcome.

                                    Introducing the Earth Engine Google Developer Experts (GDEs)

                                    Posted by Tyler Erickson, Developer Advocate, Google Earth Engine

                                    One of the greatest things about Earth Engine is the vibrant community of developers who openly share their knowledge about the platform and how it can be used to address real-world sustainability issues. To recognize some of these exceptional community members, in 2022 we launched the initial cohort of Earth Engine Google Developer Experts (GDEs). You can view the current list of Earth Engine GDEs on the GDE Directory page.

                                    Moving 3D image of earth rotating showing locations of members belonging to the initial cohort of Earth Engine GDEs
                                    The initial cohort of Earth Engine Google Developer Experts.
                                    What makes an Earth Engine expert? Earth Engine GDEs are selected based on their expertise in the Earth Engine product (of course), but also for their knowledge sharing. They share their knowledge in many ways, including answering questions from other developers, writing tutorials and blogs, teaching in settings spanning from workshops to university classes, organizing meetups and conference sessions that allow others to share their work, building extensions to the platform, and so much more!

                                    To learn more about the Google Developer Experts program and the Earth Engine GDEs, go to https://developers.google.com/community/experts.

                                    Now that it is 2023, we are re-opening the application process for additional Earth Engine GDEs. If you’re interested in being considered, you can find information about the process in the GDE Program Application guide.

                                    GDE Digital badge logo - Earth

                                    Improving Video Voice Dubbing Through Deep Learning

                                    Posted by Paul McCartney, Software Engineer, Vivek Kwatra, Research Scientist, Yu Zhang, Research Scientist, Brian Colonna, Software Engineer, and Mor Miller, Software Engineer

                                    People increasingly look to video as their preferred way to be better informed, to explore their interests, and to be entertained. And yet a video’s spoken language is often a barrier to understanding. For example, a high percentage of YouTube videos are in English but less than 20% of the world's population speaks English as their first or second language. Voice dubbing is increasingly being used to transform video into other languages, by translating and replacing a video’s original spoken dialogue. This is effective in eliminating the language barrier and is also a better accessibility option with regard to both literacy and sightedness in comparison to subtitles.

                                    In today’s post, we share our research for increasing voice dubbing quality using deep learning, providing a viewing experience closer to that of a video produced directly for the target language. Specifically, we describe our work with technologies for cross-lingual voice transfer and lip reanimation, which keeps the voice similar to the original speaker and adjusts the speaker’s lip movements in the video to better match the audio generated in the target language. Both capabilities were developed using TensorFlow, which provides a scalable platform for multimodal machine learning. We share videos produced using our research prototype, which are demonstrably less distracting and - hopefully - more enjoyable for viewers.

                                    Cross-Lingual Voice Transfer

                                    Voice casting is the process of finding a suitable voice to represent each person on screen. Maintaining the audience’s suspension of disbelief by having believable voices for speakers is important in producing a quality dub that supports rather than distracts from the video. We achieve this through cross-lingual voice transfer, where we create synthetic voices in the target language that sound like the original speaker voices. For example, the video below uses an English dubbed voice that was created from the speaker’s original Spanish voice.

                                    Original “Coding TensorFlow” video clip in Spanish.

                                    The “Coding TensorFlow” video clip dubbed from Spanish to English, using cross-lingual voice transfer and lip reanimation.

                                    Inspired by few-shot learning, we first pre-trained a multilingual TTS model based on our cross-language voice transfer approach. This approach uses an attention-based sequence-to-sequence model to generate a series of log-mel spectrogram frames from a multilingual input text sequence with a variational autoencoder-style residual encoder. Subsequently, we fine-tune the model parameters by retraining the decoder and attention modules with a fixed mixing ratio of the adaptation data and original multilingual data as illustrated in Figure 1.

                                    fine tuning voice imitation architecture
                                    Figure 1: Voice transfer architecture

                                    Note that voice transfer and lip reanimation are only done when the content owner and speakers give consent for these techniques on their content.

                                    Lip Reanimation

                                    With conventionally dubbed videos, you hear the translated / dubbed voices while seeing the original speakers speaking the original dialogue in the source language. The lip movements that you see in the video generally do not match the newly dubbed words that you hear, making the combined audio/video look unnatural. This can distract viewers from engaging fully with the content. In fact, people often even intentionally look away from the speaker’s mouth while watching dubbed videos as a means to avoid seeing this discrepancy.

                                    To help with audience engagement, producers of higher quality dubbed videos may put more effort into carefully tailoring the dialogue and voice performance to partially match the new speech with the existing lip motion in video. But this is extremely time consuming and expensive, making it cost prohibitive for many content producers. Furthermore, it requires changes that may slightly degrade the voice performance and translation accuracy.

                                    To provide the same lip synchronization benefit, but without these problems, we developed a lip reanimation architecture for correcting the video to match the dubbed voice. That is, we adjust speaker lip movements in the video to make the lips move in alignment with the new dubbed dialogue. This makes it appear as though the video was shot with people originally speaking the translated / dubbed dialogue. This approach can be applied when permitted by the content owner and speakers.

                                    For example, the following clip shows a video that was dubbed in the conventional way (without lip reanimation):

                                    "Machine Learning Foundations” video clip dubbed from English to Spanish, with voice transfer, but without lip reanimation

                                    Notice how the speaker’s mouth movements don’t seem to move naturally with the voice. The video below shows the same video with lip reanimation, resulting in lip motion that appears more natural with the translated / dubbed dialogue:

                                    The dubbed “Machine Learning Foundations” video clip, with both voice transfer and lip reanimation

                                    For lip reanimation, we train a personalized multistage model that learns to map audio to lip shapes and facial appearance of the speaker, as shown in Figure 2. Using original videos of the speaker for training, we isolate and represent the faces in a normalized space that decouples 3D geometry, head pose, texture, and lighting, as described in this paper. Taking this approach allows our first stage to focus on synthesizing lip-synced 3D geometry and texture compatible with the dubbed audio, without worrying about pose and lighting. Our second stage employs a conditional GAN-based approach to blend these synthesized textures with the original video to generate faces with consistent pose and lighting. This stage is trained adversarially using multiple discriminators to simultaneously preserve visual quality, temporal smoothness and lip-sync consistency. Finally, we refine the output using a custom super-resolution network to generate a photorealistic lip-reanimated video. The comparison videos shown above can also be viewed here.


                                    Lip-Reanimation Pipeline showing inference blocks in blue, training blocks in red.
                                    Figure 2: Lip-Reanimation Pipeline: inference blocks in blue, training blocks in red.

                                    Aligning with our AI Principles

                                    The techniques described here fall into the broader category of synthetic media generation, which has rightfully attracted scrutiny due to its potential for abuse. Photorealistically manipulating videos could be misused to produce fake or misleading information that can create downstream societal harms, and researchers should be aware of these risks. Our use case of video dubbing, however, highlights one potential socially beneficial outcome of these technologies. Our new research in voice dubbing could help make educational lectures, video-blogs, public discourse, and other formats more widely accessible across a global audience. This is also only applied when consent has been given by the content owners and speakers.

                                    During our research, we followed our guiding AI Principles for developing and deploying this technology in a responsible manner. First, we work with the creators to ensure that any dubbed content is produced with their consent, and any generated media is identifiable as such. Second, we are actively working on tools and techniques for attributing ownership of original and modified content using provenance and digital watermarking techniques. Finally, our central goal is fidelity to the source-language video. The techniques discussed herein serve that purpose only -- namely, to amplify the potential social benefit to the user, while preserving the content’s original nature, style and creator intent. We are continuing to determine how best to uphold and implement data privacy standards and safeguards before broader deployment of our research.

                                    The Opportunity Ahead

                                    We strongly believe that dubbing is a creative process. With these techniques, we strive to make a broader range of content available and enjoyable in a variety of other languages.

                                    We hope that our research inspires the development of new tools that democratize content in a responsible way. To demonstrate its potential, today we are releasing dubbed content for two online educational series, AI for Anyone and Machine Learning Foundations with TensorFlow on the Google Developers LATAM channel.

                                    We have been actively working on expanding our scope to more languages and larger demographics of speakers — we have previously detailed this work, along with a broader discussion, in our research papers on voice transfer and lip reanimation.

                                    Interview with Vanessa Aristizabal, contributor to Google’s Dev Library

                                    Posted by the Dev Library Team

                                    We are back with another edition of the Dev Library Contributor Spotlights - a blog series highlighting developers that are supporting the thriving development ecosystem by contributing their resources and tools to Google Dev Library.

                                    We met with Vanessa Aristizabal, one of the many talented developers contributing to Dev Library, to discuss her journey of learning the Angular framework and what drives her to share insights regarding budding technologies with the developer community.

                                    What is one thing that surprised you when you started using Google technology?

                                    Talking about my journey, Angular was my first JavaScript framework. So, I was really surprised when I started using it because with only a few lines of code, I could create a good application.

                                    What kind of challenges did you face when you were learning how to use Angular? How did you manage to overcome them?

                                    I would like to share that it is perhaps a common practice for developers, when working on some requirement for a project, to look it up on Google or Stack Overflow, and if we find a solution, to copy and paste the code without internalizing that knowledge. The same happened to me. Initially, I implemented bad practices because I did not know Angular completely, which led to poor performance in my applications.

                                    I overcame this challenge by checking the documentation properly and doing in-depth research on Google to learn good practices of Angular and implement them effectively in my applications. This approach helped me to solve all the performance-related problems.

                                    How and why did you start sharing your knowledge by writing blog posts?

                                    It was really difficult to learn Angular because, in the beginning, I did not have a solid basis for the web. So, I first had to work on that. And during the process of learning Angular, I always had to research something or other because sometimes I couldn’t find what I needed in the documentation.

                                    I had to refer to blogs, search on Google, or go through books to solve my requirements. And then I started taking some notes. From there on, I decided to start writing so I could help other developers who might be facing the same set of challenges. The idea was to help people find something useful and add value to their learning process through my articles.
                                    Google Dev Library Logo is in the top left with Vanessa's headshot cropped into a circle. Vanessa is wearing a dark grey t-shirt and smiling; a quote card reads, 'I decided to start writing so I could help other developers who might be facing the same set of challenges. The idea was to help people find something useful and add value to their learning process through my articles' Vanessa Aristizabal, Dev Library Contributor
                                    Find out more content contributed and authored by Vanessa Aristizabal (@vanessamarely) and discover more unique tools and resources on the Google Dev Library website!

                                    Dev Library Letters: 16th Issue

                                    Posted by the Dev Library Team

                                    Welcome to the 16th Issue! Our monthly newsletter curates some of the best projects developed with Google tech that have been submitted to the Google Dev Library platform.  We hope this brings you the inspiration you need for your next project!

                                        Content of the month

                                    How to exclude stylesheets from the bundle and lazy load them in Angular

                                    by Dharmen Shah

                                    Learn how to load stylesheets only when needed without making them part of an application bundle.


                                        Check out content from Google Cloud, Angular, Android, ML, & Flutter


                                    Android

                                    • Check out this Android library by Maximilian Keppeler that offers dialogs and views for various use cases, built with Jetpack Compose for use in Compose projects.

                                    • Learn how to create and publish your own Android Library with JitPack by Matteo Macri.

                                    Angular

                                    • Dive into composition and inheritance in Angular by Dany Paredes, featuring an example focused on forms that highlights why you should be careful using inheritance in components.

                                    • Read “Angular dependency injection understood” by Jordi Riera to gain a broader perspective of how it works, why it is important, and how to leverage it inside Angular.

                                    Cloud

                                    • Learn how Iris automatically assigns labels to Google Cloud resources for manageability and easier billing reporting in this post by Joshua Fox.

                                    • Check out Glen Yu’s hack for those in regions without access to native replication in “Pulumi DIY GCS replication” - some of these solutions will require understanding of the fundamental building blocks that make up the Google Cloud Platform.

                                    Flutter

                                    • Learn how to make Flutter projects scalable by using a modularization approach in R. Rifa Fauzi Komara’s article, “Flutter: mastering modularization”.

                                    • Check out Let’s Draw by Festus Olusegun, a simple app made with Flutter that enables users to draw art with freehand, line, and shape tools.

                                    • Explore how to use Cubits from the Bloc library to manage states and get the benefits and drawbacks of this approach in Verena Zaiser’s article.

                                    Machine Learning

                                    • Get an overview on Convolutional Neural Networks (CNNs, ConvNets), why they matter, and how to use them in Henry Ndubuaku’s tutorial, “Applying CNNs to images for computer vision and text for NLP”.

                                    • See why you should add the deep learning framework JAX to your stack and get an intro to writing and training your own neural networks with Flax in this introduction tutorial by Phillip Lippe.

                                    Want to read more? 
                                    Check out the latest projects and community-authored content by visiting Google Dev Library.
                                    Submit your projects to showcase your work and inspire developers!

                                    GDE community highlight: Lars Knudsen

                                    Posted by Monika Janota, Community Manager

                                    Lars Knudsen is a Google Developer Expert; we talked to him about how a $10 device can make computers more accessible for people with disabilities.
                                     

                                    Monika: What inspired you to become a developer? What’s your current professional focus?

                                    Lars: I got my MSc in engineering, but in fact my interest in tech started much earlier. When I was a kid in the 80s, my father owned a computing company working with graphic design. Sometimes, especially during the summer holidays, he would take me to work with him. At times, some of his employees would keep an eye on me. There was this really smart guy who once said to me, “Lars, I need to get some work done, but here's a C manual, and there’s a computer over there. Here’s how you start a C compiler. If you have any questions, come and ask me.” I started to write short texts that were translated into something the computer could understand. It seemed magical to me. I was 11 years old when I started and around seventh grade, I was able to create small applications for my classmates or to be used at school. That’s how it started.

                                    Over the years, I’ve worked for many companies, including Nokia, Maersk, and Openwave. At the beginning, like in many other professions, because you know a little, you feel like you can do everything, but with time you learn each company has a certain way of doing things.

                                    After a few years of working for a medical company, I started my own business in 1999. I worked as a freelance contractor and, thanks to that, had the chance to get to know multiple organizations quickly. After completing the first five contracts, I found out that every company thinks they’ve found the perfect setup, but all of them are completely different. At that time, I was also exposed to a lot of different technologies, operating systems etc. Around my early twenties, my mindset changed. At the beginning, I was strictly focused on one technology and wanted to learn all about it. With time, I started to think about combining technologies as a way of improving our lives. I have a particular interest in narrowing the gap between what we call the A and the B team in the world. I try to transfer as much knowledge as possible to regions where people don’t have the luxury of owning a computer or studying at university free of charge.

                                    I continue to work as a contractor for external partners but, whenever possible, I try to choose projects that have some kind of positive impact on the environment or society. I’m currently working on embedded software for a hearing-aid company called Oticon. Software-wise, I’ve been working on everything from the tiniest microcontrollers to the cloud; a lot of what I do revolves around the web. I’m trying to combine technologies whenever it makes sense.

                                    Monika: Were you involved in developer communities before joining the Google Developer Experts program?

                                    Lars: Yes, I was engaged in meetups and conferences. I first connected with the community while working for Nokia. Around 2010, I met Kenneth Rohde Christiansen, who became a GDE before me. He inspired me to see how web technologies can be useful for aspiring tech professionals in developing countries. Developing and deploying solutions using C++, C# or Java requires some years of experience, but everyone who has access to a computer, browser, and notepad can start developing web-based applications and learn really fast. It’s possible to build a fully functional application with limited resources, and ramp up from nothing. That’s why I call the web a very democratizing technology stack.

                                    But back to the community—after a while I got interested in web standardization and what problems bleeding edge web technologies could solve. I experimented with new capabilities in a browser before release. I was working for Nokia at the time, developing for a Linux-based flagship device, the N9. The browser we built was WebKit based and I got some great experience developing features for a large open source project. In the years after leaving Nokia, I got involved in web conferences and meetups, so it made sense to join the GDE community in 2017.

                                    I really enjoy the community work and everything we’re doing together, especially the pre-pandemic Chrome Developer Summits, where I got to help with booth duty alongside a bunch of awesome Google Engineers and other GDEs.

                                    Monika: What advice would you give to a young developer who’s just starting their professional career and is not sure which path to take?

                                    Lars: I’d say from my own experience—if you can afford it—consider freelancing for a couple of different companies. This way, you’ll be exposed to code in many different forms and stages of development. You’ll get to know a multitude of operating systems and languages, and learn how to resolve problems in many ways. This helped me a lot. I gained experience as a senior developer in my twenties. This approach will help you achieve your professional goals faster.

                                    Besides that, have fun, explore, play with the hardware and software. Consider building something that solves a real problem—maybe for your friends, family, or a local business. Don’t be afraid to jump into something you’ve never done before.

                                    Monika: What does the future hold for web technologies?

                                    Lars: I think that for a couple of years now the web has been fully capable of providing a platform for large field applications, both for the consumer and for business. On the server side of things, web technologies offer a seamless experience, especially for frontend developers who want to build a backend component. It’s easier for them to get started now. I know people who were using both Firebase and Heroku to get the job done. And this trend will grow—web technologies will be enough to build complex solutions of any kind. I believe that the Web Capabilities effort, Project Fugu, really unlocks that potential.

                                    Looking at it from a slightly different point of view, I also think that if we provide full documentation and in-depth articles not only in English but also in other languages (for example, Spanish and Portuguese), we would unlock a lot of potential in Latin America—and other regions, of course. Developers there often don’t know English well enough to fully understand all the relevant articles. We should also give them the opportunity to learn as early as possible, even before they start university, while still in their hometowns. They may use those skills to help local communities and businesses before they leave home and maybe never come back.

                                    Thomas: You came a long way from doing C development on a random computer to hacking on hardware. How did you do that?

                                    Lars: I started taking apart a lot of hardware I had at home. My dad was not always happy when I couldn’t put it back together. With time, I learned how to build some small devices, but it really took off much later, around the time I joined Nokia, where I got my embedded experience. I had the chance to build small screensavers, components for the Series 30 phones. I was really passionate about it and could really think outside the box. They assigned me a task to build a Snake game for those devices. It was a very interesting experience. The main difference between building embedded systems and most other things (including web) is that you leave a small footprint—you don’t have much space or memory to use. While building Snake, the RAM that I had available was less than one-third of the frame buffer (around 120 x 120 pixels). I had to come up with ways to algorithmically rejoin components on screen so they’d look static, as if they were tiles. I learned a lot—that was the move from larger systems to small, embedded solutions.

                                    Thomas: The skill set of a typical frontend developer is very different from the skill set of someone who builds embedded hardware. How would you encourage a frontend developer to look into hardware and to start thinking in binary?

                                    Lars: I think that the first step is to look at some of the Fugu APIs that work in Chrome and Edge, and are built into all the major systems today. That’s all you need at the start.

                                    Another thing is that the toolchains for building embedded solutions have a steep learning curve. If you want to build your own custom hardware, start with Arduino or ESP32—something that is easy to buy and fairly cheap. With the right development environment, you can get your project up and running in no time.

                                    You could also buy a heart rate monitor or a multisensor unit, which are already using Bluetooth GATT services, so you don’t have to build your own hardware or firmware—you can use what’s already there and start experimenting with the Web Bluetooth API to start communicating with it.
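                                    The heart rate monitor experiment Lars describes can be sketched in a few lines of JavaScript. The snippet below is a minimal sketch using the Web Bluetooth API and the Bluetooth SIG's standard Heart Rate service; the connection code is browser-only and must be triggered by a user gesture (such as a button click), while the parsing helper follows the published Heart Rate Measurement characteristic format.

```javascript
// Parse the standard GATT Heart Rate Measurement characteristic (0x2A37):
// bit 0 of the flags byte selects an 8-bit or 16-bit heart rate value.
function parseHeartRate(dataView) {
  const flags = dataView.getUint8(0);
  return (flags & 0x01)
    ? dataView.getUint16(1, /* littleEndian= */ true)
    : dataView.getUint8(1);
}

// Connect to a heart rate monitor and log readings (browser-only;
// must be called from a user gesture such as a button click).
async function watchHeartRate() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['heart_rate'] }],
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('heart_rate');
  const characteristic =
      await service.getCharacteristic('heart_rate_measurement');
  characteristic.addEventListener('characteristicvaluechanged', (event) => {
    console.log(`Heart rate: ${parseHeartRate(event.target.value)} bpm`);
  });
  await characteristic.startNotifications();
}
```

Calling watchHeartRate() from a click handler prompts the user to pick a nearby monitor and then logs each notification as it arrives.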

                                    There are also devices that use a serial protocol—for these, you can use the Web Serial API (also Fugu). Recently I’ve been looking into using the WebHID API, which enables you to talk to all the human interface devices that everyone has access to. I found some old ones in my basement that had not been supported by any operating system for years, but thanks to reverse engineering it took me a few hours to re-enable them.
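                                    As a sketch of the WebHID workflow Lars mentions, the snippet below requests a device and listens for its input reports. The report layout decoded here is purely hypothetical (a real device's layout is defined by its HID report descriptor), and the API calls themselves are browser-only.

```javascript
// Decode a hypothetical input report whose first byte is a button
// bitmask; real layouts come from the device's HID report descriptor.
function decodeButtons(dataView) {
  const mask = dataView.getUint8(0);
  return [0, 1, 2, 3, 4, 5, 6, 7].filter((bit) => mask & (1 << bit));
}

// Ask the user to pick a HID device and log its input reports
// (browser-only; requires a user gesture).
async function watchHidReports() {
  const [device] = await navigator.hid.requestDevice({ filters: [] });
  if (!device) return;
  await device.open();
  device.addEventListener('inputreport', ({ data, reportId }) => {
    console.log(`Report ${reportId}: buttons`, decodeButtons(data));
  });
}
```

An empty filters list shows the user every connected HID device; narrowing it by vendorId and productId is how you would target one specific piece of old hardware.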

                                    There are different approaches depending on what you want to build, but to a web developer I would say, get a solid sensor unit, maybe a Thingy 52 from Nordic Semiconductor; it has a lot of sensors, and you can hook up to your web application with very little effort.

                                    Thomas: Connecting to the device is the first step, but then speaking to it effectively—that’s a whole other thing. How come you did not give up after facing obstacles? What kept you motivated to continue working?

                                    Lars: For me personally the social aspect of solving a problem was the most important. When I started working on my own embedded projects, I had a vision and a desire to build a science lab in a box for developing regions. My wife is from Mexico and I saw some of the schools there; some that are located outside of the big cities are pretty shabby, without access to the materials and equipment that we have in our part of the world.

                                    The passion for building something that can potentially be used to help others—that’s what kept me going. I also really enjoyed the community support. I reached out to some people at Google and all were extremely helpful and patiently answered all of my questions.

                                    Thomas: A lot of people have some sort of hardware at home, but don’t know what to do with it. How do you find inspiration for all your amazing projects, in particular the one under the working name SimpleMouse?

                                    Lars: Well, recently I have in fact been reviving a lot of old hardware, but this particular project (the name has not been set yet, but let’s call it SimpleMouse) grew out of my own experience. I worked with some accessibility solutions earlier and saw how some of them just don’t work anymore; you’d need an old Windows XP machine with certain software installed to run them. You can’t really update those setups, and you can only use them at home because you can’t move them.

                                    Because of that, I wondered how to combine my skills from the embedded world with Project Fugu and what is now possible on the web to create cheap, affordable hardware combined with easy-to-understand software on both sides, so people can build on that.

                                    For that particular project, I took a small USB dongle built around the nRF52840 chip. It speaks Bluetooth on one side and USB on the other, and you can program it to be basically anything on either side. Then I thought about the devices that control a computer: a mouse and a keyboard. Some people with disabilities may find it difficult to operate those devices, and I wanted to help them.

                                    The first thing I did was make sure that any operating system would see the USB dongle as a mouse. You can then control it from a native application or a web application, directly over Bluetooth. After that, I built a web application: a simple template that people can extend the way they want using web components. Thanks to that, anyone can control their computer from an Android phone with a web app that I put together in just a couple of hours.
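                                    The interview doesn’t spell out the protocol between the web app and the dongle, but as an illustration of how little data a mouse needs, a standard HID boot-protocol mouse report is only three bytes (a button bitmap plus signed X/Y deltas). Firmware like the one described could forward such a structure to the host unchanged.

```javascript
// Illustration only, not the actual SimpleMouse protocol: pack a
// standard 3-byte HID boot-protocol mouse report. Byte 0 is the
// button bitmap; bytes 1 and 2 are signed X/Y movement deltas
// (two's complement, clamped to the -127..127 range).
function mouseReport({ left = false, right = false, dx = 0, dy = 0 }) {
  const clamp = (v) => Math.max(-127, Math.min(127, v | 0));
  const report = new Int8Array(3);
  report[0] = (left ? 0x01 : 0) | (right ? 0x02 : 0); // button bitmap
  report[1] = clamp(dx); // X delta
  report[2] = clamp(dy); // Y delta
  return new Uint8Array(report.buffer); // raw bytes to send to the dongle
}
```

                                    A web app would build a report like this from whatever input it captures (touch, voice, switches) and send it over Bluetooth; the dongle would then replay it on USB, where the host sees an ordinary mouse.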

                                    Having that set up will enable anyone in the world with some web experience to build, in a matter of days, a very customized solution for anyone with a disability who wants to control their computer. The cool thing is that you can take it with you anywhere you go and use it with other devices as well. It will be the exact same experience. To me, the portability and affordability of the device are very important because people are no longer confined to using their own devices, and are no longer limited to one location.

                                    Thomas: Did you have a chance to test the device in real life?

                                    Lars: Actually, during my last trip to Mexico I discussed it with a web professional living there, and he’s now looking into the possibilities of using the device locally. Over there, specialized equipment is really expensive, whereas a USB dongle normally costs around ten US dollars, so he’s checking whether we could build local setups to try it out. But I haven’t done official trials yet here in Denmark.

                                    Thomas: Many devices designed to assist people with disabilities are really expensive. Are you planning on cooperating with any particular company and putting it into production for a fraction of the price of that expensive equipment?

                                    Lars: Yes, definitely! I’ve already been talking to a local hardware manufacturer about that. Of course, the device won’t replace all those highly specialized solutions, but it can be the first step toward building something bigger, for example using voice recognition, which is already available in web technologies. It’ll be an easy way of controlling a host device of any kind from your Android phone.

                                    Just being able to build whatever you want on the web and to use that to control any host computer opens up a lot of possibilities.

                                    Thomas: Are you releasing your Zephyr project as open source? What kind of license do you use? Are there plans to monetize the project?

                                    Lars: Yes, the solution is open source. I haven’t put a specific license on it yet, but I think Apache 2.0 would be the way to go; many major companies, including Google, use this license. When I worked on SimpleMouse, I did not think about monetizing the project. That was not my goal. But I also think it would make sense to try to put it into production in some way, and with that comes cost. The ultimate goal is to make it available; I’d love to see it implemented at a low cost and on a large scale.