Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Let’s meet the students coding their way to a better world

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

With every new challenge ahead comes a new opportunity for finding a solution. Today’s challenges, and those we will continue to face, remind us all of how vital it remains to protect our planet and the people living on it. Enter the Solution Challenge, Google’s annual contest inviting the global Google Developer Student Clubs (GDSC) community to develop solutions to real world problems utilizing Google technologies.

This year’s Solution Challenge asks participants to solve for one or more of the United Nations’ 17 Sustainable Development Goals, which are intended to promote goals such as employment for all, economic growth, and climate action.

The top 50 semi-finalists and the top 10 finalists were announced earlier this month. Now, it all comes down to Demo Day on July 28th, where the finalists will present their solutions to Google and developers all around the world, live on YouTube.

At Demo Day, our judges will review the projects, ask questions, and choose the top 3 grand prize winners! You can RSVP here to be a part of Demo Day, vote for the People’s Choice Award, and watch all the action as it unfolds live.

Ahead of the event, let’s get to know the top 10 finalists and their incredible solutions below.

The Top 10 Projects

. . .

BloodCall - Greece, Harokopio University of Athens

UN Sustainable Goals Addressed: #3: Good Health & Wellbeing

BloodCall aims to make blood donation an easier task for everyone involved by leveraging Android, Firebase, and the Google Maps SDK. It was built by Athanasios Bimpas, Georgios Kitsakis and Stefanos Togias.

“Our main inspiration was based on two specific findings: we noticed that, especially in Greece, the willingness to donate blood is significantly high, but information is not readily available. We also noticed lots of individuals trying to reach as many people as possible by sharing their (or a loved one's) need for blood on social media. So we concluded that there exists a major need for blood, especially in periods of heightened activity like summertime.”


Blossom - Canada, University of Waterloo

UN Sustainable Goals Addressed: #3: Good Health & Wellbeing, #4: Quality Education, #5: Gender Equality and Women’s Empowerment, #10: Reduced Inequalities

Blossom provides an integrated solution for young girls to access accurate and reliable menstrual education and resources, using Android, Firebase, Flutter, and Google Cloud Platform. It was built by Aditi Sandhu, Het Patel, Mehak Dhaliwal, and Jinal Rajawat.

“As all group members of this project are of South Asian descent, we know firsthand how difficult it is to talk about the female reproductive system within our families. We wanted to develop an application that would target youth so they can begin this conversation at an earlier age. Blossom allows users to learn from the safety of their own devices. Simply by knowing more about their bodies, individuals are more confident with them, thereby solving Goal 5: to achieve gender equality and empower all women and girls.”


Gateway - Vietnam, Hoa Sen University

UN Sustainable Goals Addressed: #3: Good Health & Wellbeing, #11: Sustainable Cities, #17: Partnerships

Gateway is an open COVID-19 digital check-in system: an open-source IoT solution that pairs a mobile application with an embedded system over Bluetooth. It uses Angular, Firebase, Flutter, Google Cloud Platform, TensorFlow, and Progressive Web Apps. It was built by Cao Nguyen Vo Dang, Duy Truong Hoang, Khuong Nguyen Dang, and Nguyễn Mạnh Hùng.

“Problems are still happening in our community where the support of technology is still lacking when it comes to COVID. Vaccination in our country is continuing, but we still have to register vaccination results manually, on paper. And "back to school/office" is now one of the biggest challenges for businesses and the community. Contact tracing solutions are overloaded in crowded areas. We're focused on improving the crowded situation by creating an open-source automatic check-in gateway, allowing users to interact with the system more intuitively.”


GetWage - India, G.H. Raisoni College of Engineering, Nagpur

UN Sustainable Goals Addressed: #1: No Poverty, #4: Quality Education, #8: Decent Work & Economic Growth

GetWage provides a tool to help those impacted by unemployment and unfilled positions in the local economy find and post daily wage work with ease. It uses Firebase, Flutter, Google Cloud Platform, and TensorFlow. It was built by Aaliya Ali, Aniket Singh, Neenad Sahasrabuddhe, and Shivam.

“When COVID struck the world, daily wage laborers were hit the hardest. Data from Lucknow shows how the average working days pre-Covid for most workers were around 21 days a month, which fell to nine days a month post the lockdown. In the city of Pune, average working days in a month came down from 12 to two days. All of this inspired us to do something in order to help the needy by connecting them with those looking to hire laborers and educating them.“


Isak - South Korea, Soonchunhyang University

UN Sustainable Goals Addressed: #3: Good Health & Wellbeing, #12: Responsible Consumption & Production

Isak is an application that combines jogging with trash collection to make picking up trash more impactful. It uses Firebase, Flutter, Google Cloud Platform, and TensorFlow. It was built by Choo Chang Woo, Jang Hyeon Wook, Jeong Hyeong Lee, and JeongWoo Han.

“COVID-19 has increased the time people spend at home, and disposable garbage generated by the increase in packaging and delivery orders has been growing exponentially. Our team decided to tackle both garbage reduction and exercise. We thought that if we picked up trash while jogging, we could take care of our health and the environment at the same time, and that additional features could arouse interest from users and encourage them to participate.”


SaveONE life - Kenya, Taita Taveta University

UN Sustainable Goals Addressed: #1: No Poverty, #2: Zero Hunger, #4: Quality Education, #10: Reduced Inequality

SaveONE life helps donors locate and donate goods to home orphanages in Kenya that are in need of basic items, food, clothing, and other educational resources. It's built with Android, Assistant / Actions on Google, Firebase, Google Cloud Platform, and Google Maps. It was built by David Kinyanjui, Nasubo Imelda, and Wycliff Njenga.

“We visited one of the orphanage homes near our campus and talked to the orphanage manager. He told us that their biggest challenge is food: some of the kids suffer from malnutrition because they are not getting enough food, water, clothing, and educational materials, including school fees. Our major inspiration is helping donors around our campus better know where the orphanages are, and when and how orphans can receive donations.”


SIGNify - Canada, University of Toronto, Mississauga

UN Sustainable Goals Addressed: #10: Reduced Inequalities, #4: Quality Education

SIGNify provides an interface where deaf and non-deaf people can easily understand sign language through a graphical context. It leverages Android, Firebase, Flutter, Google Cloud Platform, and TensorFlow. It was built by Kavya Mehta, Milind Vishnoi, Mitesh Ghimire, and Wentao Zhou.

“Approximately 70 million deaf people around the world use sign language for communication. These are all people that have great ideas, thoughts, and opinions that need to be heard. However, their talent and skills will be of no use if people are not able to understand what they have to say; this has led to 1 in 4 deaf people leaving a job due to discrimination. If we fail to learn sign language, we are depriving ourselves of the knowledge resources that deaf people have to provide. By learning sign language and hiring deaf people in the workspace, we are promoting equal rights and increasing employment opportunities for disabled people.”


Starvelp - Turkey, İzmir University of Economics

UN Sustainable Goals Addressed: #2: Zero Hunger

Starvelp aims to tackle the problems of food waste and hunger by enabling more ways to share local resources with those in need. It leverages Firebase, Flutter, and Google Cloud Platform. It was built by Akash Srivastava and Selin Doğa Orhan.

"We found that the prevalence of undernourishment is impacting a huge population. Prevalence of food insecurity and not being able to feed themselves and their families are related to poor financial conditions. We were inspired to build this, because in many countries, there are a large number of slum areas and many people who are in the farming sector are not able to get sufficient food. It is really shocking for us to see news about how people are getting impacted each year and have different diseases due to improper nutrition. In fact, they have to skip many meals which ultimately leads to undernourishment, and this is a big problem."


Xtrinsic - Germany, Faculty of Engineering Albert-Ludwigs-Universität Freiburg

UN Sustainable Goals Addressed: #3: Good Health & Wellbeing

Xtrinsic is an application for mental health research and therapy - it adapts your environment to your personal habits and needs. Using a wearable device and TensorFlow, the team aims to detect and help users get through their struggles throughout the day and at night with behavioral suggestions. It’s built using Android, Assistant / Actions on Google, Firebase, Flutter, Google Cloud Platform, TensorFlow, WearOS, DialogFlow, and Google Health Services. It was built by Alexander Monneret, Chikordili Fabian Okeke, Emma Rein, and Vandysh Kateryna.

“Our inspiration comes from our own experience with mental health issues. Two of our team members were directly impacted by the recently waged wars in Syria and Ukraine. And all of us have experienced mental health conditions during the pandemic. We learned through our hardships how to overcome these tough situations and stay strong and positive. We believe that with our know-how and Google technologies we can make a difference and help make the world a better place.”


Zero-zone - South Korea, Sookmyung Women's University

UN Sustainable Goals Addressed: #4: Quality Education, #10: Reduced Inequalities

Zero-zone supports active communication for, and with, the hearing impaired and helps individuals with hearing impairments practice lip reading. The tool leverages Android, Assistant / Actions on Google, Flutter, Google Cloud Platform, and TensorFlow. It was built by DoEun Kim, Hwi Min, Hyemin Song, and Hyomin Kim.

“About 39% of Korean hearing-impaired people find it difficult to learn lip-reading, even if they have enrolled in special schools. The project aims to refine lip-reading education so that the hearing impaired can learn lip-reading anytime, anywhere and communicate actively. Our tool provides equal educational opportunities for deaf users who want to practice oral speech. In addition, active communication will give the hearing impaired confidence and the power to overcome inequality caused by difficulties in communication.”


Feeling inspired and ready to learn more about Google Developer Student Clubs? Find a club near you here, and be sure to RSVP here and tune in for the livestream of the upcoming Solution Challenge Demo Day on July 28th.

ML in Action: Campaign to Collect and Share Machine Learning Use Cases

Posted by Hee Jung, Developer Relations Community Manager / Soonson Kwon, Developer Relations Program Manager

ML in Action is a virtual event to collect and share cool and useful machine learning (ML) use cases that leverage multiple Google ML products. This is the first run of an ML use case campaign by the ML Developer Programs team.

Let us announce the winners right here, right now. They showcase practical uses of ML and how ML was adapted to real-life situations. We hope these projects can spark new applied ML project ideas and provide opportunities for ML community leaders to discuss ML use cases.

The four winners of "ML in Action" are:

Detecting Food Quality with Raspberry Pi and TensorFlow

By George Soloupis, ML Google Developer Expert (Greece)

This project helps people with smell impairment by identifying food degradation. The idea came suddenly when a friend revealed that he has no sense of smell due to a bike crash. Even after attending a lot of IT meetings, he found the issue unaddressed, and the power of machine learning was something we could rely on. Hence the goal: to create a prototype that is affordable, accurate, and usable by people with minimal knowledge of computers.

The basic setup of the food quality detection is this: a Raspberry Pi collects data from air sensors over time during the food degradation process. This single-board computer was very useful! With the GUI, it’s easy to execute Python scripts and see the results on screen. Eight sensors collected data on chemical compounds such as NH3, H2S, O3, CO, and CH4. After operating the prototype for one day, categories were set based on the results: the first hours after the food left the refrigerator were labeled “good” and the rest “bad”. The dataset was then used to train and evaluate a model with TensorFlow, and inference was done with TensorFlow Lite.

Since there were no open-source prototypes out there with similar goals, it was a complete adventure. Sensors on PCBs and standalone sensors were used to get the best mixture of accuracy, stability, and sensitivity. A logic level converter minimizes the use of resistors, and capacitors were added for stability. And the result: a compact prototype! The Raspberry Pi attaches directly to a board with slots for eight sensors, designed so that sensors can be replaced at any time and users can experiment with different ones. Inference results are sent over Bluetooth to a mobile device, so in the end a user with no advanced technical knowledge can see food quality in an app built on Android (Kotlin).
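The time-based labeling scheme described above (how long the food has been out of the refrigerator determines the class) can be sketched in a few lines. The cutoff, function names, and simulated readings below are illustrative assumptions, not the project's actual code:

```python
FRESH_HOURS = 4  # assumed cutoff: the first hours out of the fridge count as "good"

def label_reading(elapsed_hours):
    """Label one air-sensor sample by time since the food left the fridge."""
    return "good" if elapsed_hours < FRESH_HOURS else "bad"

def build_dataset(samples):
    """samples: list of (elapsed_hours, eight_sensor_values) pairs collected
    during a degradation run; returns (features, labels) for training."""
    features = [values for _, values in samples]
    labels = [label_reading(hours) for hours, _ in samples]
    return features, labels

# Two simulated readings, one early and one late in a degradation run:
X, y = build_dataset([(1.0, [0.12] * 8), (20.0, [0.87] * 8)])
print(y)  # → ['good', 'bad']
```

A dataset labeled this way could then be fed to a small TensorFlow classifier and converted with TensorFlow Lite for on-device inference, as the post describes.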

Reference: Github, more to read

* This project is supported by Google Impact Fund.


Election Watch: Applying ML in Analyzing Elections Discourse and Citizen Participation in Nigeria

By Victor Dibia, ML Google Developer Expert (USA)

This project explores the use of GCP tools in ingesting, storing and analyzing data on citizen participation and election discourse in Nigeria. It began on the premise that the proliferation of social media interactions provides an interesting lens to study human behavior, and ask important questions about election discourse in Nigeria as well as interrogate social/demographic questions.

It is based on data collected from Twitter between September 2018 and March 2019 (tweets geotagged to Nigeria and tweets containing election-related keywords). Overall, the data set contains 25.2 million tweets and retweets, 12.6 million original tweets, 8.6 million geotagged tweets, and 3.6 million tweets labeled (using an ML model) as political.

By analyzing election discourse, we can learn a few important things including - issues that drive election discourse, how social media was utilized by candidates, and how participation was distributed across geographic regions in the country. Finally, in a country like Nigeria where updated demographics data is lacking (e.g., on community structures, wealth distribution etc), this project shows how social media can be used as a surrogate to infer relative statistics (e.g., existence of diaspora communities based on election discussion and wealth distribution based on device type usage across the country).

Data for the project was collected using python scripts that wrote tweets from the Twitter streaming api (matching certain criteria) to BigQuery. BigQuery queries were then used to generate aggregate datasets used for visualizations/analysis and training machine learning models (political text classification models to label political text and multi class classification models to label general discourse). The models were built using Tensorflow 2.0 and trained on Colab notebooks powered by GCP GPU compute VMs.
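As a rough sketch of that ingestion step, a script might filter and flatten each incoming tweet into a row before streaming it to BigQuery. The keyword list, field names, and table ID below are illustrative, not the project's actual schema:

```python
ELECTION_KEYWORDS = {"election", "vote", "inec", "ballot"}  # illustrative keywords

def tweet_to_row(tweet):
    """Flatten a tweet dict into a BigQuery-friendly row, flagging election-related text."""
    text = tweet.get("text", "")
    return {
        "id": tweet["id"],
        "text": text,
        "created_at": tweet.get("created_at"),
        "geo": tweet.get("geo"),  # None for non-geotagged tweets
        "is_election_related": any(k in text.lower() for k in ELECTION_KEYWORDS),
    }

# With a google-cloud-bigquery client, a batch could then be streamed via
#   client.insert_rows_json("project.dataset.tweets", [tweet_to_row(t) for t in batch])
```

Keyword matching like this is only a first-pass filter; the finer-grained political labeling described above came from a trained classification model.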

References: Election Watch website, ML models descriptions one, two


Bioacoustic Sound Detector (To identify bird calls in soundscapes)

By Usha Rengaraju, TFUG Organizer (India)

 

The “Visionary Perspective Plan (2020-2030) for the conservation of avian diversity, their ecosystems, habitats and landscapes in the country,” proposed by the Indian government to help in the conservation of birds and their habitats, inspired me to take up this project.

Extinction of bird species is an increasing global concern as it has a huge impact on food chains. Bioacoustic monitoring can provide a passive, low-labor, and cost-effective strategy for studying endangered bird populations. Recent advances in machine learning have made it possible to automatically identify bird songs for common species with ample training data. This innovation makes it easier for researchers and conservation practitioners to accurately survey population trends, regularly and more effectively evaluate threats, and adjust their conservation actions.

This project is an implementation of a bioacoustic monitor using Masked Autoencoders in TensorFlow and Cloud TPUs. The project will be presented as a browser-based application using Flask. The deep learning prototype can process continuous audio data and then acoustically recognize the species.
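Before a model can score a soundscape, the continuous audio stream has to be split into fixed-length, overlapping windows, each scored independently for bird calls. A minimal framing sketch (the frame length and hop size below are illustrative, not the project's actual parameters):

```python
def frame_audio(samples, frame_len, hop):
    """Split a continuous sample stream into fixed-length, overlapping frames,
    each of which can be fed to the recognizer independently."""
    return [samples[start:start + frame_len]
            for start in range(0, len(samples) - frame_len + 1, hop)]

# A 10-sample toy stream, 4-sample frames, 50% overlap:
frames = frame_audio(list(range(10)), frame_len=4, hop=2)
print(len(frames))  # → 4
```

In practice each frame would be converted to a spectrogram before inference; overlapping frames reduce the chance of a call being split across a window boundary.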

The goal of the project when I started was to build a basic prototype for monitoring of rare bird species in India. In future I would like to expand the project to monitor other endangered species as well.

References: Kaggle Notebook, Colab Notebook, Github, the dataset and more to read


Persona Labs' Digital Personas

By Martin Andrews and Sam Witteveen, ML Google Developer Experts (Singapore)

Over the last 3 years, Red Dragon AI (a company co-founded by Martin and Sam) has been developing real-time digital “Personas”. The key idea is to enable users to interact with life-like Personas in a format similar to a Zoom call: speaking to them and seeing them respond in real time, just as a human would. Naturally, each Persona can be tailored to the tasks required (by adjusting the appearance, voice, and ‘motivation’ of the dialog system behind the scenes and their corresponding backend APIs).

The components required to make the Personas work effectively include dynamic face models, expression generation models, Text-to-Speech (TTS), dialog backend(s) and Speech Recognition (ASR). Much of this was built on GCP, with GPU VMs running the (many) Deep Learning models and combining the outputs into dynamic WebRTC video that streams to users via a browser front-end.

Much of the previous years’ work focused on making the Personas’ faces behave in a life-like way, while making sure that the overall latency (i.e. the time between the Persona hearing the user ask a question and its lips starting the response) is kept low, and that the rendering of individual images matches the required 25 frames-per-second video rate. As you might imagine, there were many Deep Learning modeling challenges, coupled with hard engineering issues to overcome.

In terms of backend technologies, Google Cloud GPUs were used to train the Deep Learning models (built using TensorFlow/TFLite, PyTorch/ONNX & more recently JAX/Flax), and the real-time serving is done by Nvidia T4 GPU-enabled VMs, launched as required. Google ASR is currently used as a streaming backend for speech recognition, and Google’s WaveNet TTS is used when multilingual TTS is needed. The system also makes use of Google’s serverless stack with CloudRun and Cloud Functions being used in some of the dialog backends.

Visit the Persona Labs website (linked below) and you can see videos that demonstrate several aspects: what the Personas look like, their multilingual capability, potential applications, and more. However, the videos can’t really demonstrate what the interactivity ‘feels like’. For that, it’s best to get a live demo from Sam and Martin - and see what real-time Deep Learning model generation looks like!

Reference: The Persona Labs website

From Developer to Teacher, How a Computer Science Professor Found Career Support with Google Developer Groups

Posted by Kübra Zengin, North America Regional Lead, Google Developers

A Path to Programming

“I was hooked from the start,” says Jennifer Bailey about programming. Always interested in the way systems work, Jennifer, now an educator in Colorado, found her path to programming in an unconventional way. She first earned a General Educational Development degree, otherwise known as a “GED” in the United States, from Aims Community College, when she was only 15 years old.

Ever a quick learner with the ambition to excel, she then secured associate’s, bachelor’s, and master’s degrees in Applied Science. With degrees in hand, she taught herself C# while working at a local firm as a software developer building desktop applications.

When one of her mentors from Aims Community College was retiring, the school recognized Jennifer’s programming expertise and hired her to teach computer science in 2011. The administration then asked her to create the college’s certificate in mobile application development from scratch. To build out a curriculum for her new assignment, she needed to find some inspiration. As Jennifer sought out resources to curate the content for the college’s new program in mobile development, she found a local Google Developer Group (GDG), an organization where local developers came together to discuss cutting-edge programming topics.

Finding a Google Developer Group in Northern Colorado

She attended her first event with the group that same week. At the event, the group’s leader was teaching attendees to build Android apps, and other developers taught Jennifer how to use GitHub.

“I went to that in-person event, and it was everything I was hoping it would be,” Jennifer says. “I was just blown away that I was able to find that resource at exactly the time when I needed it for my professional development, and I was really happy because I had so much fun.”

The community of welcoming developers that Jennifer found in GDG drew her in, and for the first time at a technical networking event like this one, she felt comfortable meeting new people. “That initial event was the first time I felt like I had met actual friends, and I’ve been involved with GDG ever since,” she says.

A Life-Changing Community

As time progressed, Jennifer started attending GDG events more often, and eventually offered the meeting space at Aims Community College where the group could gather. After she made the offer, the group's organizers invited her to become a co-leader of the group. Fast-forward to the present, and her leadership role has led to numerous exciting opportunities, like attending Google I/O and meeting Google developers from all over the world.

“By participating in GDG, I ended up being able to attend Google I/O,” says Jennifer. “This community has had a massive impact in my life.”

Ongoing Education

Jennifer’s local GDG provides Android support that helps other learners, while also informing her teaching of computer science subjects and the college’s mobile application development certificate.

“What keeps me engaged with Google technology, especially with Android, is all of the updates, changes, new ideas and new technology,” she says.

Jennifer notes that she appreciates the Android ecosystem’s constantly evolving technology and open source tools.

  • After becoming fascinated with Android, Jennifer discovered that the more time she spent learning and delving into Android, the more she learned and gained expertise that she could apply to other platforms.
  • Jennifer’s Android expertise has also led to her becoming an author for Ray Wenderlich, where she contributed to Saving Data on Android and Android Accessibility by Tutorials, as well as a video course on building your first app with Android and Kotlin. “I like Jetpack Compose a lot, and I’m very interested in Android accessibility, so I can’t wait to update that book,” she says.
  • She also served as editor on an article about lazy composables for lists.

Positive Career Impact

In Jennifer’s view, involvement with Google Developer Groups positively impacted her career by exposing her to a local group of developers with whom she is deeply connected, providing resources and instruction on Android, and providing her with a leadership opportunity.

“I have met such a diverse sampling of people in Google Developer Groups, from all different industries, with all different levels of experience–from students, self-taught, to someone who’s been in technology longer than I have,” Jennifer says. “You never know who you will meet out there because GDG is filled with interesting people, and you never know what opportunities you will find by mixing with those people and comparing notes.”

If you’re looking to grow as a developer, find a GDG group near you. Learn more about Google Developer Groups and find a community near you!

Using research to make code review more equitable

Posted by Emerson Murphy-Hill, Research Scientist, Central Product Inclusion, Equity, and Accessibility

At Google, we often study our own software development work as a means to better understand and make improvements to our engineering practices. In a study that we recently published in Communications of the ACM, we describe how code review pushback varies depending on an author’s demographics. Such pushback, defined as “the perception of unnecessary interpersonal conflict in code review while a reviewer is blocking a change request”, turns out to affect some developers more than others.

The study looked at pushback during the code review process and, in short, we found that:

  • Women faced 21% higher odds of pushback than men
  • Black+ developers faced 54% higher odds than White+ developers
  • Latinx+ developers faced 15% higher odds than White+ developers
  • Asian+ developers faced 42% higher odds than White+ developers
  • Older developers faced higher odds of pushback than younger developers

We estimate that this excess pushback costs Google more than 1,000 engineer hours per day – something we’re working to significantly reduce, along with unconscious bias during the review process, through solutions like anonymous code review.

Last year, we explored the effectiveness of anonymous code review by asking 300 developers to do their code reviews without the author’s name at the top. Through this research, we found that code review times and review quality appeared consistent with and without anonymous review. We also found that, for certain types of review, it was more difficult for reviewers to guess the code’s author. To give you an idea, here’s what anonymous code review looks like today at Google in the Critique code review tool:

In the screenshot above, changelist author names are replaced by anonymous animals, like in Google Docs, helping reviewers focus more on the code changes and less on the people making those changes.

At Google, we strive to ensure there is equity in all that we do, including in our engineering processes and tools. Through continued experimentation with anonymous code review, we’re hoping to reduce gaps in pushback faced by developers from different demographic groups. And through this work, we want to inspire other companies to take a hard look at their own code reviews and to consider adopting anonymous author code review as part of their process as well.

In the long run, we expect that increasing equity in developers’ experience will help Google – and our industry – make meaningful progress towards an inclusive development experience for all.

#WeArePlay | Discover the people building apps & games businesses

Posted by Patricia Correa, Director, Global Developer Marketing

Over 2.5 billion people come to Google Play every month to find apps and games created by millions of businesses from all over the world.

#WeArePlay celebrates you: the global community of people behind these businesses.

Each one of you creating an app or game has a different story to tell. Some of you have been coders since childhood, others are newbies who got into tech later in life. Some of you are based in busy cities, others in smaller towns. No matter who you are or how different your story is, you all have one thing in common - you have the passion to turn an idea into a business impacting people all over the world.

Now, and over the coming months, #WeArePlay celebrates you by sharing your stories.



We are kicking off the series with the story of Yvonne and Alyssa, the London-based mother and daughter duo who created Frobelles - a dress up game increasing representation of African and Caribbean hair styles.



You can now also discover the stories of friends Ronaldo, Carlos and Thadeu from Hand Talk Translator (Brazil - my home country!), art lover Zuzanna from DailyArt (Poland) and travel-loving couple Ina & Jonas from TravelSpend (Germany).





To all apps and games businesses - thank you for being a part of the Google Play community. Your dedication and ambition are helping millions of people learn, connect, relax, exercise, find jobs, give back, laugh, have fun, escape to fantasy lands, and so much more.

Read more and stay tuned for many more stories at g.co/play/weareplay


How useful did you find this blog post?

Migrating from App Engine Memcache to Cloud Memorystore (Module 13)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The previous Module 12 episode of the Serverless Migration Station video series demonstrated how to add App Engine Memcache usage to an existing app that has transitioned from the webapp2 framework to Flask. Today's Module 13 episode continues its modernization by demonstrating how to migrate that app from Memcache to Cloud Memorystore. Moving from legacy APIs to standalone Cloud services makes apps more portable and provides an easier transition from Python 2 to 3. It also makes it possible to shift to other Cloud compute platforms should that be desired or advantageous. Developers benefit from upgrading to modern language releases and gain added flexibility in application-hosting options.

While App Engine Memcache provides a basic, low-overhead, serverless caching service, Cloud Memorystore "takes it to the next level" as a standalone product. Rather than a proprietary caching engine, Cloud Memorystore gives users the option to select from a pair of open source engines, Memcached or Redis, each of which provides additional features unavailable from App Engine Memcache. Cloud Memorystore is typically more cost-efficient at scale, offers high availability, provides automatic backups, etc. On top of this, one Memorystore instance can be used across many applications, and it incorporates improvements to memory handling, configuration tuning, etc., gained from experience managing a huge fleet of Redis and Memcached instances.

While Memcached is more similar to Memcache in usage and features, Redis has a much richer set of data structures that enable powerful application functionality if utilized. Redis has also been recognized as the most loved database by developers in Stack Overflow's annual developer survey, and it's a great skill to pick up. For these reasons, we chose Redis as the caching engine for our sample app. However, if your apps' usage of App Engine Memcache is deeper or more complex, a migration to Cloud Memorystore for Memcached may be a better option as a closer analog to Memcache.

Featured video: Migrating to Cloud Memorystore for Redis

Performing the migration

The sample application registers individual web page "visits," storing visitor information such as IP address and user agent. In the original app, the most recent visits are cached in Memcache for an hour and served from the cache if the same user repeatedly refreshes their browser during that period; caching is one way to counter this abuse. A new visitor or an expired cache results in a new visit being registered as well as the cache being updated with the most recent visits. Such functionality must be preserved when migrating to Cloud Memorystore for Redis.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the "before" version, you can see how the most recent visits are cached in Memcache. After the migration, the underlying caching infrastructure has been swapped out in favor of Memorystore (via language-specific Redis client libraries). For this migration we chose Redis version 5.0; we recommend the latest versions (5.0 and 6.x at the time of this writing), as the newest releases offer additional performance benefits, availability fixes, and more. In the code snippets below, notice how the calls to the two caching systems are nearly identical. The bolded lines represent the migration-affected code managing the cached data.
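As a rough sketch of that before/after (the function names and structure here are illustrative, not the actual sample code), the cache interaction might look like the following with the bundled Memcache client versus a redis-py-style client; note how similar the two call sites are:

```python
import json

HOUR = 3600  # cache lifetime in seconds

def store_visit_memcache(memcache, visitor):
    """BEFORE: App Engine bundled Memcache (stores Python objects directly)."""
    visits = memcache.get('visits') or []
    visits.insert(0, visitor)
    memcache.set('visits', visits, time=HOUR)  # expiration via `time`
    return visits

def store_visit_redis(redis_client, visitor):
    """AFTER: Cloud Memorystore via redis-py (values are strings, so serialize)."""
    raw = redis_client.get('visits')
    visits = json.loads(raw) if raw else []
    visits.insert(0, visitor)
    redis_client.set('visits', json.dumps(visits), ex=HOUR)  # expiration via `ex`
    return visits
```

The only migration-affected lines are the get/set calls: Memcache accepts Python objects and a `time` expiration, while Redis stores strings and takes `ex`; the surrounding application logic is unchanged.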

Switching from App Engine Memcache to Cloud Memorystore for Redis

Wrap-up

The migration covered here begins with the Module 12 sample app ("START"). Migrating the caching system to Cloud Memorystore, along with other requisite updates, results in the Module 13 sample app ("FINISH"), plus an optional port to Python 3. To help prepare for your own migrations, practice this one by following the codelab by hand while following along in the video.

While the code migration demonstrated appears straightforward, the most critical change is that Cloud Memorystore requires dedicated server instances. For this reason, a Serverless VPC Access connector is also needed to connect your App Engine app to those Memorystore instances. Furthermore, neither Cloud Memorystore nor Serverless VPC Access is a free service, and neither has an "Always Free" tier quota. Before moving forward with this migration, check the pricing documentation for Cloud Memorystore for Redis and Serverless VPC Access to weigh the costs before making a commitment.

One key development that may affect your decision: In Fall 2021, the App Engine team extended support of many of the legacy bundled services like Memcache to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache even when upgrading to 3.x so long as you retrofit your code to access bundled services from next-generation runtimes.

A move to Cloud Memorystore and today's migration techniques will be here if and when you decide this is the direction you want to take for your App Engine apps. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we plan to cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Reach global markets as a Recommended for Google Workspace app

Posted by Elena Kingbo, Program Manager, Google Workspace

Today we announced our 2022 Recommended for Google Workspace apps. This program offers a distinct way for third-party developers to better reach Google Workspace users and attract new customers to their apps. So, for those developers who may be interested in it in the future, we wanted to walk through the basics of what the program is and how to apply for it.

What is the Google Workspace Marketplace?

The Google Workspace Marketplace is the first place Google Workspace administrators and users look when they want to extend or enhance their Google Workspace experience. The Marketplace can be accessed within most first-party Google Workspace apps, including Gmail, Drive, Docs, Sheets, Slides, Forms, Calendar, and Classroom, as well as at workspace.google.com/marketplace.

Launch Marketplace from your favorite Google Workspace app by clicking the “+”.


Developers who want to build and deploy apps to the Marketplace can either use their own preferred coding language or leverage Google Apps Script, our serverless platform. You can learn more about building apps and publishing them to the Marketplace in our developer documentation.

What is the Recommended for Google Workspace program?

The Recommended for Google Workspace program identifies and promotes a select number of Google Workspace applications that are secure, reliable, well-integrated with Google Workspace, and loved by users.

Partners who submit their apps will be evaluated on the quality of their solution, their strategic investment in Google Workspace integrations, and their security and privacy posture. In addition, all partners will be required to complete a third-party security assessment in the final stage of the evaluation. You can sign up for our Google Workspace developers newsletter to be notified when the next application window opens.

What it means to be a Recommended app

Google Workspace customers are often looking for high-quality, secure apps they can install to enhance their Workspace experience. Since Recommended apps have met our highest security and reliability standards, they are the first apps we recommend to customers and among the first apps users see when they visit the Marketplace. Recommended partners will also receive new and enhanced benefits, including technical advisory services and early access to APIs.

There have been more than 4.8 billion app installs on the Marketplace. These apps are an integral part of the Google Workspace experience, and users are continually looking for new ways to extend the value of Google Workspace. Creating a Google Workspace app is a fantastic opportunity for innovative developers interested in enhancing the Google Workspace experience. And, for those developers who truly want to set themselves apart as a trusted app on the Marketplace, the Recommended for Google Workspace program offers a unique way to reach new customers.

Explore our Recommended for Google Workspace apps on the Google Workspace Marketplace.

Helping Developers Create Meaningful Voice Interactions with Android


Posted by Rebecca Nathenson, Director, Product Management

As we recently announced at I/O, we’re investing in new ways to make Google Assistant your go-to conversational helper for everyday tasks. And we couldn’t do that without a rich community of developers. While Conversational Actions were an excellent way to experiment with voice, the ecosystem has evolved significantly over the last 5 years and we’ve heard some important feedback: users want to engage with their favorite apps using voice, and developers want to build upon their existing investments in Android.

In response to that feedback, we’ve decided to focus our efforts on making App Actions with Android the best way for developers to create deeper, more meaningful voice-forward experiences. As a result, we will turn down Conversational Actions one year from now, in June 2023.

Improving voice-forward experiences

Whether someone asks Assistant to start a workout, order food, or schedule a grocery pickup, we know users are looking for ways to get things done more naturally using voice. To allow developers to integrate those helpful voice experiences into existing Android content more easily – without having to build from scratch – we’re committed to working with them to build App Actions with Android. This will give users more ways to engage with an app’s content – like voice queries and proactive suggestions – and access the app features they already know and love.

We’re continuing to expand the reach of App Actions in the following ways:

  • Integrating voice capabilities across Android devices such as mobile, auto, wearables and other devices in the home;
  • Bringing more traffic without more development work (e.g., Assistant can now direct users to apps even when queries don’t mention an app name);
  • Driving users to the app’s Play Store page if they don’t have the app installed yet; and
  • Surfacing in ‘All Apps’ search for Pixel 6 users.

App Actions not only make your apps easier to discover; you can offer deeper voice experiences by allowing users to simply ask for what they need in their queries. Moreover, we’ll continue investing in all of the popular Assistant experiences users love, like Timers, Media, Home Automation, Communications, and more.

Supporting our developers

We know that these changes aren’t easy, which is why we’re giving developers a year to prepare for the turndown of Conversational Actions. We’re here to help you navigate this transition with helpful resources.

Building the future together

Looking ahead, we envision a platform that is intuitive, natural, and voice-forward – and one that allows developers to leverage the entire Android ecosystem of devices so they can easily reach more users. We’re always looking to improve the Assistant experience and we’re confident that App Actions is the best way to do that. We’re grateful for all you’ve done to build the Google Assistant ecosystem over the past 5 years and we’re here to help navigate the changes as we continue to make it even better. We’re excited about what lies ahead and we’re grateful to build it together.

Grow your skills with Coding Practice with Kick Start

Posted by Julia DeLorenzo, Program Manager, Coding Competitions

Kick Start is one of Google’s online coding competitions, offering programmers of all skill levels the opportunity to hone their skills through a series of online rounds hosted throughout the year.

If you’re new to coding competitions and not sure where to start, join us for Coding Practice with Kick Start! It offers developers of all skill levels the chance to practice competitive programming problems on their own time, without the pressure of a scoreboard or a timed round. These practice sessions are not official Kick Start rounds, but they are a great way to hone your coding skills, connect with a global community, prepare for an interview, and, most importantly, have fun!

Work your way through fun algorithmic and mathematical problems on the Kick Start platform in four-day practice sessions throughout the 2022 Kick Start season (see full schedule here).

There are two more Coding Practice with Kick Start sessions this year:

  • Coding Practice Session #2: June 27, 2022 (16:00 UTC) - July 1, 2022 (3:00 UTC)
  • Coding Practice Session #3: August 29, 2022 (16:00 UTC) - September 2, 2022 (3:00 UTC)

Here’s what our team of Googlers working behind the scenes to create the problems and walk-throughs have to say about the program, including advice for this year’s participants:

Sarah Young, Software Engineer

What advice would you give to beginning coders?

When first thinking about how to solve a problem, forget about the coding and try to think about it as if you only needed to explain how to do it to someone. Go back and reread the problem to make sure you covered everything. Then you can start breaking it down into logical pieces, and it'll make everything a lot easier!

Why is Coding Practice with Kick Start/the Kick Start competition such an excellent tool for growing your skills and practicing coding?

Kick Start is a great way to challenge yourself with fun problems in a competitive but not stressful environment, whether you're a beginner or have done competitive programming in the past!

Federico Brubacher, Software Engineer

What advice would you give to beginning coders?

My advice to new coders comes in two parts:

The first is to embrace the learning process. Learning a new skill is hard; it's a rollercoaster in which one day you are extremely productive and happy, and the next you are stuck and bored. If you accept that there will be bad days and stick with it, you will start making progress on more difficult programming tasks.

The second is to practice pattern recognition. When learning increasingly difficult things, it helps to start by associating the thing you are trying to learn or solve with things you have seen in the past. This makes learning easier because you are now free to focus on the new parts of the problem in front of you rather than starting from scratch. The hard part is doing the work to distill what you learn every day into patterns.

Why is Coding Practice with Kick Start/the Kick Start competition such an excellent tool for learning and practicing coding?

If you look at my previous answer, you can see that pattern recognition is huge when learning to code. Practicing on Kick Start is all about pattern matching and thinking through a problem thoroughly, armed only with your previous experience.

As you work through the problems, you will see your arsenal of tools (patterns) for solving problems expand. Then you will use these patterns to solve new problems and continue learning and improving. It is addictive, but the good kind!

Kata Brányiné Sulák, Software Engineer

What advice would you give to beginning coders?

Coding is about solving problems: assembling general algorithm and data structure pieces into a working solution. Don't try to learn the fine details of a specific programming language before jumping in; just use the language's syntax to describe and document the steps you want to take. Getting the code actually running is the easier part (even if you initially have to google error messages or unexpected behaviors a lot).

Why is Coding Practice with Kick Start/the Kick Start competition such an excellent tool for growing your skills and practicing coding?

Kick Start's problem sets are diverse, so coders encounter a wide range of algorithms and data structures, which makes the rounds both instructive and fun. The problems are mostly formulated as real-life scenarios, so contestants must translate them into IT concepts, a core part of a developer's work. The input is simplified and guaranteed to be correct, so coders can concentrate on the abstract problem itself rather than writing boilerplate error handling. And each analysis is formulated as a list of hints, giving you a second chance to create a solution in practice mode and still earn the accomplishment.

General Availability of App Actions using the Android Shortcuts framework

Posted by Jessica Dene Earley-Cha, Developer Relations Engineer

We’re pleased to announce the General Availability (GA) of App Actions using shortcuts.xml, part of the Android shortcuts framework. With the Shortcuts API, it has never been easier to add a layer of voice interaction to your apps, using the Android tooling, platform, and features you already know. This release means your shortcuts.xml implementations are now fully supported through our support channels.

App Actions let users launch and control Android apps with their voice, using Google Assistant. At Google I/O 2021, we released a beta of App Actions that enabled developers to implement App Actions using the Android shortcuts framework, lowering the development cost of voice enabling apps by leveraging a common framework many developers were already familiar with. Throughout the beta period, we listened to developer feedback and made several improvements to the API, developer tooling, and Assistant comprehension and accuracy of voice commands.

Over the past year we’ve added new features, like the ability to fulfill user voice requests using Android widgets, and in-app voice control. The set of built-in intents supported by App Actions has also expanded to include travel and parking intents suited for use in Android for Cars apps.

See how Strava implemented App Actions to provide a voice-forward experience to their users.

I’m new! How do I get started?

New App Actions developers are encouraged to try the App Actions learning pathway. This learning pathway is a complete training course that prepares new and seasoned Android developers to design and implement voice-enabled app experiences. After completing the pathway, you’ll earn the App Actions with Google Assistant badge on your developer profile.

Check out the latest App Actions news from I/O

We are excited to have had several sessions focusing on App Actions this year at Google I/O.

My app already uses App Actions. How do I stay supported?

Developers with existing App Actions implementations that use actions.xml are encouraged to migrate their implementation before the end of the support period on March 31st, 2023.

Implementations that leveraged shortcuts.xml during the beta period will continue to work as they have been, without any changes required, until March 31st, 2023.

I have more questions!

There are several ways to get in touch with the App Actions team and interact with the developer community.