Posted by Arjun Dayal, Group Product Manager, Google Play Games
In December, we announced that Google Play Games will be coming to PCs. As part of our broader goal to make our products and services work better together, this product strives to meet players where they are and give them access to their games on as many devices as possible. Today we're excited to announce that we’re opening sign-ups for Google Play Games as a beta in Korea, Taiwan, and Hong Kong.
Users participating in the beta can play a catalog of Google Play games on their Windows PC via a standalone application built by Google. We’re excited to announce that some of the most popular mobile games in the world will be available at launch, including Mobile Legends: Bang Bang, Summoners War, State of Survival: The Joker Collaboration, and Three Kingdoms Tactics, which delight hundreds of millions of players globally each month.
This product brings the best of Google Play to more laptops and desktops, enabling immersive and seamless gameplay sessions between a phone, tablet, Chromebook, and Windows PC. Players can easily browse, download, and play their favorite mobile games on their PCs, while taking advantage of larger screens with mouse and keyboard inputs. No more losing your progress or achievements when switching between devices; everything just works with your Google Play Games profile! Play Points can also be earned for Google Play Games activity on PCs.
We’re thrilled to expand our platform for players to enjoy their favorite Android games even more. To sign up for future announcements, or to access the beta in Korea, Taiwan, and Hong Kong, please go to g.co/googleplaygames. If you’re an Android developer looking to learn more about Google Play Games, please express interest on our developer site. We’ll have more to share on future beta releases and regional availability soon.
Windows is a trademark of the Microsoft group of companies. Game titles may vary by region.
Posted by Erica Hanson, Global Senior Program Manager, Google Developer Student Clubs
Have you ever thought about building an application or tool that solves a problem your community faces? Or perhaps you’ve felt inspired to build something that can help improve the lives of those you care about. The year ahead brings more opportunities for helping each other and giving back to our communities.
With that in mind, we invite students around the world to join the Google Developer Student Clubs 2022 Solution Challenge and solve for one of the United Nations’ Sustainable Development Goals using Google technologies!
About the United Nations’ Sustainable Development Goals
Created by the United Nations in 2015 to be achieved by 2030, the 17 Sustainable Development Goals (SDGs) agreed upon by all 193 United Nations Member States aim to end poverty, ensure prosperity, and protect the planet.
If you’re new to the Solution Challenge, it is an annual competition that invites university students to develop solutions for real-world problems using one or more Google products or platforms.
This year, see how you can use Android, Firebase, TensorFlow, Google Cloud, Flutter, or any of your favorite Google technologies to promote employment for all, economic growth, and climate action, by building a solution for one or more of the UN Sustainable Development Goals.
What winners of the Solution Challenge receive
Participants will receive specialized prizes at different stages:
Top 50 teams - Receive customized mentorship from Googlers and experts to take solutions to the next level, a branded T-shirt, and a certificate.
Top 10 finalists - Receive additional mentorship, a swag box, and the opportunity to showcase solutions to Googlers and developers all around the world during the virtual 2022 Solution Challenge Demo Day live on YouTube.
Contest Finalists - In addition to the swag box, each individual from the additional seven recognized teams will receive a cash prize of $1,000. Winnings for each qualifying team will not exceed $4,000.
Top 3 winners - In addition to the swag box, each individual from the top 3 winning teams will receive a cash prize of $3,000 and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.
How to get started on the Solution Challenge
There are four main steps to joining the Solution Challenge and getting started on your project:
Create a demo and submit your project by March 31, 2022.
Resources from Google for Solution Challenge participants
Google will provide Solution Challenge participants with various resources to help students build strong projects for their contest submission.
Live online sessions with Q&As
Mentorship from Google, Google Developer Experts, and the Google Developer Student Club community
Curated codelabs designed by Google Developers
Access to Design Sprint guidelines developed by Google Ventures
and more!
When are winners announced?
Once all the projects are submitted by the March 31st, 2022 deadline, judges will evaluate and score each submission from around the world using the criteria listed on the website.
From there, winning solutions will be announced in three rounds.
Round 1 (April): The Top 50 teams will be announced.
Round 2 (June): After the top 50 teams submit their new and improved solutions, 10 finalists will be announced.
Round 3 (July): In the finale, the top 3 grand prize winners will be announced live on YouTube during the 2022 Solution Challenge Demo Day.
With a passion for building a better world, savvy coding skills, and a little help from Google technology, we can’t wait to see the solutions students create.
Learn more and sign up for the 2022 Solution Challenge here.
Posted by Nir Kalush, Dvir Kalev, Chen Yoveg, Elad Ben-David
Ads Review Tool
This tool flags (and optionally deletes) policy violating ads across your accounts. Advertisers can learn from the output to ensure their ads are compliant with Google Ads Policies at all times.
Business Challenge:
Advertisers operating at scale need a solution to holistically review policy-violating ads across their accounts so they can ensure compliance with Google Ads policies. As Google introduces more policies and enforcement mechanisms, advertisers need to keep checking their accounts to ensure they remain compliant.
Solution Overview:
“Disapproved Ads Auditor” is a tool that enables advertisers to review at scale all disapproved ads across their Google Ads accounts. This view allows advertisers to proactively audit their accounts, analyze the ad disapprovals holistically, and identify learnings to reduce the submission of potentially policy-violating ads.
The tool is based on a Python script, which can be run in either of the following modes:
“Audit Mode” - exports an output of disapproved ads across your accounts.
“Remove Mode” - deletes currently disapproved ads and logs details.
There are a few output files (see here), which are saved locally under the “output” folder, and there is an optional feature to export to BigQuery for further data analysis (the “Disapproved Ads Auditor” dataset).
The “Disapproved Ads Auditor” tool automates auditing ads across your accounts to give you insight into non-compliant ads. You can use the learnings from the output to ensure ads are compliant with Google Ads policies and to avoid creating non-compliant ads in the future. Moreover, you can optionally remove disapproved ads.
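As an illustration of the two modes, here is a minimal sketch of the audit/remove decision logic. The real tool works through the Google Ads API; the plain dictionaries and field names below are hypothetical stand-ins for API rows, not the tool’s actual data model.

```python
# Hypothetical sketch of the "Disapproved Ads Auditor" flow: plain dicts
# stand in for Google Ads API rows, and field names are illustrative only.
def audit_disapproved_ads(ads, remove=False):
    """Return disapproved ads; optionally mark them removed ("Remove Mode")."""
    flagged = [ad for ad in ads if ad["approval_status"] == "DISAPPROVED"]
    if remove:
        for ad in flagged:
            # The real tool would issue a remove mutate operation here.
            ad["status"] = "REMOVED"
    return flagged

ads = [
    {"id": 1, "approval_status": "APPROVED", "status": "ENABLED"},
    {"id": 2, "approval_status": "DISAPPROVED", "status": "ENABLED"},
]

# "Audit Mode": flag only, leave the ads untouched.
flagged = audit_disapproved_ads(ads)
print([ad["id"] for ad in flagged])  # → [2]
```

Running the same function with `remove=True` corresponds to “Remove Mode”: the flagged ads are additionally marked for deletion, which is why the tool logs their details before removing them.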
Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil
Acquiring high-value customers using predicted lifetime value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine-learning-processed signals… All of those marketing scenarios require analyzing first-party data, performing predictions on that data, and activating the results in marketing platforms like Google Ads as frequently as possible to keep the data fresh.
Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented, and cost-efficient ETL and prediction pipeline. These pipelines are very similar regardless of the use case, and it’s very easy to fall into reinventing the wheel every time, or manually copying and pasting structural code, which increases the risk of introducing errors.
Wouldn't it be great to have a common reusable structure and just add the specific code for each of the stages?
Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.
Prediction Framework is a fully customizable pipeline that simplifies the implementation of prediction projects. You only need the input data source, the logic to extract and process the data, and a Vertex AutoML model ready to use along with the right feature list; the framework takes care of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of project are created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction, and post-processing, in addition to other operational functionality including backfilling, throttling (for API limits), synchronization, storage, and reporting.
The Prediction Framework was built to be hosted on Google Cloud Platform. It uses Cloud Functions for all the data processing (extraction, preparation, filtering, and post-prediction processing); Firestore, Pub/Sub, and Cloud Scheduler for the throttling system and for coordinating the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage for your predictions.
Prediction Framework Architecture
To get started with the Prediction Framework, a configuration file needs to be prepared with some environment variables: the Google Cloud project to be used, the data sources, the ML model to use for the predictions, and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering, and post-processing need to be added in the deploy files. Then, the deployment is done automatically using a deployment script provided by the tool.
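For illustration only, the kind of configuration described above might look like the following sketch; the variable names are hypothetical and are not the framework’s actual schema (see the repository’s deploy files for the real names).

```python
# Hypothetical configuration sketch for a deployment of this kind; the real
# Prediction Framework defines its own variable names in its deploy files.
config = {
    "project_id": "my-gcp-project",             # Google Cloud project to deploy into
    "data_source": "bq://source.transactions",  # where transactions are read from
    "model_id": "vertex-automl-model-id",       # ML model used for the predictions
    "schedule": "0 4 * * *",                    # cron expression for the scheduler
}
print(sorted(config))  # → ['data_source', 'model_id', 'project_id', 'schedule']
```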
Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:
Extract: on a timely basis, this step queries the transactions corresponding to the run date (scheduler or backfill run date) from the data source and stores them in a new BigQuery table in the local project.
Prepare: as soon as the extracted transactions for a specific date are available, the data is picked up from the local BigQuery dataset and processed according to the specs of the model. Once processed, it is stored in a new BigQuery table in the local project.
Filter: this step queries the data stored by the prepare step, filters the required data, and stores it in the local project’s BigQuery (for example, only taking new customers’ transactions into consideration; what counts as a new customer is up to the instantiation of the framework for the specific use case).
Predict: once the new customers are stored, this step reads them from BigQuery and requests predictions through the Vertex AI API. Once the data is ready, it is stored in BigQuery within the target project.
Post-process: a formula can be applied to the AutoML batch results to tune the values or to apply thresholds. Once the data is ready, it is stored in BigQuery within the target project.
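The five stages above can be sketched as a simple chain of functions. This is an illustrative simplification: in the deployed framework each stage is a Cloud Function reading and writing BigQuery tables, and the model call goes through the Vertex AI API. All names and data below are hypothetical.

```python
# Illustrative sketch of the five pipeline stages chained together.
def extract(run_date):
    # Stage 1: query the data source for the run date's transactions.
    return [
        {"customer": "a", "amount": 120.0, "date": run_date},
        {"customer": "b", "amount": 30.0, "date": run_date},
    ]

def prepare(rows):
    # Stage 2: feature engineering per the model's specs (hypothetical feature).
    return [{**r, "high_value": r["amount"] > 100} for r in rows]

def filter_rows(rows, known_customers):
    # Stage 3: e.g. keep only transactions from new customers.
    return [r for r in rows if r["customer"] not in known_customers]

def predict(rows):
    # Stage 4: stand-in for a Vertex AutoML batch prediction call.
    return [{**r, "score": 0.9 if r["high_value"] else 0.2} for r in rows]

def post_process(rows, threshold=0.5):
    # Stage 5: apply a formula or threshold to tune the raw predictions.
    return [r for r in rows if r["score"] >= threshold]

result = post_process(
    predict(filter_rows(prepare(extract("2022-01-01")), known_customers={"b"}))
)
print([r["customer"] for r in result])  # → ['a']
```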
One of the most powerful features of the Prediction Framework is that it allows backfilling directly from the BigQuery user interface, so if you need to reprocess a whole period of time, it can be done in literally four clicks.
In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.
For additional information and to start experimenting, you can visit the Prediction Framework repository on GitHub.
Welcome to #IamaGDE - a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.
Gaston Saillen started coding for fun, making apps for his friends. About seven years ago, he began working full-time as an Android developer for startups. He built a bunch of apps—and then someone gave him an idea for an app that has had a broad social impact in his local community. Now, he is a senior Android developer at Distillery.
Meet Gaston Saillen, Google Developer Expert in Android and Firebase.
Building the Uh-LaLa! app
After seven years of building apps for startups, Gaston visited a local food delivery truck to pick up dinner, and the server asked him, “Why don’t you do a food delivery app for the town, since you are an Android developer? We don’t have any food delivery apps here, but in the big city, there are tons of them.”
The food truck proprietor added that he was new in town and needed a tool to boost his sales. Gaston was up for the challenge and created a straightforward delivery app for local Cordoba restaurants that he named Uh-LaLa! Restaurants configure the app themselves, and there’s no app fee. “My plan was to deliver this service to this community and start making some progress on the technology that they use for delivery,” says Gaston. “And after that, a lot of other food delivery services started using the app.”
The base app is built similarly to food delivery apps for bigger companies. Gaston built it for Cordoba restaurants first, after several months of development, and it’s still the only food delivery app in town. When he released the app, it immediately got traction, with people placing orders. His friends joined, and the app expanded. “I’ve made a lot of apps as an Android engineer, but this is the first time I’ve made one that had such an impact on my community.”
He had to figure out how to deliver real-time notifications that food was ready for delivery. “That was a little tough at first, but then I got to know more about all the backend functions and everything, and that opened up a lot of new features.”
He also had to educate two groups of users: Restaurant owners need to know how to input their data into the app, and customers had to change their habit of using their phones for calls instead of apps.
Gaston says seeing people using the app is rewarding because he feels like he’s helping his community. “All of a sudden, nearby towns started using Uh-LaLa!, and I didn't expect it to grow that big, and it helped those communities.”
During the COVID-19 pandemic, many restaurants struggled to maintain their sales numbers. A local pub owner ran a promotion through Instagram to use the Uh-LaLa! app for ten percent off, and their sales returned to pre-COVID levels. “That is a success story. They were really happy about the app.”
Becoming a GDE
Gaston has been a GDE for seven years. When he was working on his last startup, he found himself regularly answering questions about Android development and Firebase on StackOverflow and creating developer content in the form of blog posts and YouTube videos. When he learned about the GDE program, it seemed like a perfect way to continue to contribute his Android development knowledge to an even broader developer community. Once he was selected, he continued writing blog posts and making videos—and now, they reach a broader audience.
“I created a course on Udemy that I keep updated, and I’m still writing the blog posts,” he says. “We also started the GDG here in Cordoba, and we try to have a new talk every month.”
Gaston enjoys the GDE community and sharing his ideas about Firebase and Android with other developers. He and several fellow Firebase developers started a WhatsApp group to chat about Firebase. “I enjoy being a Google Developer Expert because I can meet members of the community that do the same things that I do. It’s a really nice way to keep improving my skills and meet other people who also contribute and make videos and blogs about what I love: Android.”
The Android platform provides developers with state-of-the-art tools to build apps for users. Firebase allows developers to accelerate and scale app development without managing infrastructure; release apps and monitor their performance and stability; and boost engagement with analytics, A/B testing, and messaging campaigns.
Future plans
Gaston looks forward to developing Uh-LaLa! further and building more apps, like a coworking space reservation app that would show users the hours and locations of nearby coworking spaces and allow them to reserve a space at a certain time. He is also busy as an Android developer with Distillery.
Gaston’s advice to future developers
“Keep moving forward. Any adversity that you will be having in your career will be part of your learning, so the more that you find problems and solve them, the more that you will learn and progress in your career.”
Posted by Google Cloud training & certifications team
Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.
You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.
Get hands-on experience with cloud computing
Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to take classes to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to help familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.
There are nine learning paths providing you with a launch pad to all major pillars of cloud computing, from networking and cloud security to database management and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for job roles like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.
Learn live from cloud experts
Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.
If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.
You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the entire cloud computing landscape. The curriculum covers Compute Engine, including how to create VM instances from scratch and from existing templates, how to connect them to each other, and how to end up with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.
Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.
Explore Google Cloud infrastructure
Cloud infrastructure is the backbone of the internet. Understanding cloud infrastructure is a good starting point to start digging deeper into cloud concepts because it will give you a taste of the various aspects of cloud computing to figure out what you like best, whether it’s networking, security, or application development.
Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.
Show off your skills
Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.
Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.
You can also use them to start preparing for Google Cloud certifications which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend having at least 6 months or up to several years of industry experience depending on the material.
Ready to get started in the cloud? Visit the Google Cloud training page to see all your options from in-person classes, online courses, special events, and more.
Posted by Brian Shen, Regional Lead for Mainland China Developer Communities
Every developer’s path to pursuing a career in tech can be traced back to a single moment. Such is the case for Ning Zhang, a developer from China, who found his early interest in web development as a high schooler at the age of fifteen. Ning built his first website for his English class to help his classmates succeed with their studies. He didn’t realize it at the time, but he was only just getting started. Throughout high school, he played with Google Webmaster Tools (now Google Search Console) and Google Adsense to create and manage numerous other websites for fun. Like so many aspiring developers before him, Ning knew he’d found his passion, but the path ahead remained unclear.
Enter Google Developer Groups
To grow his skills further and turn his hobby into a viable career path, Ning majored in data science at university in Qingdao. Here, he participated in data-modeling competitions like Kaggle Days, and other events that gave him more exposure to the tech community and allowed him to learn from his peers. It’s also where he first heard about Google Developer Groups (GDGs) and their many opportunities for learning, networking and collaboration.
It was perfect timing too. After graduation Ning got a job with a financial services firm in Shanghai, home to a very active GDG. He jumped at the chance to engage in activities and workshops to further his abilities and knowledge, especially in data science, which constitutes a significant part of his work responsibilities.
While Ning enjoys the formal learning opportunities the GDG offers, he finds the sense of community and support—the opportunity to learn from others and share his expertise as well – even more valuable.
“This kind of atmosphere is actually more inspiring than learning a new technology, new programming ideas, and new algorithms.”
“Everyone has different hands-on experience and expertise in different companies,” Ning explains. “GDG provides an environment where people can share their experience and listen to each other.”
The combination of community, developer success, and social impact has made a huge impression on him both personally and professionally. The international nature of GDGs also provides an expanded perspective and different ways of thinking about problems and solutions. “GDG really gave me a lot of new and fresh information and opened our eyes to more global approaches,” says Ning.
Group photo of GDG Shanghai Activity Center
Tapping into a global community
As the importance of technology continues to grow, the GDG community can play an even greater role by helping people learn valuable tech skills, supporting the dissemination of knowledge, and spurring innovation. Offerings that focus on sharing knowledge and other events can assist members in achieving their career goals as they have done for Ning. “I hope every member of GDG will experience the good atmosphere of the group in the future so that their value can be magnified,” says Zhang.
Welcome to #IamaGDE - a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.
Evelyn Mendes, the first transgender Google Developer Expert, is based in Porto Alegre, Brazil, and has worked in technology since 2002. “I've always loved technology!” she exclaims, flashing a dazzling smile. As a transgender woman, Evelyn faced discrimination in the tech world in Brazil and relied on her friends for emotional support and even housing and food, as she fought for a job in technology. Her excruciating journey has made her a tireless advocate for diversity, equity, and inclusion (DEI), as she works toward her vision of a world of empathy, acceptance, and love.
Meet Evelyn Mendes, Google Developer Expert in Firebase
Current professional role
Evelyn works in systems analysis and development and currently focuses on Angular, Flutter, and Firebase. “I believe they are technologies with frameworks and architectures that have a lot to offer,” she says.
As an architecture consultant and specialist software engineer at Bemol Digital, Evelyn manages development teams that work with many different technologies and led Bemol Digital through the process of switching their mobile app, originally developed in React Native, to Flutter. Now, Evelyn supports the migration of all Bemol Digital’s mobile development to Flutter. “Today, all of our new mobile projects are developed in Flutter,” she says. “I’m responsible for the architecture. I'm a PO and a Scrum Master, but I also enjoy teamwork, and I love helping the team work better, more efficiently, and most importantly, enjoy their work!”
DEI Advocacy
Evelyn’s kindness toward others is reflected in her advocacy for diversity, equity, and inclusion (DEI) in the IT and tech world. She takes a broad approach to diversity, advocating for safe spaces in technology for mothers, women in technology, Black founders, immigrants, and Native Brazilians to learn. “Diversity and inclusion are not just values or attitudes to me; they are a part of who I am: my life, my struggles,” she says.
Evelyn views technology as a way to help underrepresented groups achieve more, feel empowered, and change their own lives. “Technology will give you a better shot to fight for a better life,” she says. “I want to bring more trans people to technology, so that they have real chances to continue evolving in their professional lives.”
When Evelyn came out as transgender, she experienced intolerance that kept her out of the workforce for over a year, despite her innumerable skills. “Brazil, especially the southern part where I’m from, is still, unfortunately, not a very tolerant society,” she says. “Due to who I was, I wasn’t able to find a job for over a year, because people didn’t want to work with someone who is transgender.”
Evelyn was fortunate enough to have friends who supported her financially (there were times when she didn’t even have enough money to buy food) and mentally, helping her believe she could be true to herself and find happiness. She encourages others in her position to seek financial and emotional independence. “In terms of your emotional wellbeing, I’d recommend starting with identifying the abusive relationships around you, which can come from different sides, even from your family,” she says. “Try distancing yourself from them and those who hurt you. This will help you in your evolution.”
Evelyn recommends trans people in Brazil connect with groups like EducaTransforma, which teaches technology to trans people, and TransEmpregos, which helps trans people enter the labor market. For trans and cis women in Brazil, Aduaciosa Oficial facilitates networking (tech 101 for women, a classic dev community, meetups, and workshops), and B2Mamy supports women’s entrepreneurship.
Evelyn often speaks to companies about diversity in IT and how to be welcoming to women, LGBTQIA+ people, and other underrepresented groups. “I like it because I see that more and more companies are interested in the subject, and I think I can be a voice that has never been heard,” Evelyn says. “I support inclusive events, and when invited, I participate in lectures, because I know that a trans woman, on a stage where only white, ‘straight,’ cis people are normally seen, makes a lot of difference for many people, especially LGBTs.”
At BrazilJS 2017, Evelyn invited every woman at the event to join her on stage for a photo, to show how many women are involved in technology and that women are integral to events. She called her fellow speakers and attendees, as well as the event’s caterers, cleaners, and security personnel to the stage and said, “Look at the stage. Now, no one can say there aren’t any women in tech.”
At her current company, Evelyn approaches diversity as a positive and transformative thing. “I know that I make a difference just with my presence, because people usually know my story.”
In addition to her technology work, Evelyn is involved in the Transdiálogos project, which aims to train professionals to end discrimination in health services. She is also part of TransEnem in Porto Alegre, an EJA-type prep course to help trans people go to college. “I don't miss the chance to fight for diversity and inclusion anywhere,” Evelyn says. “That's what my life is. This is my fight; that's who I am; that's why I'm here.”
Learning Firebase
Evelyn said she was drawn to Firebase because “Firebase is all about diversity. For poor, remote areas in Brazil, without WiFi or broadband, Firebase gives people with limited resources a reasonable stack to build with and deploy something to the world. Firebase uses basic HTML, is low code, and is free, so it’s for everyone. Plus, it’s easy to get familiar with the technology, as opposed to learning Java or Android.”
To demonstrate all the functionality and features that Firebase offers, Evelyn created a mobile conversation application that she often shows at events. “Many people see Firebase as just a NoSql database,” she says. “They don't know the real power that it can actually offer. With that in mind, I tried to put in it all the features I thought people could use: Authentication, Storage, Realtime Database with Data Denormalization, Hosting, Cloud Functions, Firebase Analytics, and Cloud Firestore.”
Users can send images and messages through the app. A user can take a picture, resize it, and send it, and it will be saved in Storage. Before reaching the timeline, messages go through a sanitization process that removes certain words, which Evelyn indexes in a list called bad_words in the Realtime Database. Timeline messages are also stored in the Realtime Database. Users can like and comment on messages and talk privately. Sanitization is done by Cloud Functions, in database triggers, which also denormalize messages into lists dedicated to each context. For example, all the messages a user sends go both to the main timeline list and to a list of messages that user has sent. Another denormalization separates messages that contain images from those that contain only text, for quick search within the Realtime Database. Users can also delete and edit messages. Using rules Evelyn created in Cloud Firestore, she can manage what people will or will not see inside the app, in real time. Here’s the source code for the project. “I usually show it happening live and in color at events, with Firebase Analytics,” Evelyn says. “I also know where people are logging in, and I can show this working in the dashboard, also in real time.”
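As a rough sketch of what that sanitization and denormalization logic can look like (the word list, function names, and list paths here are illustrative, not taken from Evelyn's project; in the real app this kind of logic runs inside a Cloud Functions database trigger):

```javascript
// Illustrative sketch of a sanitization + denormalization pass of the kind a
// Cloud Functions database trigger might run. The word list, function names,
// and list paths are hypothetical, not taken from the actual project.
const badWords = new Set(['badword', 'worseword']);

// Mask any word found in the bad_words-style list before the message
// reaches the timeline.
function sanitizeMessage(text) {
  return text
    .split(/\s+/)
    .map((word) => (badWords.has(word.toLowerCase()) ? '***' : word))
    .join(' ');
}

// Compute the denormalized lists a message should be copied to: the main
// timeline, a per-user list, and an images-only or text-only list.
function denormalizationTargets(message) {
  const targets = ['messages/timeline', `messages/byUser/${message.uid}`];
  targets.push(message.imageUrl ? 'messages/withImages' : 'messages/textOnly');
  return targets;
}
```

In a deployed app, a trigger such as `functions.database.ref('/messages/{id}').onCreate(...)` would invoke logic like this and fan the cleaned message out to each list.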
Becoming a GDE
When Evelyn first started learning Firebase, she also began creating educational content on how to use it, based on everything she was learning herself—first articles, then video tutorials. At first, she didn’t want to show her face in her videos because she was afraid she wasn’t good enough and felt embarrassed about every little silly mistake she made, but as she gained confidence, she started giving talks and lectures. Now, Evelyn maintains her own website and YouTube channel, where she saves all her video tutorials and other projects.
Her expertise caught the attention of Google’s Developer Relations team, who invited Evelyn to apply to be a GDE. “At first, I was scared to death, also because I didn't speak any English,” Evelyn recalls. “It took me quite some time, but finally I took a leap of faith, and it worked! And today, #IamaGDE!”
As a GDE, Evelyn loves meeting people from around the world who share her passion for technology and appreciates the fact that her GDE expertise has allowed her to share her knowledge in remote areas. “The program has helped me to grow a lot, both personally and professionally,” she adds. “I learned a lot and continue learning, by attending many events, conferences, and meetups.”
Evelyn’s advice to anyone hoping to become a GDE
“Be a GDE before officially becoming one! Participating in this program is a recognition of what you have already been doing: your knowledge, expertise, and accomplishments, so keep learning, keep growing, and help your community. You may think you’re not a big enough expert, but the truth is, there are people out there who definitely know less than you and would benefit from your knowledge.”
We've reached the end of the year - and what a year it's been! Between all of our live (virtual) events including I/O, developer summits, meetups and more, there are a lot of highlights for App Actions, Smart Home Actions and Conversational Actions. Let's dive in and take a look.
App Actions
App Actions allows developers to extend their Android app to Google Assistant. App Actions integrates more cleanly with Android using new Android platform features. With the introduction of the beta shortcuts.xml configuration resource, expanded Android platform features, and our latest Google Assistant plugin for Android Studio, App Actions is moving closer to the Android platform.
App Actions Benefits:
Display app information on Google surfaces. Provide Android widgets for Assistant to display, offering inline answers, simple confirmations and brief interactions to users without changing context.
Launch features from Assistant. Connect your app's capabilities to user queries that match predefined semantic patterns (BII).
Suggest voice shortcuts from Assistant. Use Assistant to proactively suggest tasks for users to discover or replay, in the right context.
Core Integration
Capabilities is a new Android framework API that allows you to declare the types of actions users can take to launch your app and jump directly to performing a specific task. Assistant provides the first available concrete implementation of the capabilities API. You can utilize capabilities by creating a shortcuts.xml resource and defining your capabilities. Capabilities specify two things: how they're triggered and what to do when they're triggered. To add a capability, you’ll need to select a Built-In Intent (BII). BIIs are pre-built language models that provide all the Natural Language Understanding needed to map the user's input to individual fields. When a BII is matched by the user’s request, your capability will trigger an Android Intent that delivers the understood BII fields to your app, so you can determine what to show in response.
To support a user query like “Hey Google, Find waterfall hikes on ExampleApp,” you can use the GET_THING BII. This BII supports queries that request an “item” and extracts the “item” from the user query as the parameter thing.name. The best use case for the GET_THING BII is to search for things in the app. Below is an example of a capability that uses the GET_THING BII:
<!-- This is a sample shortcuts.xml -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.GET_THING">
    <intent
        android:action="android.intent.action.VIEW"
        android:targetPackage="YOUR_UNIQUE_APPLICATION_ID"
        android:targetClass="YOUR_TARGET_CLASS">
      <!-- Eg. name = "waterfall hikes" -->
      <parameter
          android:name="thing.name"
          android:key="name"/>
    </intent>
  </capability>
</shortcuts>
This framework integration is in the Beta release stage, and will eventually replace the original implementation of App Actions that uses actions.xml. If your app provides both the new shortcuts.xml and old actions.xml, the latter will be disregarded.
Learn how to add your first capability with this codelab.
Voice shortcuts
Google Assistant suggests relevant shortcuts to users during contextually relevant times. Users can see what shortcuts they have by saying “Hey Google, shortcuts.”
Shortcut for Google Assistant
You can use the Google Shortcuts Integration library, currently in beta, to push an unlimited number of dynamic shortcuts to Google, making your shortcuts visible to users as voice shortcuts. Assistant can then suggest relevant shortcuts, making it more convenient for users to interact with your Android app.
Example of App using Dynamic Shortcuts CodeLab Tool
Simple Answers, Hands-Free & Android Auto
In situations where users need a hands-free experience, like on Android Auto, Assistant can display widgets that provide simple answers, brief confirmations, and quick interactive experiences in response to a user’s inquiry. These widgets are displayed within the Assistant UI, and to implement a fully voice-forward interaction with your app, you can arrange for Assistant to speak a response alongside your widget, which is safe and natural for use in automobiles. As a re-engagement bonus, an “Add this widget” chip can be included too!
Re-engagement
Another re-engagement tool is the In-App Promo SDK, currently in beta, which lets you proactively suggest shortcuts in your app for actions the user can repeat with a voice command to Assistant. The SDK allows you to check whether the shortcut you want to suggest already exists for that user and, if not, prompt the user to create it.
New Tooling
To support testing Capabilities, the Google Assistant plugin for Android Studio was launched. It contains an updated App Action Test Tool that creates a preview of your App Action, so you can test an integration before publishing it to the Play store.
We also launched the new App Actions learning pathway, a comprehensive learning experience with videos and codelabs that takes developers from zero to building their first App Action and pushing dynamic shortcuts.
Smart Home Actions
A big focus of this year's Smart Home launches was new and updated tools. At events like I/O, Works With: SiLabs, and the Google Smart Home Developer Summit, we shared these new resources to help you quickly build a high-quality smart home integration.
New Resources
To make implementing new features even easier for developers, we released many new tools to help you get your Smart Home Action up and running.
To help consumers discover Google-compatible smart home devices and associated routines, we released the smart home directory, accessible on the web and through the Google Home app.
We heard your requests for more ways to localize your integrations, so we added sample utterances in English (en-US), German (de-DE), and French (fr-FR) to several device guides. Additionally, we also rolled out Chinese (zh-TW) as one of the supported languages for the overall platform. To make our documentation more accessible, we added a Japanese translation of our developer guides.
When you're actively developing your integration, the Google Home Playground can simulate a virtual home with configurable device types and traits. Here you can view the types and traits in Home Graph, modify device attributes, and share device configurations.
The WebRTC Validator Tool acts as a WebRTC peer to stream to or from, and generally emulates the WebRTC player on smart displays with Google Assistant. If you're specifically working with a smart camera, WebRTC is now supported on the CameraStream trait.
Local Home
To continue striving toward quality responses to user queries, we added support for local queries and responses to the Local Home SDK. Additionally, to help users quickly onboard new devices in their homes and use Google Nest devices as local hubs, we launched BLE Seamless Setup.
Matter
The new Google Home IDE improves your development process by enabling in-IDE access to Google Assistant Simulator, Cloud Logging, and more for each of your projects. This plugin is available for VS Code.
Finally, as we get closer to the official launch of the Matter protocol, we're working hard to unify all of our smart home ecosystem tools together under a single name - Google Home. The Google Home Developer Center will enable you to quickly find resources for integrating your Matter-compatible smart devices and platforms with Nest, Android, Google Home app, and Google Assistant.
Conversational Actions
Way back in January of 2021, we rolled out an updated Actions for Families program, which provides guidelines for teams building actions meant for kids. Conversational Actions that are approved for the Actions for Families program get a special badge in the Assistant Directory, which lets parents know that your Action is family-friendly.
During the What's New in Google Assistant keynote at Google I/O, Director of Product for the Google Assistant Developer Platform Rebecca Nathenson mentioned several coming updates and changes for Conversational Actions. This included the launch of a Developer Preview for a new client-side fulfillment model for Interactive Canvas. Client-side fulfillment changes the implementation strategy for Interactive Canvas apps, removing the need for a webhook relaying information between the Assistant NLU and their web application. This simplifies the infrastructure needed to deploy an action that uses Interactive Canvas. Since the release of this Developer Preview, we’ve been listening closely to developers to get feedback on client-side fulfillment.
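For context, the web-app side of an Interactive Canvas action registers callbacks with the interactiveCanvas global that the Canvas runtime injects on the device. The sketch below shows that generic callback registration, not the new client-side fulfillment API itself, and the state shape ({ score }) is a made-up example:

```javascript
// Sketch of an Interactive Canvas web app registering its callbacks.
// The interactiveCanvas global only exists inside the Canvas runtime on a
// device, so registration is guarded; the state shape ({ score }) is a
// hypothetical example, not a real API contract.
function renderState(state) {
  return `score: ${state.score}`;
}

if (typeof interactiveCanvas !== 'undefined') {
  interactiveCanvas.ready({
    // Called when the Assistant side sends updated state to the web app;
    // we assume updates arrive as an array of state objects and render
    // the most recent one.
    onUpdate(stateList) {
      const latest = stateList[stateList.length - 1];
      document.body.textContent = renderState(latest);
    },
  });
}
```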
Interactive Canvas Developer Tools
We also released Interactive Canvas Developer tools - a Chrome extension which can help dev teams mock and debug the web app side of Interactive Canvas apps and games. Best of all, it’s open source! You can install the dev tools from the Chrome Web Store, or compile them from source yourself on GitHub at actions-on-google/interactive-canvas-dev-tools.
Updates to SSML
Earlier this year we announced support for new SSML features in Conversational Actions. This expanded support lets you build more detailed and nuanced features using text to speech. We produced a short demonstration of SSML features on YouTube, and you can find more in our docs on SSML if you’re ready to dive in and start building.
Updates to Transaction UX for Smart Displays
Also announced at I/O for Conversational Actions - we released an updated workflow for completing transactions on smart displays. The new transaction process lets users complete transactions from their smart screens, by confirming the CVC code from their chosen payment method, rather than using a phone to enter a CVC code. If you’d like to get an idea of what the new process looks like, check out our demo video showing new transaction features on smart devices.
Tips on Launching your Conversational Action
Our guide, Driving a successful launch for Conversational Actions, contains helpful information to help you think through strategies for putting together a marketing team and a go-to-market plan for releasing your Conversational Action.