Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Recommended strategies and best practices for designing and developing games and stories on Google Assistant

Posted by Wally Brill and Jessica Dene Earley-Cha

Since we launched Interactive Canvas, and especially in the last year, we have been helping developers create great storytelling and gaming experiences for Google Assistant on smart displays. Along the way, we’ve learned a lot about what does and doesn’t work. Building these kinds of interactive voice experiences is still a relatively new endeavor, so we want to share what we've learned to help you build the next great gaming or storytelling experience for Assistant.

Here are three key things to keep in mind when you’re designing and developing interactive games and stories. We selected these three from a longer list of lessons learned (stay tuned to the end for a link to the 10+ lessons) because they depend on Actions Builder/SDK functionality and can differ slightly from traditional conversation design for voice-only experiences.

1. Keep the Text-To-Speech (TTS) brief

Text-to-speech (TTS), or computer-generated voice, has improved dramatically in the last few years, but it isn’t perfect. Through user testing, we’ve learned that users (especially kids) don’t like listening to long TTS messages. Of course, some content (like interactive stories) should not be reduced. For games, however, try to keep your script simple. Wherever possible, leverage the power of the visual medium and show, don’t tell. Consider providing a skip button on the screen so that users can read and move forward without waiting for the TTS to finish. In many cases, the TTS and the text on screen don’t need to mirror each other. For example, the TTS may say "Great job! Let's move to the next question. What’s the name of the big red dog?" while the text on screen simply says "What is the name of the big red dog?"

Implementation

You can provide different audio and screen-based prompts by using a simple response, which allows different wording in the speech and text sections of the response. You can do this using the node client library or directly in Actions Builder (shown below in YAML). The following code samples show you how to implement the example discussed above:

candidates:
  - first_simple:
      variants:
        - speech: Great job! Let's move to the next question. What’s the name of the big red dog?
          text: What is the name of the big red dog?

Note: implementation in YAML for Actions Builder

app.handle('yourHandlerName', conv => {
  conv.add(new Simple({
    speech: 'Great job! Let\'s move to the next question. What’s the name of the big red dog?',
    text: 'What is the name of the big red dog?'
  }));
});

Note: implementation with node client library

2. Consider both first-time and returning users

Frequent users don't need to hear the same instructions repeatedly, so optimize the experience for returning users. If it's a user's first time, try to explain the full context. If they revisit your Action, acknowledge their return with a "Welcome back" message and try to shorten (or taper) the instructions. If you notice that a user has returned more than three or four times, get to the point as quickly as possible.

An example of tapering:

  • Instructions to first time users: “Just say words you can make from the letters provided. Are you ready to begin?”
  • For a returning user: “Make up words from the jumbled letters. Ready?”
  • For a frequent user: “Are you ready to play?”

Implementation

You can check the lastSeenTime property in the User object of the HTTP request. The lastSeenTime property is a timestamp of the user’s last interaction with your Action; if this is the user’s first interaction, the field is omitted. Since it’s a timestamp, you can serve different messages to a user whose last interaction was more than three months, three weeks, or three days ago. Below is an example that defaults to the tapered message. If the lastSeenTime property is omitted, meaning this is the user’s first interaction with the Action, the message is replaced with the longer, more detailed version.

app.handle('greetingInstructions', conv => {
  let message = 'Make up words from the jumbled letters. Ready?';
  if (!conv.user.lastSeenTime) {
    message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  conv.add(message);
});

Note: implementation with node client library
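Because lastSeenTime is a timestamp, you can extend this pattern to taper across several tiers. Here's a minimal sketch along those lines; the 90-day threshold and the messages are illustrative assumptions, not part of the client library:

app.handle('greetingInstructions', conv => {
  // lastSeenTime is omitted on the user's first interaction.
  const lastSeen = conv.user.lastSeenTime
      ? new Date(conv.user.lastSeenTime)
      : null;
  const daysSince = lastSeen
      ? (Date.now() - lastSeen.getTime()) / (24 * 60 * 60 * 1000)
      : null;

  let message;
  if (daysSince === null) {
    // First-time user: give the full instructions.
    message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  } else if (daysSince > 90) {
    // Long-absent user: welcome them back and recap briefly.
    message = 'Welcome back! Make up words from the jumbled letters. Ready?';
  } else {
    // Recent or frequent user: get to the point.
    message = 'Are you ready to play?';
  }
  conv.add(message);
});

Note: illustrative tiered tapering sketch with the node client library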

3. Support strongly recommended intents

There are some commonly used intents that really enhance the user experience by providing basic commands for interacting with your voice app. If your Action doesn’t support these, users might get frustrated. These intents give your voice user interface a basic structure and help users navigate your Action.

  • Exit / Quit

    Closes the action

  • Repeat / Say that again

    Makes it easy for users to hear immediately preceding content at any point

  • Play Again

    Gives users an opportunity to re-engage with their favorite experiences

  • Help

    Provides more detailed instructions for users who may be lost. Depending on the type of Action, this may need to be context-specific. After a Help message plays, returning users should be taken back to where they left off in gameplay.

  • Pause, Resume

    Provides a visual indication that the game has been paused, and provides both visual and voice options to resume.

  • Skip

    Moves to the next decision point.

  • Home / Menu

    Moves to the home or main menu of an action. Having a visual affordance for this is a great idea. Without visual cues, it’s hard for users to know that they can navigate through voice even when it’s supported.

  • Go back

    Moves to the previous page in an interactive story.

Implementation

Actions Builder and the Actions SDK support system intents that cover a few of these use cases and come with Google-provided training phrases:

  • Exit / Quit -> actions.intent.CANCEL This intent is matched when the user wants to exit your Action during a conversation, such as by saying, "I want to quit."
  • Repeat / Say that again -> actions.intent.REPEAT This intent is matched when a user asks the Action to repeat.

For the remaining intents, you can create user intents, which you can either make global (so they can be triggered in any scene) or add to a particular scene. The sketch below shows one way to get started.
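As an illustration, here's what a webhook handler for a custom global Help intent might look like with the node client library. The handler name, training phrases, and prompt wording are hypothetical; you would still create the intent and its global routing in Actions Builder:

// Hypothetical handler for a custom global "Help" user intent.
// The intent itself (with training phrases like "help" or
// "what can I do") and its routing are configured in Actions Builder.
app.handle('globalHelp', conv => {
  conv.add(new Simple({
    speech: 'Say any word you can make from the letters on screen. You can also say "repeat", "skip", or "quit" at any time.',
    text: 'Make a word from the letters, or say "repeat", "skip", or "quit".'
  }));
});

Note: illustrative global Help intent handler with the node client library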

So there you have it. Three suggestions to keep in mind for making amazing interactive games and story experiences that people will want to use over and over again. To check out the full list of our recommendations, go to the Lessons Learned page.

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

Google Developer Group Spotlight: A conversation with Cloud Architect, Ilias Papachristos

Posted by Jennifer Kohl, Global Program Manager, Google Developer Communities

The Google Developer Groups Spotlight series interviews inspiring leaders of community meetup groups around the world. Our goal is to learn more about what developers are working on, how they’ve grown their skills with the Google Developer Group community, and what tips they might have for us all.

We recently spoke with Ilias Papachristos, Google Developer Group Cloud Thessaloniki Lead in Greece. Check out our conversation with Ilias on Cloud architecture, reading official documentation, and suggested resources to help developers grow professionally.

Tell us a little about yourself?

I’m a family man, ex-army helicopter pilot, Kendo sensei, beta tester at Coursera, Lead of the Google Developer Group Cloud Thessaloniki community, Google Cloud Professional Architect, and a Cloud Board Moderator on the Google Developers Community Leads Platform (CLP).

I love outdoor activities, reading books, listening to music, and cooking for my family and friends!

Can you explain your work in Cloud technologies?

Over my career, I have used Compute Engine for an e-shop, AutoML Tables for an HR company, and have architected the migration of a company in Mumbai. Now I’m consulting for a company on two of their projects: one that uses Cloud Run and another that uses Kubernetes.

Both of them have Cloud SQL and the Kubernetes project will use the AI Platform. We might even end up using Dataflow with BigQuery for the streaming and Scheduler or Manager, but I’m still working out the details.

I love the chance to share knowledge with the developer community. Many days, I open my PC, read the official Google Cloud blog, and share interesting articles on the CLP Cloud Board and GDG Cloud Thessaloniki’s social media accounts. Then, I check Google Cloud’s Medium publication for extra articles. Read, comment, share, repeat!

How did the Google Developer Group community help your Cloud career?

My overall knowledge of Google Cloud has to do with my involvement with Google Developer Groups. It is not just one thing. It’s about everything! At the first European GDG Leads Summit, I met so many people who were sharing their knowledge and offering their help. For a newbie like me, it was, and still is, something that I keep in my heart as a treasure.

I’ve also received so many informative lessons on public speaking from Google Developer Group and Google Developer Student Club Leads. They always motivate me to continue talking about the things I love!

What has been the most inspiring part of being a part of your local Google Developer Group?

Collaboration with the rest of the DevFest Hellas team! For this event, I was part of a small group of 12 organizers, none of whom had ever hosted a large meetup before. With the help of Google Developer Groups, we had so much fun while creating a successful DevFest learning program for 360 people.

What are some technical resources you have found the most helpful for your professional development?

Besides all of the amazing tricks and tips you can learn from the Google Cloud training team and courses on the official YouTube channel, I had the chance to hear a talk by Wietse Venema on Cloud Run. I also have learned so much about AI from Dale Markovitz’s videos on Applied AI. And of course, I can’t leave out Priyanka Vergadia’s posts, articles, and comic-videos!

Official documentation has also been a super important part of my career. Here are five links that I am using right now as an Architect:

  1. Google Cloud Samples
  2. Cloud Architecture Center
  3. Solve with Google Cloud
  4. Google Cloud Solutions
  5. 13 sample architectures to kickstart your Google Cloud journey

How did you become a Google Developer Group Lead?

I am a member of the Digital Analytics community in Thessaloniki, Greece. Their organizer asked me to write articles to help motivate young people. I translated one of the blogs into English and published it on Medium. The Lead of GDG Thessaloniki read it and asked me to become a facilitator for a Cloud Study Jams (CSJ) workshop. I accepted and then traveled to Athens to train three people so that they could also become CSJ facilitators. At the end of the CSJ, I was asked if I wanted to lead a Google Developer Group chapter. I agreed. Maria Encinar and Katharina Lindenthal interviewed me, and I got it!

What would be one piece of advice you have for someone looking to learn more about a specific technology?

Learning has to be an amusing and fun process. And that’s how it’s done with Google Developer Groups all over the world. Join mine, here. It’s the best one. (Wink, wink.)

Want to start growing your career and coding knowledge with developers like Ilias? Then join a Google Developer Group near you, here.

Celebrating Earth Day with our inaugural Google for Startups Accelerator: Climate Change cohort

Posted by Jason Scott, Head of Startup Developer Ecosystem, USA | Nick Zakrasek, Global Product Lead, Sustainability

Today, people across the world will celebrate and participate in Earth Day. In line with Google’s broader commitment to address climate change, we are proud to join in this celebration by announcing the first cohort for our Google for Startups Accelerator: Climate Change program. The 10-week digital accelerator is designed to help North American sustainable technology startups take their businesses to the next level.

Meet the cohort of 11 companies, who are collectively leveraging technology and data to combat the challenge of climate change:

75F, Bloomington, Minnesota, USA

75F is a vertically-integrated building intelligence company using smart sensors, controllers and software to make commercial buildings more efficient and comfortable.

BlocPower, Brooklyn, New York, USA

BlocPower is providing software and financial tools to analyze, finance, and manage the challenge of converting millions of urban buildings off of fossil fuels.

CarbiCrete, Montreal, Quebec, Canada

CarbiCrete's concrete-making solution completely eliminates the need for cement, making it cheaper and stronger than traditional concrete, all through an overall carbon-negative process.

Enexor BioEnergy, Franklin, Tennessee, USA

Enexor BioEnergy delivers renewable electricity and thermal energy using organic, biomass, and plastic feedstocks, helping to mitigate climate change while addressing the global overabundance of waste.

FARM-TRACE, Vancouver, British Columbia, Canada

FARM-TRACE is a software platform which delivers verified reforestation impacts created by farmers to brands wanting to reduce their climate footprints.

Fermata Energy, Charlottesville, Virginia, USA

Fermata Energy designs, supplies, and operates technology that turns electric vehicles into energy storage assets that combat climate change, increase resilience, and dramatically lower the cost of ownership.

Flair, San Francisco, California, USA

Flair makes buildings more comfortable using less energy while promoting energy efficiency, electrification, and smart grid integration.

Heatworks, Mt. Pleasant, South Carolina, USA

Heatworks uses electronic controls and graphite electrodes to heat water instantly, endlessly, and precisely, without energy loss.

Wild Earth, Berkeley, California, USA

Wild Earth is a plant-based pet food company that harnesses biotech to create cruelty free products with less environmental impact.

Yard Stick PBC, Cambridge, Massachusetts, USA

Yard Stick fights climate change by measuring soil carbon accurately, instantly, and affordably, providing the “missing link” to carbon sequestration on the gigaton per year scale.

Zauben, Chicago, Illinois, USA

Zauben is designing the world's smartest green products, like sensor-driven, IoT-connected green roofs and living walls, to create healthier and happier environments for humans and our planet.

The program kicks off on Monday, June 7th and will focus on product design, technical infrastructure, customer acquisition, and leadership development, granting our founders access to an expansive network of mentors, senior executives, and industry leaders.

We are incredibly excited to support this group of entrepreneurs over the next three months and beyond, connecting them with the best of our people, products, and programming to advance their companies and solutions.

Be sure to join us as we showcase their accomplishments on Thursday, August 12th from 12:30 PM to 2:00 PM EST at our Google for Startups Accelerator: Climate Change Demo Day.

Inviting educators to the Google Developers India Faculty Summit on 23rd April, 2021

Posted by Harsh Dattani, Community Manager

University professors have a large impact by educating the next generation of developers and engineers. Google Developers wants to equip university faculty with the best Android development curriculum and programs. Earlier this year, we announced the launch of our new faculty-led curriculum for Android Development with Kotlin in India. The curriculum is based on classroom learning (virtual or in-person), with an instructor delivering lectures on important Android concepts and students receiving hands-on practice through interactive pathways.

The Google Developers India Faculty Summit 2021 will kick things off on April 23rd. The Faculty Development Training program provides professors and faculty from universities and skilling partners across India with resources designed by a team at Google.

Current partners in India include Shivaji University, I. K. Gujral Punjab Technical University, Chandigarh University, Ganpat University, Telangana Academy for Skill and Knowledge (TASK), Learn 4 Grow, Teerthanker Mahaveer University, and Information and Communication Technology Academy of Kerala. These organizations will be the first to learn and offer this curriculum to their students, with more universities to follow in upcoming semesters.

Leading scholars and educational influencers from computer science faculties at Indian universities will be in attendance, giving attendees the perfect chance to network with other professionals in Android development.

The chief guest for the event is Professor Anil D. Sahasrabudhe, Chairman of the All India Council for Technical Education (AICTE). As he describes the upcoming summit:

"Google India Faculty Summit 2021 is a great opportunity for faculties of technical institutes to understand & assimilate nuances of Android Application Development using Kotlin and in turn disseminate the same to students, challenge them to create new applications, innovative solutions and make them entrepreneurs/employable engineers."

Register today for the summit here and see you on April 23rd at 10 AM IST.

SignAll SDK: Sign language interface using MediaPipe is now available for developers

A guest post by the Engineering team at SignAll | MediaPipe team

Please note that the information, uses, and applications expressed in the below post are solely those of our guest author, SignAll.

When Google published the first versions of its on-device hand tracking technology in MediaPipe, the work gave developers a basis for building sign language recognition solutions into their own apps. Later updates to this hand tracking solution have further improved its accuracy where other technologies have fallen short (Figure 1).

Figure 1. Illustrating MediaPipe’s improvement over time: hand skeleton tracking output of an older version (2020.02.10) and the latest version (2020.12.16). This handshape is used in sign language frequently, but often missed because of the lack of representation in training datasets.
Watch the full video

SignAll is a startup working on sign language translation technology. Its mission is to make sign language interpretation universally available, both for communication between Deaf and hearing parties and between Deaf individuals and computers. SignAll’s products, which are used nationwide in the US for both communication and education, employ a complex multi-camera setup and gloves with colored markers. While sign languages’ complexity goes far beyond handshapes (facial features, body movement, grammar, etc.), accurate hand tracking has been a huge obstacle in the first layer of processing: computer vision. MediaPipe unlocked the possibility of offering SignAll’s solutions not only glove-free, but also with a single camera. SignAll has just announced the availability of the first SDK of its kind, so developers can now enable sign language input in their apps.

The company recently published an interactive educational app in the App Store that lets users practice signing with immediate feedback; the app also serves as a demonstration of what is possible with the SDK.

SignAll with MediaPipe Hands

Our system uses several layers for sign recognition, each working with progressively more abstract data. The low-level layer extracts crucial hand, body, and face data from 2D and 3D cameras. In our first implementation, this layer detected the colors of the gloves and created 3D hand data. Replacing it with MediaPipe Hands (supplemented by MediaPipe Pose and MediaPipe Face Mesh) has been a game changer for using our system without gloves or special lighting. (A generic sketch of MediaPipe Hands follows Figure 2.)

Figure 2. Demo of our SignAll SDK developed using MediaPipe
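For developers curious about the building block itself, here is a minimal, generic sketch of extracting hand landmarks with MediaPipe's Python solution API. This is an illustration only, not SignAll's actual pipeline, and the input file name is a placeholder:

# Generic illustration of MediaPipe Hands; not SignAll's pipeline.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

with mp_hands.Hands(static_image_mode=True,
                    max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    frame = cv2.imread('frame.png')  # placeholder input image (BGR)
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 landmarks per hand; x and y are normalized to the
            # image, z is depth relative to the wrist.
            wrist = hand.landmark[mp_hands.HandLandmark.WRIST]
            print(wrist.x, wrist.y, wrist.z)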

As mentioned earlier, we use multiple cameras with depth sensors, calibrated to the real world. This allows for a more accurate 3D world space than that of a single local camera or tensor space, but it requires hand landmark detection for each camera. The cameras are placed at distinct positions and orientations so that the hands are visible more often, since one hand might cover the other from one camera but not necessarily from the others.

Figure 3. Corrected 3D hand shape based on three cameras. Due to the unique orientation, the detection from the front camera is wrong, but the side camera can correct the result.

The next step is to filter and smooth the data to replicate the precise measurements offered by our colored glove markers. Although SignAll’s markers differ from the landmarks given by MediaPipe, we used our hand model to generate colored markers from the landmarks, so the new mocap data is fully compatible with the previous data.

Although we focus mainly on the hands, we also integrated MediaPipe Pose and MediaPipe Face Mesh. The pose landmarks provide accurate hand position information even when the hands are touching or close to each other.

While the two versions of mocap are compatible, the nature of their artifacts differs: one directly measures each marker, while the other simulates markers from a globally detected hand. Due to this discrepancy, we had to refine the parameters at the higher levels. On the other hand, we could still use our huge sign database for the gloveless configuration. By replacing the low-level data and refining our higher-level data, we could test our system without gloves. Going gloveless is a huge step toward making our sign recognition technology easy to use worldwide.

Figure 4. Demonstration of compatible mocap from different low-level trackings. The right side is without gloves; the left side is with gloves. This compatibility enables the usage of SignAll’s meticulously labeled dataset of 300,000+ sign language videos to be used for the training of recognition models based on different low-level data. Watch the full video

The SignAll system using MediaPipe framework

After integrating MediaPipe Hands into our system, we also wanted to take advantage of the customization and scaling opportunities the MediaPipe framework provides on multiple platforms. This allowed us not only to prototype our research-stage methods in Python, but also to deliver our end-user solutions for Windows, iOS, Android, and even the web. Thanks to the similarities between our module graph system and MediaPipe's calculator graph, our existing processing units can be reused in this new framework with minor modifications. That said, the extended platform set also comes with other challenges, like using only a single 2D camera in most cases instead of a calibrated multi-camera system.

The models, algorithms, and techniques we used were mostly developed to work on mocap data interpreted in the global 3D world. Data extracted from a single-camera setup, of course, cannot be as detailed. That is why we had to adjust our implementations, fine-tune the algorithms, and add some extra logic (e.g., dynamically adapting to the changes of space caused by the handheld-camera use case). Luckily, the MediaPipe framework enables us to implement the core processing units in C++, so we can still benefit from the runtime-optimized core solutions we previously developed.

Some higher-level models trained on 3D data also needed to be retrained to perform better on data originating from a single 2D source. The MediaPipe landmarks are defined by 3D coordinates, which makes it possible to reuse the existing training methods and concepts. On the other hand, the 2D information is more directly extracted, and therefore more stable, than the third coordinate, which we took into consideration when designing the training modifications.

Luckily, it was not necessary to make an entirely new data recording for this purpose. We can still use our huge video database, annotated in great detail. The preprocessed mocap data we extract from these recordings, interpreted in the 3D world, can be used to simulate hand, skeleton, or face landmark detections from any virtual camera view.

Alongside the data from virtual camera views, we also use traditional 2D recordings in sufficient proportion to cover the unique noise characteristics of the landmark detections. Thanks to having most of these types of data in advance, we can focus on the most exciting part: trying the latest techniques and training new models.

Summary

The advancement made possible by MediaPipe enabled SignAll to change its model. In addition to offering all-in-one products for sign language education and translation, SignAll is now starting to offer an SDK for developers. The capabilities of this SDK depend on the type of camera or cameras being used and the available computational capacity. The possible functions enabled by the SDK range from launching video calls by signing a contact’s name (watch a demo here), to adding addresses into navigation by signing (as a counterpart to speech input), to ordering food at a fast-food restaurant’s kiosk or drive-thru. With its mission to make sign language an alternative everywhere voice can be used, SignAll is excited to see more and more apps implement this feature.

We are eager to try future updates of MediaPipe, which could bring us closer to our ultimate goal of making our solutions available to everyone on any device. The most awaited update is the ability to build custom MediaPipe graphs and add our own calculators to web-based solutions aided by WebAssembly technology, so websites will be able to offer a new level of accessibility to Deaf visitors.

Google Developer Student Club 2021 Lead applications are open!

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

Hey, student developers! If you’re passionate about programming and are ready to use your technology skills to help your community, then you should become a Google Developer Student Clubs Lead!

Application forms for the upcoming 2021-2022 academic year are NOW OPEN. Get started at goo.gle/gdsc-leads.

Want to know more? Learn more about the program below.

What are Google Developer Student Clubs?

Google Developer Student Clubs are university-based community groups for students interested in Google developer technologies. With clubs hosted in 106 countries around the world, students from undergraduate and graduate programs with an interest in leading a community are welcome. Together, students learn the latest in Android App Development, Google Cloud Platform, Flutter, and so much more.

By joining a GDSC, students grow their knowledge in a peer-to-peer learning environment and put theory to practice by building solutions for local businesses and their community.

How will I improve my skills?

As a Google Developer Student Club Lead you will have the chance to…

  • Gain mentorship from Google.
  • Join a global community of leaders.
  • Practice by sharing your skills.
  • Help students grow.
  • Build solutions for real life problems.

How can I find a Google Developer Student Club near me?

Google Developer Student Clubs are now in 106 countries with 1250+ groups. Find a club near you or learn how to start your own, here.

When do I need to submit the Application form?

We encourage students to submit their forms as soon as possible. You can learn more about your region’s application deadline, here. Make sure to learn more about our program criteria.

Get Started

From working to solve the United Nations Sustainable Development Goals to helping local communities make informed voting decisions, Google Developer Student Club leads are learning valuable coding skills while making a true difference. As a lead from a Club in Kuala Lumpur, Malaysia put it,

“The secret to our club’s success was that we were able to cultivate a heart of service and a culture of open mentorship.”

We can’t wait to see what our next group of Google Developer Student Club leads will accomplish this year. Join the fun and get started, here.



*Google Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.

Modernizing your Google App Engine applications

Posted by Wesley Chun, Developer Advocate, Google Cloud

Next generation service

Since its initial launch in 2008 as the first product from Google Cloud, Google App Engine, our fully-managed serverless app-hosting platform, has been used by many developers worldwide. Since then, the product team has continued to innovate on the platform: introducing new services, extending quotas, supporting new languages, and adding a Flexible environment to support more runtimes, including the ability to serve containerized applications.

With many original App Engine services maturing to become their own standalone Cloud products, along with users' desire for a more open cloud, the next-generation App Engine launched in 2018 without those bundled proprietary services, but with desired language support such as Python 3 and PHP 7, as well as newly introduced Node.js 8. As a result, users have more options, and their apps are more portable.

With Python 2, Java 8, PHP 5, and Go 1.11 sunset by their respective communities, Google Cloud has reassured users by expressing continued long-term support for these legacy runtimes, including maintaining the Python 2 runtime. So while there is no requirement for users to migrate, developers themselves are expressing interest in updating their applications to the latest language releases.

Google Cloud has created a set of migration guides for users modernizing from Python 2 to 3, Java 8 to 11, PHP 5 to 7, and Go 1.11 to 1.12+ as well as a summary of what is available in both first and second generation runtimes. However, moving from bundled to unbundled services may not be intuitive to developers, so today we're introducing additional resources to help users in this endeavor: App Engine "migration modules" with hands-on "codelab" tutorials and code examples, starting with Python.

Migration modules

Each module represents a single modernization technique. Some are strongly recommended, others less so, and, at the other end of the spectrum, some are quite optional. We will guide you as to which ones are more important. Similarly, there's no fixed order in which to tackle the modules, since it depends on which bundled services your apps use. Yes, some modules must be completed before others, but again, you'll be guided as to "what's next."

More specifically, modules focus on the code changes that need to be implemented, not changes in new programming language releases as those are not within the domain of Google products. The purpose of these modules is to help reduce the friction developers may encounter when adapting their apps for the next-generation platform.

Central to the migration modules are the codelabs: free, online, self-paced, hands-on tutorials. The purpose of Google codelabs is to teach developers one new skill while giving them hands-on experience, and there are codelabs just for Google Cloud users. The migration codelabs are no exception, teaching developers one specific migration technique.

Developers following the tutorials will make the appropriate updates on a sample app, giving them the "muscle memory" needed to do the same (or similar) with their applications. Each codelab begins with an initial baseline app ("START"), leads users through the necessary steps, then concludes with an ending code repo ("FINISH") they can compare against their completed effort. Here are some of the initial modules being announced today:

  • Web framework migration from webapp2 to Flask
  • Updating from App Engine ndb to Google Cloud NDB client libraries for Datastore access
  • Upgrading from the Google Cloud NDB to Cloud Datastore client libraries
  • Moving from App Engine taskqueue to Google Cloud Tasks
  • Containerizing App Engine applications to execute on Cloud Run

Examples

What should you expect from the migration codelabs? Let's preview a pair, starting with the web framework: below is the main driver for a simple webapp2-based "guestbook" app registering website visits as Datastore entities:

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(LIMIT)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

A "visit" consists of a request's IP address and user agent. After visit registration, the app queries for the latest LIMIT visits to display to the end-user via the app's HTML template. The tutorial leads developers a migration to Flask, a web framework with broader support in the Python community. An Flask equivalent app will use decorated functions rather than webapp2's object model:

@app.route('/')
def root():
    'main application (GET) handler'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(LIMIT)
    return render_template('index.html', visits=visits)

The framework codelab walks users through this and the other required code changes in its sample app. Since Flask is more broadly used, this migration makes your apps more portable.

The second example pertains to Datastore access. Whether you're using App Engine's ndb or the Cloud NDB client libraries, the code to query the Datastore for the most recent limit visits may look like this:

def fetch_visits(limit):
    'get most recent visits'
    query = Visit.query()
    visits = query.order(-Visit.timestamp).fetch(limit)
    return (v.to_dict() for v in visits)

If you decide to switch to the Cloud Datastore client library, that code would be converted to:

def fetch_visits(limit):
    'get most recent visits'
    query = DS_CLIENT.query(kind='Visit')
    query.order = ['-timestamp']
    return query.fetch(limit=limit)

The query styles are similar but different. While the sample apps are just that, samples, this kind of hands-on experience is useful when planning your own application upgrades. The goal of the migration modules is to help you separate moving to the next-generation service from making programming language updates, so you can avoid doing both sets of changes simultaneously.

As mentioned above, some migrations are more optional than others. For example, moving away from the App Engine bundled ndb library to Cloud NDB is strongly recommended, but because Cloud NDB is available for both Python 2 and 3, it's not necessary for users to migrate further to Cloud Datastore or Cloud Firestore unless they have specific reasons to do so. Moving to unbundled services is the primary step toward giving users more flexibility and choice, and it ultimately makes their apps more portable. A minimal sketch of the ndb-to-Cloud-NDB step follows below.
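To make that first step concrete, here is a minimal sketch of fetch_visits() after moving to the Cloud NDB client library (the google-cloud-ndb package). The Visit model shown is an illustrative assumption; the point is that the model and query code barely change, while operations now run inside an explicit client context:

from google.cloud import ndb

class Visit(ndb.Model):  # illustrative model definition
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

client = ndb.Client()

def fetch_visits(limit):
    'get most recent visits (Cloud NDB version)'
    # Cloud NDB requires operations to run inside a client context;
    # the query itself is unchanged from App Engine ndb.
    with client.context():
        visits = Visit.query().order(-Visit.timestamp).fetch(limit)
        return [v.to_dict() for v in visits]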

Next steps

For those who are interested in modernizing their apps, a complete table describing each module and links to corresponding codelabs and expected START and FINISH code samples can be found in the migration module repository. We are also working on video content based on these migration modules as well as producing similar content for Java, so stay tuned.

In addition to the migration modules, our team has also set up a separate repo to support community-sourced migration samples. We hope you find all these resources helpful in your quest to modernize your App Engine apps!

Local students team up to help small businesses go online

Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

Recently, young developers in Saudi Arabia from Google Developer Student Clubs, a program of university-based community groups for students interested in Google technologies, came together to help local small businesses. As more companies across the globe rely on online sales, these students noticed that many of their favorite local stores did not have a presence on the web.

So, to help these local shops compete, the up-and-coming developers went into the community and began running workshops to teach local store owners the basics of building a website. Inspired by Google’s fundamentals of digital marketing course, these learning sessions focused on giving small business owners basic front-end skills while introducing them to easy-to-use coding tools.

Front-end skills for small business owners

The first goal of these student-run workshops was to teach local store owners the basics of building web interfaces. In particular, they focused on websites that made it easy for customers to make purchases. To do this, the students first taught store owners the basics of HTML, CSS, and JS. Then, they showed them how to use Chrome DevTools, a collection of web developer tools built directly into the Google Chrome browser that lets programmers inspect and edit HTML, CSS, and JS to optimize the user experience.

Next, the students challenged participants to put their knowledge to use by creating demos of their businesses' new websites. The young developers again used Chrome DevTools to highlight the best practices for testing the demo sites on different devices and screen sizes.

Introduction to coding toolkits

With the basics of HTML, CSS, JS code, and Chrome DevTools covered, the students also wanted to give the store owners tools to help maintain their new websites. To do this, they introduced the small businesses to three toolkits:

  1. Bootstrap, to help templatize future workflow for the websites.
  2. CodePen, to make testing new features and aspects of the websites easier.
  3. Figma, to assist in the development of initial mockups.

With these basic coding skills, access to intuitive toolkits, and completed website demos, the local business owners now had everything they needed to launch their sites to the public, all thanks to a few dedicated students.

Ready to join a Google Developer Student Club near you?

All over the world, students are coming together to learn programming and make a difference in their community as members of local Google Developer Student Clubs. Learn more on how to get involved in projects like this one, here.
