How to build a conversational app using Cloud Machine Learning APIs, Part 2

In part 1 of this blogpost, we gave you an overview of what a conversational tour guide iOS app might look like built on Cloud Machine Learning APIs and API.AI. We also demonstrated how to create API.AI intents and contexts. In part 2, we’ll discuss an advanced API.AI topic — webhook with Cloud Functions. We’ll also show you how to use Cloud Machine Learning APIs (Vision, Speech and Translation) and how to support a second language.

Webhooks via Cloud Functions 

In API.AI, Webhook integrations allow you to pass information from a matched intent into a web service and get a result from it. Read on to learn how to request parade info from Cloud Functions.
  1. Go to the Google Cloud Console, log in with your own account and create a new project. 

  2. Once you’ve created a new project, navigate to that project. 
  3. Enable the Cloud Functions API. 

  4. Create a function. For the purposes of this guide, we’ll call the function “parades”. Select the “HTTP” trigger option, then select “inline” editor. 

  5. Don’t forget to set the “function to execute” field to “parades”.

    You’ll also need to create a “stage bucket”. Click on “browse” — you’ll see the browser, but no buckets will exist yet. 

  6. Click on the “+” button to create the bucket.
    • Specify a unique name for the bucket (you can use your project name, for instance), select “regional” storage and keep the default region (us-central1).
    • Click back on the “select” button in the previous window.
    • Click the “create” button to create the function.

    The function will be created and deployed. 

  7. Click the “parades” function line. In the “source” tab, you’ll see the sources. 
Now it’s time to code our function! We’ll need two files: “index.js” contains the JavaScript/Node.js logic, and “package.json” contains the Node package definition, including the dependencies our function needs.

Here’s our package.json file. It depends on the actions-on-google NPM module, which eases integration with API.AI and with the Actions on Google platform that lets you extend the Google Assistant with your own extensions (usable from Google Home):

{
  "name": "parades",
  "version": "0.0.1",
  "main": "index.js",
  "dependencies": {
    "actions-on-google": "^1.1.1"
  }
}
In the index.js file, here’s our code:

const ApiAiApp = require('actions-on-google').ApiAiApp;

// Handler for the "inquiry.parades" intent.
function parade(app) {
  app.ask(`Chinese New Year Parade in Chinatown from 6pm to 9pm.`);
}

exports.parades = function(request, response) {
  var app = new ApiAiApp({request: request, response: response});
  var actionMap = new Map();
  actionMap.set("inquiry.parades", parade);
  app.handleRequest(actionMap);
};

In the code snippets above: 
  1. We require the actions-on-google NPM module. 
  2. We use the ask() method to let the assistant send a result back to the user. 
  3. We export a function where we’re using the actions-on-google module’s ApiAiApp class to handle the incoming request. 
  4. We create a map that maps “intents” from API.AI to a JavaScript function. 
  5. Then we call handleRequest() to dispatch the incoming request to the matching function. 
  6. Once done, don’t forget to click the “create” function button. It will deploy the function in the cloud. 
There's a subtle difference between the tell() and ask() APIs: tell() ends the conversation and closes the mic, while ask() does not. This difference doesn’t matter for API.AI projects like the one we demonstrate in parts 1 and 2 of this blogpost. When we integrate Actions on Google in part 3, we’ll explain it in more detail. 
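The intent-dispatch pattern in steps 4 and 5 can be sketched without the actions-on-google dependency. This is a simplified illustration, not the library's actual implementation; the handler and function names here are hypothetical:

```javascript
// Map API.AI intent names to handler functions, as in the snippet above.
const actionMap = new Map();
actionMap.set('inquiry.parades',
    () => 'Chinese New Year Parade in Chinatown from 6pm to 9pm.');

// A simplified stand-in for handleRequest(): look up the intent's handler
// and fall back to a default reply when no handler is registered.
function dispatch(intentName) {
  const handler = actionMap.get(intentName);
  return handler ? handler() : "Sorry, I don't know about that.";
}

console.log(dispatch('inquiry.parades'));
```

This map-based dispatch keeps each intent's logic in its own small function, so adding a new intent is just another actionMap.set() call.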

As shown below, the “testing” tab invokes your function, the “general” tab shows statistics and the “trigger” tab reveals the HTTP URL created for your function: 

Your final step is to go to the API.AI console, then click the Fulfillment tab. Enable webhook and paste the URL above into the URL field. 
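Under the hood, the function replies to API.AI with a JSON body. Here is a minimal sketch of the v1 response shape, assuming only the speech and displayText fields (the actions-on-google library builds the full response for you, so you wouldn't normally write this yourself):

```javascript
// Hypothetical helper illustrating the minimal JSON an API.AI (v1) webhook
// returns: "speech" is read aloud, "displayText" is shown in chat surfaces.
function buildWebhookResponse(text) {
  return {
    speech: text,
    displayText: text
  };
}

console.log(JSON.stringify(
    buildWebhookResponse('Chinese New Year Parade in Chinatown from 6pm to 9pm.')));
```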

With API.AI, we’ve built a chatbot that can converse with a human by text. Next, let’s give the bot “ears” to listen with Cloud Speech API, “eyes” to see with Cloud Vision API, a “mouth” to talk with the iOS text-to-speech SDK and “brains” for translating languages with Cloud Translation API.

Using Cloud Speech API 

Cloud Speech API includes an iOS sample app. It’s quite straightforward to integrate the gRPC non-streaming sample app into our chatbot app. You’ll need to acquire an API key from Google Cloud Console and replace this line in SpeechRecognitionService.m with your API key.


Landmark detection 

Follow this example to use Cloud Vision API on iOS. You’ll need to replace the label and face detection request with landmark detection, as shown below (where binaryImageData is your base64-encoded image):

 NSDictionary *paramsDictionary =
     @{@"requests":@[
         @{@"image":@{@"content":binaryImageData},
           @"features":@[
               @{@"type":@"LANDMARK_DETECTION", @"maxResults":@1}]}]};

You can use the same API key you used for Cloud Speech API. 

Text to speech

iOS 7+ has a built-in text-to-speech SDK, AVSpeechSynthesizer. The code below is all you need to convert text to speech.

AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:message];
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
[synthesizer speakUtterance:utterance];

Supporting multiple languages

Supporting additional languages in Cloud Speech API is a one-line change on the iOS client side. (Currently, there's no support for mixed languages.) For Chinese, replace this line in SpeechRecognitionService.m:

recognitionConfig.languageCode = @"en-US";

with:

recognitionConfig.languageCode = @"zh-Hans";

To support additional text-to-speech languages, set a voice on the utterance before speaking:

AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:message];
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-Hans"];
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
[synthesizer speakUtterance:utterance];
Both Cloud Speech API and Apple’s AVSpeechSynthesisVoice support BCP-47 language codes.

Cloud Vision API landmark detection currently only supports English, so you’ll need to use the Cloud Translation API to translate to your desired language after receiving the English-language landmark description. (You would use Cloud Translation API similarly to Cloud Vision and Speech APIs.) 
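As a sketch of what that translation request looks like (shown in Node.js for brevity; on iOS you would issue the equivalent HTTPS GET with NSURLSession, and YOUR_API_KEY is a placeholder):

```javascript
// Build a Cloud Translation API (v2) request URL that translates the
// English landmark description into the target language.
function translateUrl(text, targetLanguage, apiKey) {
  const params = new URLSearchParams({
    q: text,              // text to translate
    target: targetLanguage,
    key: apiKey
  });
  return 'https://translation.googleapis.com/language/translate/v2?' +
      params.toString();
}

console.log(translateUrl('Golden Gate Bridge', 'zh', 'YOUR_API_KEY'));
```

The response contains the translated string, which you can then hand to the text-to-speech code shown earlier.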

On the API.AI side, you’ll need to create a new agent and set its language to Chinese. One agent can support only one language. If you try to use the same agent for a second language, machine learning won’t work for that language. 
You’ll also need to create all intents and entities in Chinese. 
And you’re done! You’ve just built a simple “tour guide” chatbot that supports English and Chinese.

Next time 

We hope this example has demonstrated how simple it is to build an app powered by machine learning. You can download the source code from GitHub.

In part 3, we’ll cover how to build this app on Google Assistant with Actions on Google integration.

There’s no place like home, in Google Earth

When you opened Google Earth for the very first time, where did you go? For most people there's a common destination: home. The definition of "home" changes by country, culture and climate. So as part of the relaunch of Google Earth back in April, we introduced This is Home, an interactive tour of five traditional homes around the world. You could step inside the colorful home of Kancha Sherpa in Nepal, or head to the desert and learn how an extended drought changed the lives of the Bedouin people.

Since then, we’ve traveled to dozens more homes across six continents and today we’re bringing 22 new homes and cultures to explore in Google Earth.
This is Ngaramat Loongito, Kenya, home to a Maasai community. Photo courtesy of Maasai Wilderness Conservation Trust

Start with a Torajan home, built to withstand Indonesia’s wet season. Then head to Fujian Province, China, to peek inside the immense walls the Hakka people built to keep away bandits, beasts and warlords. See the shape-shifting yurt homes Mongolian country-dwellers use to move where their herds roam. Visit a village on Madagascar’s southwest coast where the Vezo people live off the third largest coral reef system in the world. Finally, see how a Paiwan shaman has integrated her spirituality into the walls of her home in Taiwan.


To tell these stories, we worked with partners and communities to digitally preserve homes of different cultures in Street View. Many of these homes belong to indigenous people, such as The Garasia people of India, the Chatino people of Mexico, the Torajan people of Indonesia, and the Māori people of New Zealand. Their homes represent their unique cultural identity and ways of relating to the environment.

Some of the images and stories provide a snapshot in time of cultures that face economic, environmental and population pressures. For example, the Inuit people of Sanikiluaq have been building igloos for schoolchildren to learn in for decades, but in recent winters, conditions haven’t been cold enough to create the right type of snow. It’s important to document these lifestyles now, because some may be disappearing.

Thank you to the families who shared their homes, their customs and their culture with the world!

Meet the fifth grader turning water bottles into light bulbs to brighten communities

Schools in Latin America and around the world are searching for ways to take student impact beyond the classroom. In Mexico, we wanted to explore how teachers and students are using technology to empower a rising generation of innovative changemakers—and this week, we’re sharing some of the stories we found. Tune into the hashtag #innovarparami to see how education leaders in Latin America are thinking about innovation.

Twelve-year-old Bryan Gonzalez was traveling through a neighborhood near his school when the unlit windows of several homes caught his attention. When his parents and teachers explained to him that those homes lacked electricity, he started to search for information about access to lighting in communities in Mexico and around the globe. His research led him to discover that nearly 15 percent of the world’s population lives without light.

Believing that every community deserves access to commodities as basic as lighting, Bryan decided to turn his annual school science project into a mission to defeat darkness. With the support of his peers, teachers and parents, Bryan began to brainstorm sustainable, affordable methods to illuminate the world around him.

His solution? Converting water bottles into light bulbs!
This fifth grader uses water bottles to brighten communities. #innovarparami

Bryan recently implemented his prototype in the field for the first time, and we captured the experience as he began to install his homemade light bulbs in the very houses that had initially inspired him to take on his project. In the moments after Bryan installed his lightbulbs, community members began to process the impact of Bryan’s invention. Families reflected on the difficulties inherent in relying on candlelight to assist kids with homework, the daily pressure to finish working by sunset because no work could get done in the dark, and what unlit houses and streets meant for the physical safety of children and parents alike. “Things are going to be different now. This 12-year-old boy has changed this family’s life,” said Doña Sofía, a mother and grandmother, as she embraced him.

This image was captured just moments after Doña Sofía’s house had lighting for the very first time, thanks to Bryan’s efforts.

Seeing his efforts materialize into real-world impact has been extremely gratifying for Bryan, but he knows this is just the beginning. As Bryan sets his eyes on new horizons, he hopes to start inspiring other young people around the world to implement the prototype in homes that lack electricity in their own communities.

"Your age doesn’t matter. Your idea does." - Bryan

Bryan’s definition of innovation is “finding creative ways to help a community solve their problems.” Follow the hashtag #innovarparami to see how other people are defining—and cultivating—innovation.

My Path to Google: Zaven Muradyan, Software Engineer

Welcome to the sixth installment of our blog series “My Path to Google”. These are real stories from Googlers highlighting how they got to Google, what their roles are like, and even some tips on how to prepare for interviews.

Today’s post is all about Zaven Muradyan. Read on!
Can you tell us a bit about yourself?
I was born in Armenia and lived there for about seven years before moving to Dallas, Texas. After several subsequent moves, I eventually ended up in the Tri-Cities area of Washington, where I went to the local community college to study computer science and graduated with an associate degree.
What’s your role at Google?
I'm a software engineer on the Google Cloud Console team, working on the frontend infrastructure. In addition to working on framework code that affects the rest of the project, I also work on tooling that improves other developers' productivity, with the ultimate goal of improving the experience for all users of Google Cloud Platform.
What inspires you to come in every day?
My colleagues! It's a joy to work on challenging and large-scale technical problems with so many talented and kind people, and I am able to learn from my coworkers every day. I also get to work with several open source projects and collaborate closely with the Angular team at Google.
When did you join Google?
I officially joined Google a little more than two years ago. I had always admired Google's product quality and engineering culture, but prior to starting the recruitment process, I had never seriously considered applying because I didn't feel like I had the formal credentials.
How did the recruitment process go for you?
It started, in a sense, when one year I decided to try participating in Google Code Jam just for fun (and I didn't even get very far in the rounds!). A little while later, I was contacted by a recruiter from Google who had seen some of my personal open source projects. To my surprise, they had originally found me because I had participated in Code Jam! I was excited and decided to do my best at going through the interview process, but was prepared for it to not work out.
I studied as much as I could, and tried to hone my design and problem solving skills. I wasn't quite sure what to expect of the interviews, but when the time came, it ended up being an enjoyable, although challenging, experience. I managed to pass the interviews and joined my current team!
What do you wish you’d known when you started the process?
Prior to going through the interviews, I had the idea that only highly educated or extremely experienced engineers had a chance at joining Google. Even after passing the interviews, I was still worried that my lack of a 4-year degree would cause problems. Having gone through the process, and now having conducted interviews myself, I can say that that is certainly not the case. Googlers are made up of people from all kinds of different backgrounds!
Do you have any tips you’d like to share with aspiring Googlers?
Don't assume that you won't be able to succeed just because you may have a "nontraditional" background! Go ahead and apply, then prepare well for the interviews. What matters most is your ability to problem solve and design solutions to complex issues, so keep practicing and don't give up.

Can you tell us more about the resources you used to prepare for your interviews?

I started by going through "Programming Interviews Exposed," which acted as a good intro to my preparation. After that, I tried learning and implementing many of the most common algorithms and data structures that I could find, while going through some example problems from sites like Topcoder and previous iterations of Code Jam. Finally, one specific resource that I found to be very helpful was HiredInTech, especially for system design.

Inviting students to participate in Code to Learn contest 2017

Over the years, Computer Science and Programming have evolved and become one of the strongest means of solving real-life problems. Children are exposed to technology much earlier than ever before, and these foundational years are the best time to start nurturing their scientific inquiry and curiosity by teaching them how to use technology to solve problems around them.

In line with this objective, Google India has been running the Code to Learn competition for school students in India for the last 4 years. The program has now also been adopted by the Ministry of Human Resource Development, Government of India, under the Rashtriya Avishkar Abhiyan. And we’re delighted to invite students from Class 5 to 10 from any school in India to participate in the Google India Code to Learn contest 2017. Parents or legal guardians of students can register on a student's behalf on the contest website.

We use Scratch and App Inventor, two tools developed at MIT, to introduce students to programming and computer science in a fun and engaging way. Using these tools, students create a wide variety of projects, including games, animations, storytelling and even Android apps, without writing a single line of programming-language code!

Contest registrations are already open and will stay open until September 10, 2017, which is also the last date for submitting projects. The contest site links to online tutorials for both Scratch and App Inventor, which are easy to learn.

In addition to this, we will also host training programs for teachers who can further teach their students in the classrooms. We are excited about this year's contest, and are looking forward to seeing the innovation and creativity that students will present to us via their projects.

Code to Learn is co-organized by ACM India and IIIT Delhi. ACM is the worldwide society for scientific and educational computing with an aim to advance Computer Science both as a science and as a profession. IIIT Delhi is a research-oriented university based in Delhi.

Posted by Ashwani Sharma, Head of University Relations and Computer Science Outreach

Achoo! Watch out for seasonal sniffles with pollen forecasts on Google

While most of you out there are enjoying the dog days of summer, some are bracing themselves for the fall allergy season that’s right around the corner. In fact, one in five Americans suffers from seasonal allergies. Across the U.S., we see that search interest in allergies spikes each year in April and May and then again in September. To help you get ahead of your seasonal allergy symptoms, when you search on mobile for pollen or allergy information on Google, you’ll now see useful at-a-glance details on pollen levels in your area.

To make the most up-to-date and accurate information available, we’ve worked with The Weather Channel to integrate their pollen index and forecast data directly into Google. To see more pollen and allergy details, you can tap the link within the pollen experience.


In addition, when the pollen count in your area is particularly high, you can receive reminders in the Google app. To opt in to these notifications, just search for pollen levels, pollen forecast or a similar query on Google, then tap “turn on” when prompted.

With this pollen info, you can better understand and prepare for your seasonal allergy symptoms. Stop sneezing and go out and enjoy those fall colors!

Source: Search

The results are in for the 2017 Google Online Marketing Challenge!


More than 600 professors and 12,000 students from over 65 countries competed in the 2017 Google Online Marketing Challenge (GOMC)...and the results are in!

This year we introduced a new AdWords Certification award and an algorithm evaluating performance across more campaign types, delivering some of the most impressive work seen in the history of GOMC. Check out our AdWords Business, AdWords Certification, and Social Impact winners below, and reference our GOMC Past Challenges page for a full list of the 2017 team results.

Congratulations to the winners and a big round of applause for all teams that participated! Thanks to all of the support from professors and the thousands of students who have helped businesses and nonprofits in their communities, we have had much to celebrate together. Over the past 10 years, more than 120,000 students and professors across almost 100 countries have participated in the Google Online Marketing Challenge, helping more than 15,000 businesses and nonprofits grow online.

Though we are taking a step back from the Google Online Marketing Challenge as we know it and exploring new opportunities to support practical skill development for students, we will continue to provide free digital skills training and encourage academics to keep fostering a learning environment that connects the classroom with industry. Visit the FAQ page on the GOMC website for resources to help you carry on project work like GOMC, a place to share feedback on our student development programs, and a way to stay updated on our latest offerings.

2017 Google Online Marketing Challenge Winners

AdWords Business Awards
Global Winners
  • School: James Madison University | United States
  • Professor: Theresa B. Clarke
  • Team: Michelle Mullins, George Shtern, Caroline Galiwango and Raquel Sheriff
Regional Winners
  • Region: Americas
  • School: James Madison University | United States
  • Professor: Theresa B. Clarke
  • Team: Jonathan Nicely, Ken Prevete, Jessica Drennon and Jesse Springer
  • Region: Asia & Pacific
  • School: University of Delhi | India
  • Professor: Ginmunlal Khongsai
  • Team: Prakriti Sharma, Raghav Shadija and Ankita Grewal
  • Region: Europe
  • School: Adam Mickiewicz University in Poznań | Poland
  • Professor:  Wojciech Czart
  • Team: Michał Paszyn, Marek Buliński, Kamil Poturalski, Aneta Disterheft, Damian Koniuszy and Kamila Malanowicz
  • Region: Middle East & Africa
  • School: Kenyatta University | Kenya
  • Professor: Paul Mwangi Gachanja
  • Team: Peter Wangugi, Jackson Ndung'u, Selpha Kung'u and Antony Gathathu
AdWords Certification Awards
Global Winners
  • School: University of Applied Sciences Würzburg-Schweinfurt | Germany
  • Professor: Mario Fischer
  • Team: Tobias Fröhlich, Lorenz Himmel, Sabine Zinkl, Thomas Lerch, Philipp Horsch and Maksym Vovk
Regional Winners
  • Region: Americas
  • School: James Madison University | United States
  • Professor: Theresa B. Clarke
  • Team: Nicole Carothers, Emily Vaeth, Annalise Capalbo and Brendan Reece
  • Region: Asia & Pacific
  • School: Indian Institute of Management Indore | India
  • Professor: Rajendra V. Nargundkar
  • Team: Kalaivani G, Swathika S, Chandran M, Akshaya S, Sadhana P and Mathan Kumar V
  • Region: Europe
  • School: University of Applied Sciences Würzburg-Schweinfurt | Germany
  • Professor: Mario Fischer
  • Team: Matthias Schloßareck, Michelle Skodowski, Lena Thauer, Yen Nguyet Dang, David Mohr and Sebastian Kaufmann
  • Region: Middle East & Africa
  • School: The Federal University of Technology, Akure | Nigeria
  • Professor: Ajayi Olumuyiwa Olubode
  • Team: John Afolabi, Adebayo Olaoluwa Egbetade, Olubusayo Amowe, Israel Temilola Olaleye, Raphael Oluwaseyi Lawrence and Taiwo Joel Akinlosotu
  • Client: Stutern
AdWords Social Impact Awards
  • 1st Place
  • School: The University of Texas at Austin | United States
  • Professor: Lisa Dobias
  • Team: Kaitlin Reid, Ben Torres, Zachary Kornblau, Kendall Troup, Kristin Kish and Angela Fayad
  • Client: Thinkery
  • 2nd Place
  • School: James Madison University | United States
  • Professor: Theresa B. Clarke
  • Team: Michelle Mullins, George Shtern, Caroline Galiwango and Raquel Sheriff
  • 3rd Place
  • School: James Madison University | United States
  • Professor: Theresa B. Clarke
  • Team: Jonathan Nicely, Ken Prevete, Jessica Drennon and Jesse Springer

Announcing v201708 of the DFP API

Today we’re pleased to announce several additions and improvements to the DFP API with the release of v201708.

CreativeService: The API now supports the skippableAdType attribute on VideoCreatives and the mezzanineFile asset on VideoRedirectCreatives.

CreativeWrapperService: The HTML header and footer fields have been renamed to htmlHeader and htmlFooter, and they are now strings instead of CreativeWrapperHtmlSnippets.

ProposalService: Proposals are now automatically synced with marketplace. Therefore, the proposal action SyncProposalsWithMarketplace has been removed (sending this action with performProposalAction is now a no-op in previous API versions).

PublisherQueryLanguageService: In v201702 the Change_History table was introduced. Now, new entities for Sales Management have been added to the EntityType column. The new entities are BASE_RATE, PREMIUM_RATE, PRODUCT, PRODUCT_PACKAGE, PRODUCT_PACKAGE_ITEM, PRODUCT_TEMPLATE, PROPOSAL, PROPOSAL_LINK, PROPOSAL_LINE_ITEM, PACKAGE, RATE_CARD, and WORKFLOW.

ReportService: DateRangeType now supports a new LAST_3_MONTHS option. Also, several deprecated reporting metrics have been removed. They can be replaced with their corresponding partner management metrics, so you will need to update any code using those fields. For more information, check out the support entry for partner management reporting metrics.

For a full list of API changes in v201708, see the release notes.

With each new release comes a new deprecation. If you're using v201611 or earlier, it's time to look into upgrading. Also, remember that v201608 will be sunset at the end of August 2017.

As always, if you have any questions, feel free to reach out to us on the DFP API forums or the Ads Developer Google+ page.

Making Great Mobile Games with Firebase

So much goes into building and maintaining a mobile game. Let’s say you want to ship it with a level builder for sharing content with other players and, looking forward, you want to roll out new content and unlockables linked with player behavior. Of course, you also need players to be able to easily sign into your soon-to-be hit game.

With a DIY approach, you’d be faced with building user management, data storage, server-side logic, and more. This would take a lot of your time and, more importantly, divert critical resources away from what you really want to do: build that amazing new mobile game!

Our Firebase SDKs for Unity and C++ provide you with the tools you need to add these features and more to your game with ease. Plus, to help you better understand how Firebase can help you build your next chart-topper, we’ve built a sample game in Unity and open sourced it: MechaHamster. Check it out on Google Play or download the project from GitHub to see how easy it is to integrate Firebase into your game.
Before you dive into the code for Mecha Hamster, here’s a rundown of the Firebase products that can help your game be successful.


One of the best tools you have to maintain a high-performing game is your analytics. With Google Analytics for Firebase, you can see where your players might be struggling and make adjustments as needed. Analytics also integrates with AdWords and other major ad networks to maximize your campaign performance. If you monetize your game using AdMob, you can link your two accounts and see the lifetime value (LTV) of your players, from in-game purchases and AdMob, right from your Analytics console. And with StreamView, you can see how players are interacting with your game in real time.

Test Lab for Android - Game Loop Test

Before releasing updates to your game, you’ll want to make sure it works correctly. However, manual testing can be time consuming when faced with a large variety of target devices. To help solve this, we recently launched Firebase Test Lab for Android Game Loop Test at Google I/O. If you add a demo mode to your game, Test Lab will automatically verify your game is working on a wide range of devices. You can read more in our deep dive blog post here.


Another thing you’ll want to be sure to take care of before launch is easy sign-in, so your users can start playing as quickly as possible. Firebase Authentication can help by handling all sign-in and authentication, from simple email + password logins to support for common identity providers like Google, Facebook, Twitter, and GitHub. Just announced at I/O, Firebase also now supports phone number authentication. And Firebase Authentication shares state across devices, so your users can pick up where they left off, no matter what platform they’re using.

Remote Config

As more players start using your game, you realize that there are few spots that are frustrating for your audience. You may even see churn rates start to rise, so you decide that you need to push some adjustments. With Firebase Remote Config, you can change values in the console and push them out to players. Some players having trouble navigating levels? You can adjust the difficulty and update remotely. Remote Config can even benefit your development cycle; team members can tweak and test parameters without having to make new builds.

Realtime Database

Now that you have a robust player community, you’re probably starting to see a bunch of great player-built levels. With Firebase Realtime Database, you can store player data and sync it in real-time, meaning that the level builder you’ve built can store and share data easily with other players. You don't need your own server and it’s optimized for offline use. Plus, Realtime Database integrates with Firebase Auth for secure access to user specific data.

Cloud Messaging & Dynamic Links

A few months go by and your game is thriving, with high engagement and an active community. You’re ready to release your next wave of new content, but how can you efficiently get the word out to your users? Firebase Cloud Messaging lets you target messages to player segments, without any coding required. And Firebase Dynamic Links allow your users to share this new content — or an invitation to your game — with other players. Dynamic Links survive the app install process, so a new player can install your app and then dive right into the piece of content that was shared with him or her.

At Firebase, our mission is to help mobile developers build better apps and grow successful businesses. When it comes to games, that means taking care of the boring stuff, so you can focus on what matters — making a great game. Our mobile SDKs for C++ and Unity are available now.

By Darin Hilton, Art Director

Around the Globe – Improved Operations for Girl Scouts Japan

For this segment of G4NP Around the Globe, we’re highlighting Girl Scouts of Japan: a nonprofit that supports more than 30,000 young women across the country with its vibrant community and empowering programs. With such a large network of members, the nonprofit needed technology to effectively keep members updated on events, ensure personal information stays secure, and manage their Local Council’s communications. The suite of tools provided by Google for Nonprofits has allowed Girl Scouts of Japan to improve their productivity and increase their member base, giving them more time to focus on supporting young women.  

Operations - G Suite

G Suite has helped Girl Scouts of Japan operate more efficiently and provide a positive experience for their members. More than 7,000 attendees signed up through Google Forms for e-learning programs about safety procedures before they headed off on a scouting adventure. Google Sheets helped the chapter quickly access and organize this data. And by migrating to Gmail, the nonprofit feels secure with their custom Google privacy settings and the tool’s ability to weed out spam and malware.

Girl Scouts of Japan has also used technology to revolutionize a central component of the global Girl Scout organization: badges. Typically, Girl Scouts can earn woven badges for their vests by completing tasks or trainings. With the help of Google tools, Girl Scouts of Japan has created an interesting twist to this tradition: using Forms to create quizzes on their Google Site and reward women with digital badges.  

Furthermore, the nonprofit creates engaging content with Google Sites and shares their manuals and materials on Google Drive so each Local Council can always access the most updated trainings. With G Suite scaled to the entire organization, the nonprofit seamlessly keeps all communications and information safely stored in one place—allowing them to spend less time handling administrative tasks, and more freedom to plan engaging events.

Girl Scouts Japan - Virtual Tour of WAGGGS World Centers


Visibility - Google Ad Grants, YouTube, Google Maps

Girl Scouts of Japan recognized an opportunity to connect with their young target audience by building a strong online presence. Ad Grants helps them reach new members, with over 3,000 monthly visitors to their site—a 500% increase in just two months. To further enhance their online engagement, the nonprofit created a YouTube channel with original content showcasing the strength of their community and the empowering programs they provide. And with Google Maps, members can easily find events happening nearby, resulting in over 18,000 views of event information.

Lastly, to spread awareness and encourage women to get involved, Girl Scouts of Japan uses Google Earth to provide a global view of their expansive network. Using instructions from Earth Outreach tutorials, they created this Virtual Tour to share with members to encourage a global perspective and community of Girl Scouts.

From G Suite to YouTube, Girl Scouts of Japan has successfully harnessed the power of technology to cultivate a strong community of women who support each other and grow together. Read the full story by visiting our Community Stories page on our Google for Nonprofits site.


To see if your nonprofit is eligible to participate, review the Google for Nonprofits eligibility guidelines. Google for Nonprofits offers organizations like yours free access to Google tools like Gmail, Google Calendar, Google Drive, Google Ad Grants, YouTube for Nonprofits and more. These tools can help you reach new donors and volunteers, work more efficiently, and tell your nonprofit’s story. Learn more and enroll here.

Footnote:  Statements are provided by Nonprofits that received products as part of the Google for Nonprofits program, which offers products at no charge to qualified nonprofits.