Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Introducing Cast Connect: a better way to integrate Google Cast directly into your Android TV apps

Posted by Meher Vurimi, Product Manager

For more than seven years, Google Cast has made it easy for users to enjoy your content on the big screen with Chromecast or a Chromecast built-in TV. We’re always looking to improve the casting experience, which is why we’re excited to introduce Cast Connect. This new feature allows users to cast directly to your Android TV app while still allowing control from your sender app.

Why is this helpful?

With Google Cast, your app is the remote control - helping users find, play, pause, seek, stop, and otherwise control what they’re watching. It enables you to extend video or audio from your Android, iOS, or Chrome app to a TV or sound system. With Android TV, partners can build apps that let users experience your app's immersive content on their TV screen and control it with a remote.

With Cast Connect, we are combining the best of both worlds: first, by prioritizing playback via the Android TV app to deliver a richer and more immersive experience, and second, by allowing the user to still control that experience from your Android, iOS, or Chrome app - and now also directly with their Android TV’s remote control. Cast Connect helps users easily engage with other content directly on the TV instead of having to browse for additional content only on their mobile device.

Cast Connect User Journey on Stan
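
To give a sense of what prioritizing playback via the Android TV app involves in code, here is a minimal Kotlin sketch of the receiver side built on CastReceiverContext from the play-services-cast-tv library. The MyTvApp and PlayerActivity class names are placeholders, and the required manifest entries (such as registering a ReceiverOptionsProvider) are omitted:

import android.app.Application
import android.content.Intent
import androidx.appcompat.app.AppCompatActivity
import com.google.android.gms.cast.tv.CastReceiverContext

// Placeholder Application class: initialize the Cast receiver context once.
class MyTvApp : Application() {
    override fun onCreate() {
        super.onCreate()
        CastReceiverContext.initInstance(this)
    }
}

// Placeholder player Activity: start/stop receiving Cast commands with the
// lifecycle, and let MediaManager handle load requests from sender apps.
class PlayerActivity : AppCompatActivity() {

    override fun onStart() {
        super.onStart()
        CastReceiverContext.getInstance().start()
    }

    override fun onStop() {
        CastReceiverContext.getInstance().stop()
        super.onStop()
    }

    override fun onNewIntent(intent: Intent) {
        val mediaManager = CastReceiverContext.getInstance().mediaManager
        if (mediaManager.onNewIntent(intent)) {
            // The intent was a Cast load request and has been handled.
            return
        }
        super.onNewIntent(intent)
    }
}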

Availability

We’re working closely with a number of partners on bringing Cast Connect to their apps and, most recently, we’re excited to announce that CBS and our Australian SVOD partner, Stan, have launched Cast Connect. Starting today, the Cast Connect library is available on Android, iOS, and Chrome. To get started with adding Cast Connect to your existing framework, head over to the Google Cast Developers site. Along the way, the Cast SDK team and the developer community are available to help you and answer questions on Stack Overflow using the google-cast-connect tag.
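
On the Android sender side, opting an existing Cast integration into Cast Connect largely comes down to declaring, in your OptionsProvider, that your receiver app is compatible with an Android TV receiver. A minimal Kotlin sketch, with a placeholder receiver application ID:

import android.content.Context
import com.google.android.gms.cast.LaunchOptions
import com.google.android.gms.cast.framework.CastOptions
import com.google.android.gms.cast.framework.OptionsProvider
import com.google.android.gms.cast.framework.SessionProvider

class CastOptionsProvider : OptionsProvider {

    override fun getCastOptions(context: Context): CastOptions {
        // Allow the session to be launched on the Android TV app when available.
        val launchOptions = LaunchOptions.Builder()
            .setAndroidReceiverCompatible(true)
            .build()

        return CastOptions.Builder()
            .setReceiverApplicationId("YOUR_RECEIVER_APP_ID") // placeholder
            .setLaunchOptions(launchOptions)
            .build()
    }

    override fun getAdditionalSessionProviders(context: Context): List<SessionProvider>? = null
}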

Happy Casting!

Digital Ink Recognition in ML Kit

Posted by Mircea Trăichioiu, Software Engineer, Handwriting Recognition

A month ago, we announced changes to ML Kit to make mobile development with machine learning even easier. Today we’re announcing the addition of the Digital Ink Recognition API on both Android and iOS, allowing developers to create apps where stylus and touch act as first-class inputs.

Digital ink recognition: the latest addition to ML Kit’s APIs

Digital Ink Recognition is different from the existing Vision and Natural Language APIs in ML Kit, as it takes neither text nor images as input. Instead, it looks at the user's strokes on the screen and recognizes what they are writing or drawing. This is the same technology that powers handwriting recognition in Gboard - Google’s own keyboard app, which we described in detail in a 2019 blog post. It's also the same underlying technology used in the Quick, Draw! and AutoDraw experiments.

Handwriting input in Gboard

Turning doodles into art with Autodraw

With the new Digital Ink Recognition API, developers can now use this technology in their apps as well, for everything from letting users input text and figures with a finger or stylus to transcribing handwritten notes to make them searchable - all in near real time and entirely on-device.

Supports many languages and character sets

Digital Ink Recognition supports 300+ languages and 25+ writing systems, including all major Latin-script languages as well as Chinese, Japanese, Korean, Arabic, Cyrillic, and more. Classifiers parse the written text into a string of characters.

Recognizes shapes

Other classifiers can describe shapes, such as drawings and emojis, by the class to which they belong (circle, square, happy face, etc.). We currently support an AutoDraw sketch recognizer, an emoji recognizer, and a basic shape recognizer.

Works offline

The Digital Ink Recognition API runs on-device and does not require a network connection. However, you must download one or more models before you can use a recognizer. Models are downloaded on demand and are around 20 MB in size. Refer to the model download documentation for more information.
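
As a rough illustration, here is a minimal Kotlin sketch of downloading a recognition model with ML Kit's digital ink recognition library; the "en-US" language tag and the empty success/failure handlers are placeholders:

import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModel
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModelIdentifier

fun downloadInkModel() {
    // Pick the model for the language (or shape/emoji recognizer) you need.
    val identifier = DigitalInkRecognitionModelIdentifier.fromLanguageTag("en-US")
        ?: return // no model available for this tag
    val model = DigitalInkRecognitionModel.builder(identifier).build()

    RemoteModelManager.getInstance()
        .download(model, DownloadConditions.Builder().build())
        .addOnSuccessListener { /* model is ready for recognition */ }
        .addOnFailureListener { e -> /* handle the download error, e.g. log it */ }
}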

Runs fast

The time to perform a recognition call depends on the exact device and the size of the input stroke sequence. On a typical mobile device recognizing a line of text takes about 100 ms.
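
Once a model is available on the device, recognition itself is a single asynchronous call on an Ink built from the user's strokes. A minimal Kotlin sketch, with hard-coded points standing in for real touch events:

import com.google.mlkit.vision.digitalink.DigitalInkRecognition
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModel
import com.google.mlkit.vision.digitalink.DigitalInkRecognizerOptions
import com.google.mlkit.vision.digitalink.Ink

fun recognize(model: DigitalInkRecognitionModel) {
    val recognizer = DigitalInkRecognition.getClient(
        DigitalInkRecognizerOptions.builder(model).build()
    )

    // Build an Ink from strokes; each point is (x, y, timestampMillis).
    // In a real app these values come from touch events on your drawing view.
    val stroke = Ink.Stroke.builder()
        .addPoint(Ink.Point.create(10f, 10f, 0L))
        .addPoint(Ink.Point.create(50f, 15f, 30L))
        .build()
    val ink = Ink.builder().addStroke(stroke).build()

    recognizer.recognize(ink)
        .addOnSuccessListener { result ->
            // Candidates are ordered by score; take the top transcription.
            val text = result.candidates.firstOrNull()?.text
        }
        .addOnFailureListener { e -> /* handle the recognition error */ }
}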

How to get started

If you would like to start using Digital Ink Recognition in your mobile app, head over to the documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Automatic Deployment of Hugo Sites on Firebase Hosting and Drafts on Cloud Run

Posted by James Ward, Developer Advocate

Recently I completed the migration of my blog from WordPress to Hugo, and I wanted to take advantage of it now being a static site by hosting it on a Content Delivery Network (CDN). With Hugo, the source content is plain files instead of rows in a database. In the case of my blog, those files are in git on GitHub. But when the source files change, the site needs to be regenerated and redeployed to the CDN. Also, sometimes it is nice to have drafts available for review. I set up a continuous delivery pipeline with Cloud Build that deploys changes to my prod site on Firebase Hosting and drafts on Cloud Run. Read on for instructions on how to set all this up.

Step 1a) Setup A New Hugo Project

If you do not have an existing Hugo project you can create a GitHub copy (i.e. fork) of my Hugo starter repo:

Step 1b) Setup Existing Hugo Project

If you have an existing Hugo project you'll need to add some files to it:

.firebaserc

{
  "projects": {
    "production": "hello-hugo"
  }
}

cloudbuild-draft.yaml

steps:
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        # Get the theme git submodule
        THEME_URL=$(git config -f .gitmodules --get-regexp '^submodule\..*\.url$' | awk '{ print $2 }')
        THEME_DIR=$(git config -f .gitmodules --get-regexp '^submodule\..*\.path$' | awk '{ print $2 }')
        rm -rf themes
        git clone $$THEME_URL $$THEME_DIR

  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        docker build -t gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA -f - . << EOF
        FROM klakegg/hugo:latest
        WORKDIR /workspace
        COPY . /workspace
        ENTRYPOINT hugo -D -p \$$PORT --bind \$$HUGO_BIND --renderToDisk --disableLiveReload --watch=false serve
        EOF
        docker push gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA

  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - run
      - deploy
      - --image=gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA
      - --platform=managed
      - --project=$PROJECT_ID
      - --region=us-central1
      - --memory=512Mi
      - --allow-unauthenticated
      - $REPO_NAME-$BRANCH_NAME

cloudbuild.yaml

steps:
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        # Get the theme git submodule
        THEME_URL=$(git config -f .gitmodules --get-regexp '^submodule\..*\.url$' | awk '{ print $2 }')
        THEME_DIR=$(git config -f .gitmodules --get-regexp '^submodule\..*\.path$' | awk '{ print $2 }')
        rm -rf themes
        git clone $$THEME_URL $$THEME_DIR

  - name: 'gcr.io/cloud-builders/curl'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        curl -sL https://github.com/gohugoio/hugo/releases/download/v0.69.2/hugo_0.69.2_Linux-64bit.tar.gz | tar -zxv
        ./hugo

  - name: 'gcr.io/cloud-builders/wget'
    entrypoint: '/bin/sh'
    args:
      - '-c'
      - |
        # Get firebase CLI
        wget -O firebase https://firebase.tools/bin/linux/latest
        chmod +x firebase
        # Deploy site
        ./firebase deploy --project=$PROJECT_ID --only=hosting

firebase.json

{
  "hosting": {
    "public": "public"
  }
}


Step 2) Setup Cloud Build Triggers

In the Google Cloud Build console, connect to your newly forked repo and select it. Then:

  1. Create the default push trigger.
  2. Edit the new trigger and set it to only fire on changes to the ^master$ branch.
  3. Create a new trigger: give it a name like drafts-trigger, specify the branch selector as .* (i.e. any branch), and set the build configuration file type to "Cloud Build configuration file" with a value of cloudbuild-draft.yaml.
  4. Set up permissions for the Cloud Build process to manage Cloud Run and Firebase Hosting: visit the IAM management page, locate the member with the name ending in @cloudbuild.gserviceaccount.com, select the "pencil" / edit button, and add a role for "Cloud Run Admin" and another for "Firebase Hosting Admin".

Your default "prod" trigger isn't ready to test yet, but you can test the drafts on Cloud Run by going back to the Cloud Build Triggers page and clicking the "Run Trigger" button on the "drafts-trigger" line. Check the build logs by finding the build in the Cloud Build History. Once the build completes, visit the Cloud Run console to find your newly created service, which hosts the drafts version of your new blog. Note that the service name includes the branch so that you can see drafts from different branches.

Step 3) Setup Firebase Hosting

To set up your production / CDN'd site, log in to the Firebase console and select your project:

Now you'll need your project id, which can be found in the URL on the Firebase Project Overview page. The URL for my project is:

console.firebase.google.com/project/jw-demo/overview

Which means my project id is: jw-demo

Now, copy your project id, go into your GitHub fork, select the .firebaserc file, and click the "pencil" / edit button:

Replace the hello-hugo string with your project id and commit the changes. This commit will trigger two new builds: one for the production site and one for the drafts site on Cloud Run. You can check the status of those builds on the Cloud Build History page. Once the default trigger (the one for Firebase Hosting) finishes, check out your Hugo site running on Firebase Hosting by navigating to (replacing YOUR_PROJECT_ID with the project id you used above): https://YOUR_PROJECT_ID.web.app/

Your prod and drafts sites are now automatically deploying on new commits!

Step 4) (Optional) Change Hugo Theme

There are many themes for Hugo and they are easy to change. Typically themes are pulled into Hugo sites using git submodules. To change the theme, edit your .gitmodules file and set the subdirectories and url. As an example, here is the content when using the mainroad theme:

[submodule "themes/mainroad"]
path = themes/mainroad
url = https://github.com/vimux/mainroad.git

You will also need to change the theme value in your config.toml file to match the directory name in the themes directory. For example:

theme = "mainroad"

Note: At the time of writing this, Cloud Build does not clone git submodules so the cloudbuild.yaml does the cloning instead.

Step 5) (Optional) Setup Local Editing

To setup local editing you will first need to clone your fork. You can do this with the GitHub desktop app. Or from the command line:

git clone --recurse-submodules https://github.com/USER/REPO.git

Once you have the files locally, install Hugo, and from inside the repo's directory, run:

hugo -D serve

This will serve the site, including drafts. You can check it out at: localhost:1313

Committing non-draft changes to master and pushing them to GitHub will kick off the build that deploys them to your prod site. Committing drafts to any branch will kick off the build that deploys them to a Cloud Run site.

Hopefully that all helps you with hosting your Hugo sites! Let me know if you run into any problems.

Meet the global students solving local problems with code

Posted by Erica Hanson, Developer Student Clubs Program Manager, Google

Google Developer Student Clubs (DSC) are university-based community groups for students who are interested in Google’s developer technologies. Each year, Google puts a call out to the entire DSC global community, asking students to answer one simple question: Can you solve a local problem in your community by building with Google’s technologies?

This event is known as the DSC Solution Challenge, and this year’s winners went above and beyond to answer the call - so much so that we couldn’t pick just one winner; we chose 10.

While we initially thought we were the ones sending out the challenge, these young developers instead flipped the script back on us. Through their innovative designs and uncompromised creative spirit, they’ve pushed our team here at Google to stretch our thinking about how developers can build a more hopeful future.

With this, we’re welcoming these passionate students and anyone interested to the virtual Solution Challenge Demo Day on August 26th, where the students will present their winning ideas in detail.

Ahead of the event, learn more about this incredible group of thinkers and their solutions below.

1. FreeSpeak - Technical University of Munich, Germany

Maria Pospelova, Wei Wen Qing, and Almo Gunadya Sutedjo developed FreeSpeak, software that uses modern machine learning and video/audio analysis tools, leveraging TensorFlow and Google Cloud’s Natural Language, to analyze presentations and give individual feedback and tips as a “virtual coach.”

“We’ve loved connecting with talented people from around the world and exchanging ideas with them. We see that we can provide impact not only to our local neighborhood, but also around the world and help people. This motivates us to work a lot harder.”

2. CoronaAI - University of California Berkeley, United States

Anushka Purohit, Anupam Tiwari, and Neel Desai created CoronaAI, a TensorFlow based technology that helps examine COVID-19 data. Specifically, the device is made up of a band worn around a patient's chest that uses electrodes to extract real-time images of the lungs. From here, the band connects to a monitor that allows doctors to examine patients in real time without being near them.

“We're honestly huge fans of the Google Cloud Platform because of its simplicity, familiarity, and the large number of resources available. Developing this project was the best learning experience.”

3. Worthy Walk - National University of Computer & Emerging Sciences, Pakistan

Syed Moazzam Maqsood, Krinza Momin, Muhammad Ahmed Gul, and Hussain Zuhair built Worthy Walk: an Android and iOS app that provides its users a platform to achieve health goals by walking, running, or cycling. To encourage users, Worthy Walk provides an inbuilt currency called Knubs that can be redeemed as discounts from local businesses, shops, and startups.

“Being a part of DSC means friendship - sharing knowledge and resources - all while developing a social infrastructure that gives people the power to build a global community that works for all of us.”

4. Simhae: Deep sea of mind - Soonchunhyang University, South Korea

Yuna Kim, Young hoon Jo, Jeong yoon Joo, and Sangbeom Hwang created Simhae, a platform built with Flutter and Google Cloud that gives users access to basic information and activities meant to inspire them to attend self-help gatherings run by suicide prevention centers. They believe this experience is an important step toward building solidarity among suicide survivors.

“It's so nice to have a chance to meet more diverse people. Through these communities, I can make up for my shortcomings and share more information with those who have different experiences than me - all while developing my own potential.”

5. Emergency Response Assistance - University of Education, Winneba (College of Technology Kumasi), Ghana

Elvis Antwi Sarfo, Yaw Barnieh Anane, Ampomah Ata Acheampong Prince, and Perditha Abena Acheampong constructed Emergency Response Assistance, an Android application that helps health authorities post the latest first aid steps to educate the public and also helps victims report emergency cases with the click of a button. The Emergency Response team is also able to track the exact location of victims on a map.

“DSC is not just a community, it’s an inspiration. It’s outstanding how the platform has brought all of these students, lecturers, and teaching assistants, who are all so passionate about using tech to solve problems, together.”

6. Tulibot - Politeknik Elektronika Negeri Surabaya, Indonesia

Muhammad Alan Nur, Pravasta Caraka Bramastagiri, Eva Rahmadanti, and Namira Rizqi Annisa created Tulibot: an integrated assistive technology, built with the Google Speech API, that’s made to bridge communication between deaf people and society. The group made two main devices, Smart Glasses and Smart Gloves. Smart Glasses help hearing-impaired users follow a conversation by showing real-time answers from their interlocutors directly on the glasses. Smart Gloves transcribe gesture input into audio output using gesture-to-text technology.

“This has been an amazing opportunity for us because with this challenge, we can learn many things like coding, management, business, and more. The special materials we can get directly from Google is so helpful.”

7. Picare - The Hong Kong University of Science & Technology, Hong Kong

Sze Yuk Yin, Kwok Ue Nam, Ng Chi Ting, Chong Cheuk Hei, and Silver Ng developed Picare, a healthcare matching platform built with Flutter and Google Machine Learning to help elderly people in Hong Kong. Users will be able to use the app to research, schedule, and pay caregivers directly through the app.

“Our community hosted several workshops ranging from design thinking to coding techniques. This boosted our development by introducing us to various state-of-the-art technologies, such as Machine Learning and Cloud computing, which helped us reach our development goals.”

8. Shareapy - Ho Chi Minh City University of Technology, Vietnam

Vo Ngoc Khanh Linh, Tran Lam Bao Khang, Nguyen Dang Huy, and Nguyen Thanh Nhan built Shareapy: a digitized support group app created with Android that helps bring people together who share similar problems regardless of their age, gender, religion, financial status, etc. After conducting an extremely rigorous user testing phase, this team had the chance to see all that TensorFlow and Firebase could do.

“My team loves Firebase so much. One of our team members now uses it to help do some of his homework problems.”

9. Capstone - Midlands State University, Zimbabwe

Victor Chinyavada, Marvellous Humphery Chirunga, and Lavender Zandile Tshuma started Capstone, a service hosted on the Google Cloud Platform that aims to combat plagiarism among students, authors, and researchers. In particular, the technology aims to develop more effective algorithms that will incorporate the latest in big data, artificial intelligence, and data mining. As a team, the group bonded over applying technologies from Google to their project, but their real takeaway was working together to solve problems.

“To submit our project on time, we started all night hackathons, which helped us finish all of our work while having fun and getting to know each other better.”

10. MiCamp - Dr. B.R. Ambedkar National Institute of Technology, India

Praveen Agrawal built MiCamp, an Android app that holds all the info students from his campus need. Features include a calendar with upcoming campus events, student profiles, a used book marketplace, hostel management, online food ordering, and more. As a team of one, Praveen needed to speed up his development, so he applied his new knowledge of Flutter to finish.

“I’d heard of technologies like Flutter, but never used them until joining DSC; they inspired us to use those technologies, which really improved my solution.”

________________________


Want to learn more about Developer Student Clubs? Join a club near you, and stay tuned for our upcoming virtual Solution Challenge Demo Day on August 26th.

Summer updates from Coral

Posted by the Coral Team

Summer has arrived along with a number of Coral updates. We're happy to announce a new partnership with balena that helps customers build, manage, and deploy IoT applications at scale on Coral devices. In addition, we've released a series of updates to expand platform compatibility, make development easier, and improve the ML capabilities of our devices.

Open-source Edge TPU runtime now available on GitHub

First up, our Edge TPU runtime is now open source and available on GitHub, including scripts and instructions for building the library for Linux and Windows. Customers running a platform that is not officially supported by Coral, including ARMv7 and RISC-V, can now compile the Edge TPU runtime themselves and start experimenting. An open-source runtime is easier to integrate into your customized build pipeline, enabling support for creating Yocto-based images as well as other distributions.

Windows drivers now available for the Mini PCIe and M.2 accelerators

Coral customers can now also use the Mini PCIe and M.2 accelerators on the Microsoft Windows platform. New Windows drivers for these products complement the previously released Windows drivers for the USB accelerator and make it possible to start prototyping with the Coral USB Accelerator on Windows and then to move into production with our Mini PCIe and M.2 products.

New fresh bits on the Coral ML software stack

We’ve also made a number of new updates to our ML tools:

  • The Edge TPU compiler is now version 14.1. It can be updated by running sudo apt-get update && sudo apt-get install edgetpu, or by following the instructions here
  • Our new Model Pipelining API allows you to divide your model across multiple Edge TPUs. The C++ version is currently in beta and the source is on GitHub
  • New embedding extractor models for EfficientNet, for use with on-device backpropagation. Embedding extractor models are compiled with the last fully-connected layer removed, allowing you to retrain for classification. Previously, only Inception and MobileNet were available and now retraining can also be done on EfficientNet
  • New Colab notebooks to retrain a classification model with TensorFlow 2.0 and build C++ examples

Balena partners with Coral to enable AI at the edge

We are excited to share that the Balena fleet management platform now supports Coral products!

Companies running a fleet of ML-enabled devices on the edge need to keep their systems up-to-date with the latest security patches in order to protect data, model IP, and hardware from being compromised. Additionally, ML applications benefit from being consistently retrained to recognize new use cases with maximum accuracy. Together, Coral and balena bring simplicity and ease to the provisioning, deployment, updating, and monitoring of your ML project at the edge, moving early prototyping seamlessly toward production environments with many thousands of devices.

Read more about all the benefits of Coral devices combined with balena container technology or get started deploying container images to your Coral fleet with this demo project.

New version of Mendel Linux

Mendel Linux (5.0 release Eagle) is now available for the Coral Dev Board and SoM and includes a more stable package repository that provides a smoother updating experience. It also brings compatibility improvements and a new version of the GPU driver.

New models

Last but not least, we’ve recently released BodyPix, a Google person-segmentation model that was previously only available for TensorFlow.js, as a Coral model. This enables real-time, privacy-preserving understanding of where people (and body parts) are in a camera frame. We first demoed this at CES 2020 and it was one of our most popular demos. Using BodyPix, we can remove people from the frame, display only their outline, and aggregate over time to see heat maps of population flow.

Here are two possible applications of BodyPix: Body-part segmentation and anonymous population flow. Both are running on the Dev Board.

We’re excited to add BodyPix to the portfolio of projects the community is using to extend our models far beyond our demos—including tackling today’s biggest challenges. For example, Neuralet has taken our MobileNet V2 SSD Detection model and used it to implement Smart Social Distancing. Using the bounding box of person detection, they can compute a region for safe distancing and let a user know if social distance isn’t being maintained. The best part is that this is done without any sort of facial recognition or tracking; with Coral, we can accomplish this in real time in a privacy-preserving manner.

We can’t wait to see more projects that the community makes with BodyPix. Beyond anonymous population flow, there are endless possibilities with background and body-part manipulation. Let us know what you come up with at our community channels, including GitHub and Stack Overflow.

________________________

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, including balena, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform and share your feedback at [email protected].

Google Pay plugin for Magento 2

Posted by Soc Sieng, Developer Advocate

We are pleased to announce the launch of the official Google Pay plugin for Magento 2. The Google Pay plugin can help increase conversions by enabling a simpler and more secure checkout experience in your Magento website. When you integrate with Google Pay, your customers can complete their purchases quickly using the payment methods they’ve securely saved to their Google Accounts.

Google Pay in action.

The Google Pay plugin was built in collaboration with Unbound Commerce, is free to use, and integrates with popular payment service providers, including Adyen, BlueSnap, Braintree, FirstData - Payeezy & Ucom, Moneris, Stripe, and Vantiv.

Installation

The Google Pay plugin can be installed from the Magento Marketplace using this link or by searching the Magento Marketplace for “Google Pay”.

Refer to the Magento Marketplace User Guide for more installation instructions.

Getting started

To get started with the Google Pay plugin, you will need your Google Pay merchant identifier which can be found in the Google Pay Business Console.

Your Merchant ID can be found in the Google Pay Business Console.

Configuring the Google Pay plugin

Once installed, you can configure the plugin in your site’s Magento administration console by navigating to Stores > Configuration > Sales > Payment Methods and selecting the Configure button next to Google Pay.

Click on the Configure button to start the setup process.

Testing out Google Pay can be achieved in three easy steps:

  1. Google Pay credentials: enter your Google Pay merchant ID (available from the Google Pay Business Console) and merchant name.
  2. Payment gateway credentials: select your payment gateway from the list of payment gateways supported by the Google Pay plugin.
    1. Choose the Sandbox environment for testing purposes.
    2. Enter your payment gateway’s credentials into their respective form fields.
  3. Google Pay settings: enable Google Pay and choose the card networks that you would like to accept.

You can optionally try out some of the advanced settings, which let you customize the color and type of the Google Pay button as well as enable Minicart integration, which is recommended.

Check out the Advanced Settings to further customize how and where the Google Pay button is presented in your store.

If your payment provider isn’t listed as an option in the payment gateway list, check to see if your payment provider’s plugin has built-in support for Google Pay.

Launching Google Pay for your website

When you’ve completed your testing, submit your website integration in the Google Pay Business Console. You will need to provide your website’s URL and screenshots to complete the submission.

Summing it up

Integrating Google Pay into your website is a great way to increase conversions and to improve the purchasing experience for your customers.

Find out more about Google Pay and the Google Pay plugin for Magento.

What do you think?

Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDev.

Introducing an easier way to design your G Suite Add-on

Posted by Kylie Poppen, Senior Interaction Designer, G Suite and Akshay Potnis, Interaction Designer, G Suite

You’ve just scoped out an awesome new way to solve your customer’s next challenge, but wait - what about the design? Building an integration between your software platform and another comes with a laundry list of things to think about: your vision, your users, their experience, your partners, APIs, developer docs, and so on. Caught between two different platforms, many constraints, and limited time, you’re probably wondering: how might we build the most intuitive and powerful user experience?

Imagine making a presentation: with Google Slides you have all sorts of templates to get you started, and you can build a great deck easily. But to build a seamless integration between two software platforms, those pre-built templates don’t exist and you basically have to start from scratch. In the best-case scenario, you’d create your own components and layer them on top of each other with the goal of making the UI seem just about right. But this takes time - hours longer than you want it to. Without design guidelines, you’re stuck guessing what is or is not possible, looking to other apps and emulating what they’ve already done. This leads to the reality that some add-ons have a suboptimal experience, because time is limited and you’re left to build only for what you know you can do, rather than what’s actually possible.

To simplify all of this, we’re introducing the G Suite Add-ons UI Design Kit, now live on Figma. With it you can browse all of the components of G Suite Add-ons’ card-based interface, learn best practices, and simply drag and drop to create your own unique designs. Save the time spent recreating what an add-on will look like, so that you can spend that time thinking about how your add-on will work.

While the UI Design Kit has only been live for a little over a month, we’ve already been hearing feedback from our partners about its impact.

“Zapier connects more than 2,000 apps, allowing businesses to automate repetitive, time-consuming tasks. When building these integrations, we want to ensure a seamless experience for our customers,” said Ryan Powell, Product Manager at Zapier. “However, a partner’s UI can be difficult to navigate when starting from scratch. G Suite’s UI Design Kit allows us to build, test and optimize integrations because we know from the start what is and is not possible inside of G Suite’s UI.”

Here’s how to use the UI Design Kit:

Step 1

Find and duplicate design kit

  • Search for G Suite on the Figma community or use this link
  • Open the G Suite Add-ons UI Design Kit
  • Just click the duplicate button.

Step 2

Choose a template to begin

  • Go to UI templates page
  • Select a template from the list of templates

Step 3

Copy the template and detach from symbols to start editing

Helpful Hints: Features to help you iterate quickly

Build with auto layout - you don’t need to worry about the details.

  • Copy and paste maintains layout padding & structure.
  • Padding & structure are maintained while editing.
  • Built-in fixed footer and peek cards.

Visualize your design against G Suite surfaces easily.

Documentation built right into the template.

  1. Go to the component page (e.g. the Section component)
  2. Find the layout and documentation/API links on the respective pages

Next Steps to Consider:

With G Suite Add-ons, users and admins can seamlessly get their work done, across their favorite workplace applications, without needing to leave G Suite. With this UI Design Kit, you too can focus your time on building a great user experience inside of G Suite, while simplifying and accelerating the design process. Follow these steps to get started today:

Download the UI Design Kit

Get started with G Suite Add-ons

Hopefully this will inspire you to build more add-ons using the Cards Framework! To learn more about building for G Suite, check out the developer page, and please register for Next OnAir, which kicks off July 14th.

Teaching the art of great documentation

Posted by James Scott, Technical writer

Technical writing is simple - you merely have to explain brutally complex technologies to relentlessly unforgiving audiences. It's unsurprising that so many engineers find writing documentation is the most painful part of their job. If you would like to teach your colleagues to become writers, the good news is Google's fun and interactive technical writing course materials are free and available for everyone to use! Alternatively, if you're a developer who would like to learn how to write more clearly, you can read through the course work for yourself or convince a colleague to teach the course at your organisation!

We researched documentation extensively, and it turns out that the best sentences in the world consist primarily of words. Our self-paced and facilitator-led courses will not only help software engineers choose the right words but also help to make the whole writing process a lot less scary. Perhaps software engineers won't become William Shakespeare or even William Shatner overnight, but hopefully they will gain the confidence to write something worth publishing. As working from home becomes more common, good documentation has never been more important in enabling software engineers to work independently.

Courses overview

Google introduced the technical writing courses, Technical Writing One and Technical Writing Two, in 2015. Since then, thousands of Google software engineers and product managers have taken and enjoyed the courses. In February 2020, we released the courses to the world.

The classes have the following structure:

  • Students complete self-study work before attending the live class. The self-study work is valuable on its own, even for students who will never attend the live class.
  • A facilitator guides students through a live class. The live class features practical exercises, class discussion, and extensive peer-to-peer feedback. Note that Google does not lead these live courses but provides extensive material to help facilitators prepare to lead them.

Organizations can choose to host the live classes virtually or in-person.

Technical Writing One

The first course, Technical Writing One, covers the basics of technical writing. Students learn to start thinking about their audience before even putting pen to paper. For example, in one exercise, students are challenged to write instructions for putting toothpaste on a toothbrush. That might sound relatively simple, but here's the catch - your audience has never brushed their teeth before. That's not to say they have bad oral hygiene, but they don't even know what a toothbrush is. The exercise aims to get students to think about documenting a completely new technology.

Another important lesson that Technical Writing One teaches you is how to shorten the sentence length in your documentation and how to edit unnecessarily long sentences. Hopefully, once you have taken the course, you might edit the preceding sentence down to something like the following: Technical Writing One also teaches you to shorten sentences.

The course also advocates using lists instead of walls of text, so here, in list form, are some other topics it covers:

  • Using active voice instead of passive voice.
  • Revising text into clear paragraphs.
  • Learning various self-editing techniques.

Technical Writing Two

Technical Writing Two builds on the techniques from the first course and is for those who already know verbs from adverbs. The course encourages students to express their creative side. For example, in one exercise, students find the best way to illustrate technical concepts. Spoiler alert: can you spot any issues with the following diagram?

A diagram titled Finding a website through DNS, with seven boxes of varying colour, size, and shape connected by lines in various directions.

Figure 1: Finding a website through DNS

Other intermediate techniques the course covers include:

  • Organizing large doc sets.
  • Revising and reorganizing text.
  • Writing accurate descriptions.
  • Creating tutorials for beginners.

Students take part in interactive exercises and peer review with a lab partner. Technical Writing Two also includes class discussions on documentation types and how to write the dreaded first draft.

Want to know more?

If you would like to teach the courses at your own organization, see the facilitator guides. To review the pre-work and read through the training materials, see the course overviews.

New user features and developer tools to build the helpful home

Posted by Michele Turner, Director of Product and Smart Home Ecosystem for Google Nest

To create a helpful home experience, we have focused on foundational features necessary to make it easier for people to manage their smart devices. But as people spend more and more time at home during these challenging times, it’s important that we invest in additional ways to work with developers to build a more useful connected home.

Today, at the "Hey Google" Smart Home Virtual Summit, we gave updates on our latest smart home initiatives, talked more in-depth about the new smart home controls in Android 11, and previewed some platform tools that we're investing in to make devices easier to set up and work with Google Assistant.

Smart Home for Entertainment Device support with Google Assistant

As many of us continue to stay home, smart devices are being used a lot more. With the biggest growth coming from entertainment devices, we’re increasing our support in this area with our Smart Home API.

Last year, we launched Google Assistant support for Smart Home for Entertainment Device (SHED) device types and traits, including TVs, remotes, set-top boxes, speakers, soundbars, and even game consoles from top brands like Xbox, Roku, Dish, and LG. And now, we are making these APIs public for any smart TV, set-top box, or game developer to use. SHED gives users the ability to control their favorite entertainment devices from any Assistant-enabled smart display, smart speaker, or mobile device.

Smart Home controls in Android 11

With the release of Android 11, coming out later this year, we are introducing a dedicated space for Smart Home controls that users can find quickly, and access any time. We’ve redesigned the power menu to make devices linked to Google Assistant just a button-press away.

Users with the Home app can choose all of their controls, or just their favorites, to appear in the space. For partners, you get this for free - there’s no new development work required. We’ll have sliders that allow you to adjust specific settings, like the temperature of your thermostat in the morning, or how far to open the blinds. You can also customize which devices are visible in the control space and whether these devices are accessible from your lock screen.

Improved state reporting and reliability

With Android 11, we want to give users a quick and easy way to check or adjust the devices in their home. And as we continue to add new surfaces for device control, it becomes more critical to ensure we have accurate state. In the coming months, we’ll be introducing tools to measure your reliability and latency to help improve and debug state reporting. Once you hit key targets for reliability and latency, we will shift from a default of querying your state to using report state to render stateful controls. This will reduce query volume on your servers and improve the user experience with accurate device state across multiple surfaces.
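
For context, reporting state amounts to pushing a device's current state to the Home Graph rather than waiting for Google to query your cloud. The following Kotlin sketch shows the shape of that call against the HomeGraph API's devices:reportStateAndNotification endpoint; the device ID, the on/online fields, and the access token (assumed to be a service-account credential with the homegraph scope) are placeholders, and a production integration would more likely use the HomeGraph client library:

import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: push a device's current state to the Home Graph so Google
// can render accurate controls without querying your cloud on every request.
// `accessToken` is assumed to be an OAuth token for a service account with the
// https://www.googleapis.com/auth/homegraph scope; obtaining it is out of scope here.
fun reportState(accessToken: String, agentUserId: String, deviceId: String, isOn: Boolean) {
    val body = """
        {
          "requestId": "${System.currentTimeMillis()}",
          "agentUserId": "$agentUserId",
          "payload": {
            "devices": {
              "states": {
                "$deviceId": { "on": $isOn, "online": true }
              }
            }
          }
        }
    """.trimIndent()

    val url = URL("https://homegraph.googleapis.com/v1/devices:reportStateAndNotification")
    (url.openConnection() as HttpURLConnection).run {
        requestMethod = "POST"
        doOutput = true
        setRequestProperty("Authorization", "Bearer $accessToken")
        setRequestProperty("Content-Type", "application/json")
        outputStream.use { it.write(body.toByteArray()) }
        // Reading the response code sends the request; 200 indicates success.
        println("Report State response: $responseCode")
        disconnect()
    }
}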

In addition to state accuracy, the best user experience comes with strong reliability and low latency. To help achieve both, we launched local execution with the Local Home SDK back in April. As part of the Smart Home platform, local fulfillment can extend your Smart Home Action and route commands to devices through the local network, benefitting users with reduced latency and higher reliability by removing an additional cloud hop.

To ease the development process, the Local Home platform supports both Chrome and Node.js runtime environments, as well as building and testing of apps on local development machines or personal servers. Once you've deployed your local fulfillment app, users will benefit immediately without having to upgrade hardware or manually update firmware. Nanoleaf and Yeelight have already enabled local execution for their devices. It’s available to all developers through the Actions on Google Console.

Improving linking

Implementing a high-quality integration is important - it reduces churn and delights users. Yet it’s still challenging to get users to discover these features, and we’re doing a couple of things on our end to increase the funnel of users linked to your Action. We are excited to launch OAuth-based App Flip on the developer console today. With App Flip, we streamline the standard account linking flow by flipping users from the Google Home app to the partner app to gather consent without requiring the users to re-enter their credentials.

To increase awareness of your Action, you will soon be able to initiate the account linking flow within your app. There will also be more opportunities to increase awareness through feature promotion and in-app notification using your app, and we will have more details on discovery and linking opportunities later this year.

Robust monitoring, logging, analytics tools

We know that visibility into the behavior of your smart home integrations is critical, from debugging in early development to detailed analytics in production. To enhance developer productivity, we've integrated with the powerful monitoring and troubleshooting tools available in Google Cloud Platform to provide detailed event logs and usage metrics for smart home projects.

We’ve also recently launched new tools to help developers improve the reliability of their integrations and aid in debugging and resolving issues quickly. You can view aggregate metrics directly in the developer console, or build logs-based metrics to find trends and gain deeper insights into common problems. Google Cloud Platform also enables developers to build custom alerts to quickly identify production issues.

You can also find a new Smart Home Analytics Dashboard accessible from the developer console and pre-populated with charts for common metrics such as Daily Active Users and Request Breakdown — giving you an overall picture of your integration's behavior. This dashboard is powered by new usage and performance metrics in Google Cloud Monitoring, giving you the power to set alerts and be notified if your integration has an issue. Get started today by going to the “Analytics” tab in the Actions console or the Google Cloud console to check out these new logs, metrics, and alerting capabilities for your projects!

Updates to Device Access program

Last year, we announced that we’re moving from the Works with Nest program to Works with Google Assistant, building on a foundation of privacy and data security to ensure users have confidence in how Google and our partners are protecting the consumer’s home data.

As part of that effort, we created the Device Access program to provide a way for partners to integrate directly with Nest devices. To support the Device Access program, we will soon launch the Device Access Console, a self-serve console that guides commercial developers through the different project phases: development, certification and pilot testing, and finally production.

For commercial developers, the console allows them to manage their various projects and integrations. It also provides development guides and trait documentation for all supported Nest devices. Individuals who want to create their own automations with their Nest devices will be able to do so with this console, but only for the homes they are a member of.

Expanding routines

One of the most popular features with Nest users is the ability to automatically trigger routines based on whether users are Home or Away. Later this year, similar functionality will be available with Google Assistant through occupancy detection.

Sleep is also a critical part of maintaining our overall well-being as we spend more time at home. Last year we launched the Gentle Sleep & Wake feature with Philips Hue, which slowly brightens or dims the lights at a specific time or can be tied to your morning alarms. Just say, “Turn on Gentle Wake up” to your bedroom speaker to ‘set it and forget it.’ The Light Effects trait is now public so all developers can integrate their native Sleep or Wake experiences - in fact, LIFX has recently launched! We encourage you to build and integrate your own unique experiences. We’ll have a larger launch moment later this year when we launch emulated Sleep and Wake effects so that they’ll work out of the box for any smart light!

Another way partners will be able to innovate on our platform and provide more helpful experiences to users is by extending personal routines with custom routines designed by partners, available in the coming months. Developers will be able to create and suggest routines - not just for their own devices, but routines that can work with other devices in a customer’s home. You’ll be able to create solutions based on your core business that bring value to your customers - whether it’s wellness, cleaning, or entertainment. Users will be able to browse and opt in to approved routines and choose to have Nest and other devices react and participate in that routine.

Our Smart Home efforts have grown significantly over the past several years. We now have integrations with thousands of partners covering all the major connected product categories and devices, and will continue our ambitious goal to build deeper in-home integrations. Be sure to review our docs/samples/videos to learn about all the cool new stuff, and connect with us on our dev communities.