Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Introducing Spotlight: Women Techmakers series on career and professional development in tech

Posted by Caitlin Morrissey, PgM for Women Techmakers

On July 9, Women Techmakers launched the first episode of “Spotlight”, a new global program focused on amplifying the stories and pathways of women in technology across the industry. The idea for this series came from our community, who were looking for more help navigating their careers, especially during this time. We quickly realized that, with our extensive network of professional women across the tech industry, we could play a part in scaling this kind of guidance and advice to women all over the world.

The first video features Priyanka Vergadia, a Developer Advocate for Google Cloud, who shared advice on taking risks, career paths in tech, and dealing with failure. “I wanted to be on the show because I am a firm believer that hearing someone's story can inspire you to create your own beautiful journey.”

Image of Priyanka Vergadia with Spotlight host Caitlin Morrissey

Since then, we’ve been releasing weekly episodes featuring women from all over the world, on themes ranging from finding the job that’s right for you and what it takes to succeed in that role, to overcoming imposter syndrome and leading with confidence. Each woman brings a unique perspective to the table -- some have master’s degrees in engineering, while others went through a coding bootcamp and came to tech as a second career. This diversity of experience and background brings richness and variety to the advice shared across the episodes, providing valuable guidance viewers can take away regardless of where they are in their careers.

Image of Spotlight guests: Pujaa Rajan, Jessica Early-Cha and Jacquelle Horton

The response to the series has been overwhelmingly positive so far - “I am honestly blown away by the reception of the video,” said Priyanka, who received many heartfelt messages after her episode was published. And as Spotlight continues, we’re excited to tell more stories of more women in tech and build a library of career advice that anyone can use to help navigate the tech ecosystem.

You can check out all episodes of Spotlight here, and make sure to subscribe to the Women Techmakers channel so you don’t miss out on future interviews.

Building G Suite Add-ons with your favorite tech stack

Posted by Jon Harmer, Product Manager and Steven Bazyl, Developer Advocate for G Suite

Let’s talk about the basics of G Suite Add-ons. G Suite Add-ons simplify how users get things done in G Suite by bringing in functionality from other applications where you need them. They provide a persistent sidebar for quick access, and they are context-aware -- meaning they can react to what you’re doing in context. For example, a CRM add-on can automatically surface details about a sales opportunity in response to an email based on the recipients or even the contents of the message itself.
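
In practice, that context-awareness is declared in the add-on's manifest. As a rough sketch (field names paraphrased from the manifest reference; onGmailMessageOpen is a placeholder function name), a Gmail contextual trigger that fires whenever the user opens a message looks like this:

{
  "addOns": {
    "gmail": {
      "contextualTriggers": [{
        "unconditional": {},
        "onTriggerFunction": "onGmailMessageOpen"
      }]
    }
  }
}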

Until recently, building a G Suite Add-on meant using Apps Script. Choice is always a good thing, though, and in some cases you may want to use another language. So let’s talk about how to build Add-ons using additional runtimes:

First, additional runtimes don't add any new capabilities to what you can build; what they give you is more choice and flexibility in how you build Add-ons. We’ve heard feedback from developers that they would like the option to use the tools and ecosystems they’ve already learned and invested in. And while there have always been ways to bridge Apps Script and other backends that expose APIs over HTTP/S, those bridges aren't the cleanest of workarounds.

So let’s look at a side-by-side comparison of what it looks like to build an Add-on with each approach:

function homePage(event) {
  let card = CardService.newCardBuilder()
    .addSection(CardService.newCardSection()
      .addWidget(CardService.newTextParagraph()
        .setText("Hello world"))
    ).build();
  return [card];
}

Here’s the hello world equivalent of an Add-on in Apps Script. Since Apps Script is more akin to a serverless framework like Google Cloud Functions, the code is straightforward -- a function that takes an event and returns the UI to render.

// Hello world Node.js
const express = require('express');
const app = express();
app.use(express.json());

app.post('/home', (req, res) => {
  let card = {
    sections: [{
      widgets: [{
        textParagraph: {
          text: 'Hello world'
        }
      }]
    }]
  };
  res.json({
    action: {
      navigations: [{
        pushCard: card
      }]
    }
  });
});

// Start the server (Apps Script does this for you)
app.listen(process.env.PORT || 8080);

This is the equivalent in Node.js using Express, a popular web server framework. It shows a little more of the underlying mechanics -- working with the HTTP request/response directly, starting the server, and so on.

The biggest difference is the card markup -- instead of using CardService, which under the covers builds a protobuf, we're using the JSON representation of the same thing.

function getCurrentMessage(event) {
  var accessToken = event.messageMetadata.accessToken;
  var messageId = event.messageMetadata.messageId;
  GmailApp.setCurrentMessageAccessToken(accessToken);
  return GmailApp.getMessageById(messageId);
}

Another area where things differ is accessing Google APIs. In Apps Script, the clients are available in the global context -- the APIs mostly 'just work'. Moving to Node.js requires a little more effort, but not much.

Apps Script is super easy here. In fact, normally we wouldn't bother with setting the token when using more permissive scopes as it's done for us by Apps Script. We're doing it here to take advantage of the per-message scope that the add-on framework provides.

const { google } = require('googleapis');
const { OAuth2Client } = require('google-auth-library');
const gmail = google.gmail({ version: 'v1' });

async function fetchMessage(event) {
  const accessToken = event.gmail.accessToken;
  const auth = new OAuth2Client();
  auth.setCredentials({ access_token: accessToken });

  const messageId = event.gmail.messageId;
  const res = await gmail.users.messages.get({
    id: messageId,
    userId: 'me',
    headers: { 'X-Goog-Gmail-Access-Token': accessToken },
    auth
  });
  return res.data;
}

The Node.js version is very similar -- a little extra code to import the libraries, but otherwise the same: extract the message ID and token from the request, set the credentials, then call the API to get the message contents.
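
Putting the two pieces together, the Express handler can call fetchMessage and render something from the message. A sketch building on the snippets above (the /onMessageOpen endpoint and the Subject-header extraction are illustrative, not prescribed by the framework):

app.post('/onMessageOpen', async (req, res) => {
  // req.body is the add-on event, the same shape as `event` above
  const message = await fetchMessage(req.body);
  const header = message.payload.headers.find((h) => h.name === 'Subject');
  const subject = header ? header.value : '(no subject)';
  res.json({
    action: {
      navigations: [{
        pushCard: {
          sections: [{
            widgets: [{ textParagraph: { text: subject } }]
          }]
        }
      }]
    }
  });
});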

Your Add-on, Your way

One of the biggest wins for alternate runtimes is the testability that comes with using your favorite IDE, language, and framework -- all of which help make developing Add-ons more approachable.
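
For example, with the Express version the add-on endpoint is plain HTTP, so you can exercise it in an ordinary unit test. A sketch using the popular supertest package (the ./app module layout and the Jest-style assertions are assumptions for illustration):

const request = require('supertest');
const app = require('./app'); // assumes the Express app above is exported

describe('POST /home', () => {
  it('renders the hello world card', async () => {
    const res = await request(app)
      .post('/home')
      .send({}) // a stubbed add-on event payload
      .expect(200);

    const card = res.body.action.navigations[0].pushCard;
    expect(card.sections[0].widgets[0].textParagraph.text).toBe('Hello world');
  });
});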

Both Apps Script and alternate runtimes have important places in building Add-ons. If you’re just getting into building Add-ons, or if you want to prototype more complex ones, Apps Script is a good choice. If you write and maintain systems as your full-time job, though, alternate runtimes let you use those tools to build your Add-on, leveraging the work, code, and processes you already have. With alternate runtimes for G Suite Add-ons, we want to make it possible for you to extend G Suite in a way that fits your needs, using whatever tools you're most comfortable with.

And don't just take our word for it -- hear from one of our early access partners. Shailesh Matariya, CTO at Gfacility, has this to say about alternate runtimes: "We're really happy to use alternate runtimes in G Suite Add-ons. The results have been great and it's much easier to maintain the code. Historically, it would take 4-5 seconds to load data in our Add-on, whereas with alternate runtimes it's closer to 1 second, and that time and efficiency really adds up. Not to mention performance, we're seeing about a 50% performance increase and thanks to this our users are able to manage their workflows with just a few clicks, without having to jump to another system and deal with the hassle of constant updates."

Next Steps

Read the developer documentation for Alternate Runtimes and sign up for the early access program.

ChromeOS.dev — A blueprint to build world-class apps and games for Chrome OS

Posted by Iein Valdez, Head of Chrome OS Developer Relations

This article originally appeared on ChromeOS.dev.

While people are spending more time at home than on the go, they’re relying increasingly on personal desktops and laptops to make everyday life easier. Whether they’re video-chatting with friends and family, discovering entertaining apps and games, multitasking at work, or pursuing a passion project, bigger screens and better performance have made all the difference.

This trend was clear from March through June 2020: Chromebook unit sales grew 127% year over year (YOY) while the rest of the U.S. notebook category increased by 40% YOY.1 Laptops have become crucial to people at home who want to use their favorite apps and games, like Star Trek™ Fleet Command and Reigns: Game of Thrones to enjoy action-packed adventure, Calm to manage stress, or Disney+ to keep the whole family entertained.

Device Sales YOY

To deliver app experiences that truly improve people’s lives, developers must be equipped with the right tools, resources, and best practices. That’s why we’re excited to introduce ChromeOS.dev — a dedicated resource for technical developers, designers, product managers, and business leaders.

ChromeOS.dev, available in English and Spanish (with other languages coming soon), features the latest news, product announcements, technical documentation, and code samples from popular apps. Whether you’re a web, Android, or Linux developer who’s just getting started or a certified expert, you’ll find all the information you need on ChromeOS.dev.

Hear from our experts at Google and Chrome OS, as well as a variety of developers, as they share practical tips, benefits, and the challenges of creating app experiences for today’s users. Plus, you can review the updated Chrome OS Layout and UX App Quality guidelines with helpful information on UI components, navigation, fonts, layouts, and everything that goes into creating world-class apps and games for Chrome OS.

Even better, as a fully open-source online destination, ChromeOS.dev is built following the principles and methods for creating highly capable and reliable Progressive Web Apps (PWAs), ensuring developers always have quick, easy access to the information they need -- even when they’re offline.
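
Offline support in a PWA comes down to a service worker that caches responses. A generic sketch of the mechanism (not ChromeOS.dev's actual implementation; the cache name and URLs are placeholders):

// service-worker.js: pre-cache a few pages, then serve cache-first.
const CACHE = 'site-cache-v1';
const PRECACHE = ['/', '/offline.html']; // illustrative URLs

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(PRECACHE))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});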

Check out a few of the newest updates and improvements below, and be sure to install the ChromeOS.dev PWA on your device to stay on top of the latest information.

New features for Chrome OS developers

Whether it’s developing Android, Linux, or web apps, every update on ChromeOS.dev is about making sure all developers can build better app experiences in a streamlined, easy-to-navigate environment.

Customizable Linux Terminal

The Linux (Beta) on Chrome OS Terminal now comes equipped with personalized features right out of the box, including:

  • Integrated tabs and shortcuts
    Multitask with ease by using windows and tabs to manage different tasks and switch between multiple projects. You can also use familiar shortcuts such as Ctrl + T, Ctrl + W, and Ctrl + Tab to manage your tabs, or use the settings page to control whether these keys should be used in your Terminal for apps like vim or emacs.
  • Themes
    Customize your Terminal by selecting a theme to switch up the background, frame, font, and cursor color.
  • Redesigned Terminal settings
    The settings tab has been reorganized to make it easier to customize all your Terminal options.

Developers can now start using these and other customizable features in the Terminal app.

Android Emulator support

Supported Chromebooks can now run a full version of the Android Emulator, which allows developers to test apps on any Android version and device without needing the actual hardware. Android app developers can simulate map locations and other sensor data to test how an app performs with various motions, orientations, and environmental conditions. With the Android Emulator support in Chrome OS, developers can optimize for different Android versions and devices — including tablets and foldable smartphones — right from their Chromebook.

Deploy apps directly to Chrome OS

Building and testing Android apps on a single machine is simpler than ever. Now, developers who are running Chrome OS M81 and higher can deploy and test apps directly on their Chromebooks — no need to use developer mode or to connect different devices physically via USB. Combined with Android Emulator support, Chrome OS is equipped to support full Android development.

Improved Project Wizard in Android Studio

An updated Primary/Detail Activity Template in Android Studio offers complete support to build experiences for larger screens, including Chromebooks, tablets, and foldables. This updated option provides multiple layouts for both phones and larger-screen devices as well as better keyboard/mouse scaffolding. This feature will be available in Android Studio 4.2 Canary 8.

Updated support from Android lint checks

We’ve improved the default checks in Android’s lint tool to help developers identify and correct common coding issues to improve their apps on larger screens, such as non-resizable and portrait-locked activities. This feature is currently available for testing in Canary channel.

Unlock your app’s full potential with Chrome OS

From day one, our goal has been to help developers at every skill level create simple, powerful, and secure app experiences for all platforms. As our new reality creates a greater need for helpful and engaging apps on large-screen devices, we’re working hard to streamline the process by making Chrome OS more versatile, customizable, and intuitive.

Visit ChromeOS.dev and install it on your Chromebook to stay on top of the latest resources, product updates, thought-provoking insights, and inspiring success stories from Chrome OS developers worldwide.

Sources:
1 The NPD Group, Inc., U.S. Retail Tracking Service, Notebook Computers, based on unit sales, April–June 2020 and March–June 2020.

Introducing Cast Connect: a better way to integrate Google Cast directly into your Android TV apps

Posted by Meher Vurimi, Product Manager

For more than seven years, Google Cast has made it easy for users to enjoy your content on the big screen with Chromecast or a Chromecast built-in TV. We’re always looking to improve the casting experiences, which is why we’re excited to introduce Cast Connect. This new feature allows users to cast directly to your Android TV app while still allowing control from your sender app.

Why is this helpful?

With Google Cast, your app is the remote control - helping users find, play, pause, seek, stop, and otherwise control what they’re watching. It enables you to extend video or audio from your Android, iOS, or Chrome app to a TV or sound system. With Android TV, partners can build apps that let users experience your app's immersive content on their TV screen and control it with a remote.

With Cast Connect, we are combining the best of both worlds; first, by prioritizing playback via the Android TV app to deliver a richer and more immersive experience, and second, by allowing the user to still control their experience from your Android, iOS or Chrome app, and now, also directly using their Android TV’s remote control. Cast Connect helps the user easily engage with other content directly on the TV instead of only having to use your mobile device to browse for additional content.

Cast Connect User Journey on Stan

Availability

We’re working closely with a number of partners to bring Cast Connect to their apps, and most recently we’re excited to announce that CBS and our Australian SVOD partner, Stan, have launched Cast Connect. Starting today, the Cast Connect library is available on Android, iOS, and Chrome. To get started with adding Cast Connect to your existing framework, head over to the Google Cast Developers site. Along the way, the Cast SDK team and the developer community are available to help you and answer questions on Stack Overflow using the google-cast-connect tag.

Happy Casting!

Digital Ink Recognition in ML Kit

Posted by Mircea Trăichioiu, Software Engineer, Handwriting Recognition

A month ago, we announced changes to ML Kit to make mobile development with machine learning even easier. Today we're announcing the addition of the Digital Ink Recognition API on both Android and iOS to allow developers to create apps where stylus and touch act as first-class inputs.

Digital ink recognition: the latest addition to ML Kit’s APIs

Digital Ink Recognition is different from the existing Vision and Natural Language APIs in ML Kit, as it takes neither text nor images as input. Instead, it looks at the user's strokes on the screen and recognizes what they are writing or drawing. This is the same technology that powers handwriting recognition in Gboard - Google’s own keyboard app, which we described in detail in a 2019 blog post. It's also the same underlying technology used in the Quick, Draw! and AutoDraw experiments.

Handwriting input in Gboard

Turning doodles into art with AutoDraw

With the new Digital Ink Recognition API, developers can now use this technology in their apps as well, for everything from letting users input text and figures with a finger or stylus to transcribing handwritten notes to make them searchable -- all in near real time and entirely on-device.

Supports many languages and character sets

Digital Ink Recognition supports 300+ languages and 25+ writing systems, including all major Latin-script languages as well as Chinese, Japanese, Korean, Arabic, Cyrillic, and more. Classifiers parse written text into a string of characters.

Recognizes shapes

Other classifiers can describe shapes, such as drawings and emojis, by the class to which they belong (circle, square, happy face, etc.). We currently support an AutoDraw sketch recognizer, an emoji recognizer, and a basic shape recognizer.

Works offline

The Digital Ink Recognition API runs on-device and does not require a network connection. However, you must download one or more models before you can use a recognizer. Models are downloaded on demand and are around 20 MB in size. Refer to the model download documentation for more information.

Runs fast

The time a recognition call takes depends on the exact device and the size of the input stroke sequence. On a typical mobile device, recognizing a line of text takes about 100 ms.

How to get started

If you would like to start using Digital Ink Recognition in your mobile app, head over to the documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Automatic Deployment of Hugo Sites on Firebase Hosting and Drafts on Cloud Run

Posted by James Ward, Developer Advocate

Recently I completed the migration of my blog from WordPress to Hugo, and I wanted to take advantage of it now being a static site by hosting it on a Content Delivery Network (CDN). With Hugo, the source content is plain files instead of rows in a database. In the case of my blog, those files are in git on GitHub. But when the source files change, the site needs to be regenerated and redeployed to the CDN, and sometimes it is nice to have drafts available for review. So I set up a continuous delivery pipeline with Cloud Build that deploys changes to my prod site on Firebase Hosting and drafts on Cloud Run. Read on for instructions on how to set all this up.

Step 1a) Setup A New Hugo Project

If you do not have an existing Hugo project, you can create a GitHub copy (i.e. fork) of my Hugo starter repo.

Step 1b) Setup Existing Hugo Project

If you have an existing Hugo project you'll need to add some files to it:

.firebaserc

{
  "projects": {
    "production": "hello-hugo"
  }
}

cloudbuild-draft.yaml

steps:
- name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/sh'
  args:
  - '-c'
  - |
    # Get the theme git submodule
    THEME_URL=$(git config -f .gitmodules --get-regexp '^submodule\..*\.url$' | awk '{ print $2 }')
    THEME_DIR=$(git config -f .gitmodules --get-regexp '^submodule\..*\.path$' | awk '{ print $2 }')
    rm -rf themes
    git clone $$THEME_URL $$THEME_DIR

- name: 'gcr.io/cloud-builders/docker'
  entrypoint: '/bin/sh'
  args:
  - '-c'
  - |
    # Build and push a container that serves the site with drafts enabled
    docker build -t gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA -f - . << EOF
    FROM klakegg/hugo:latest
    WORKDIR /workspace
    COPY . /workspace
    ENTRYPOINT hugo -D -p \$$PORT --bind \$$HUGO_BIND --renderToDisk --disableLiveReload --watch=false serve
    EOF
    docker push gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA

- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - run
  - deploy
  - --image=gcr.io/$PROJECT_ID/$REPO_NAME-$BRANCH_NAME:$COMMIT_SHA
  - --platform=managed
  - --project=$PROJECT_ID
  - --region=us-central1
  - --memory=512Mi
  - --allow-unauthenticated
  - $REPO_NAME-$BRANCH_NAME

cloudbuild.yaml

steps:
- name: 'gcr.io/cloud-builders/git'
  entrypoint: '/bin/sh'
  args:
  - '-c'
  - |
    # Get the theme git submodule
    THEME_URL=$(git config -f .gitmodules --get-regexp '^submodule\..*\.url$' | awk '{ print $2 }')
    THEME_DIR=$(git config -f .gitmodules --get-regexp '^submodule\..*\.path$' | awk '{ print $2 }')
    rm -rf themes
    git clone $$THEME_URL $$THEME_DIR

- name: 'gcr.io/cloud-builders/curl'
  entrypoint: '/bin/sh'
  args:
  - '-c'
  - |
    # Download Hugo and generate the site
    curl -sL https://github.com/gohugoio/hugo/releases/download/v0.69.2/hugo_0.69.2_Linux-64bit.tar.gz | tar -zxv
    ./hugo

- name: 'gcr.io/cloud-builders/wget'
  entrypoint: '/bin/sh'
  args:
  - '-c'
  - |
    # Get firebase CLI
    wget -O firebase https://firebase.tools/bin/linux/latest
    chmod +x firebase
    # Deploy site
    ./firebase deploy --project=$PROJECT_ID --only=hosting

firebase.json

{
  "hosting": {
    "public": "public"
  }
}


Step 2) Setup Cloud Build Triggers

In the Google Cloud Build console:

  1. Connect to your newly forked repo and select it.
  2. Create the default push trigger, then edit the new trigger and set it to only fire on changes to the ^master$ branch.
  3. Create a second trigger: give it a name like drafts-trigger, specify the branch selector as .* (i.e. any branch), and set the build configuration file type to "Cloud Build configuration file" with a value of cloudbuild-draft.yaml.
  4. Set up permissions for the Cloud Build process to manage Cloud Run and Firebase Hosting: visit the IAM management page, locate the member whose name ends with @cloudbuild.gserviceaccount.com, select the "pencil" / edit button, and add a role for "Cloud Run Admin" and another for "Firebase Hosting Admin".

Your default "prod" trigger isn't ready to test yet, but you can test the drafts on Cloud Run by going back to the Cloud Build Triggers page and clicking the "Run Trigger" button on the "drafts-trigger" line. Check the build logs by finding the build in the Cloud Build History. Once the build completes, visit the Cloud Run console to find your newly created service, which hosts the drafts version of your new blog. Note that the service name includes the branch so that you can see drafts from different branches.
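
If you prefer the command line, the drafts trigger can also be created with gcloud. A sketch based on the beta CLI at the time of writing -- treat the exact flags as assumptions and check gcloud beta builds triggers create github --help:

gcloud beta builds triggers create github \
  --repo-owner=YOUR_GITHUB_USER \
  --repo-name=YOUR_REPO \
  --branch-pattern='.*' \
  --build-config=cloudbuild-draft.yaml \
  --description=drafts-trigger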

Step 3) Setup Firebase Hosting

To set up your production / CDN'd site, log in to the Firebase console and select your project:

Now you'll need your project id, which can be found in the URL on the Firebase Project Overview page. The URL for my project is:

console.firebase.google.com/project/jw-demo/overview

Which means my project id is: jw-demo

Now copy your project id, go into your GitHub fork, select the .firebaserc file, and click the "pencil" / edit button:

Replace the hello-hugo string with your project id and commit the changes. This commit will trigger two new builds: one for the production site and one for the drafts site on Cloud Run. You can check the status of those builds on the Cloud Build History page. Once the default trigger (the one for Firebase Hosting) finishes, check out your Hugo site running on Firebase Hosting by navigating to https://YOUR_PROJECT_ID.web.app/ (replacing YOUR_PROJECT_ID with the project id you used above).

Your prod and drafts sites are now automatically deploying on new commits!

Step 4) (Optional) Change Hugo Theme

There are many themes for Hugo and they are easy to change. Typically, themes are pulled into Hugo sites as git submodules. To change the theme, edit your .gitmodules file and set the path and url. As an example, here is the content when using the mainroad theme:

[submodule "themes/mainroad"]
  path = themes/mainroad
  url = https://github.com/vimux/mainroad.git

You will also need to change the theme value in your config.toml file to match the directory name in the themes directory. For example:

theme = "mainroad"

Note: At the time of writing this, Cloud Build does not clone git submodules so the cloudbuild.yaml does the cloning instead.

Step 5) (Optional) Setup Local Editing

To set up local editing, you will first need to clone your fork. You can do this with the GitHub desktop app, or from the command line:

git clone --recurse-submodules https://github.com/USER/REPO.git

Once you have the files locally, install Hugo, and from inside the repo's directory, run:

hugo -D serve

This will serve the site, including drafts. You can check out the site at: localhost:1313

Committing non-draft changes to master and pushing them to GitHub will kick off a build that deploys them to your prod site. Committing drafts to any branch will kick off a build that deploys them to a Cloud Run site.

Hopefully that all helps you with hosting your Hugo sites! Let me know if you run into any problems.

Meet the global students solving local problems with code

Posted by Erica Hanson, Developer Student Clubs Program Manager, Google

Google Developer Student Clubs (DSC) are university-based community groups for students who are interested in Google’s developer technologies. Each year, Google puts a call out to the entire DSC global community, asking students to answer one simple question: Can you solve a local problem in your community by building with Google’s technologies?

This event is known as the DSC Solution Challenge, and this year’s winners went above and beyond to answer the call -- so much so that we couldn’t pick just one winner; we chose 10.

While we initially thought we were the ones sending out the challenge, these young developers instead flipped the script back on us. Through their innovative designs and uncompromised creative spirit, they’ve pushed our team here at Google to stretch our thinking about how developers can build a more hopeful future.

With this, we’re welcoming these passionate students and anyone interested to the virtual Solution Challenge Demo Day on August 26th, where the students will present their winning ideas in detail.

Ahead of the event, learn more about this incredible group of thinkers and their solutions below.

1. FreeSpeak - Technical University of Munich, Germany

Maria Pospelova, Wei Wen Qing, and Almo Gunadya Sutedjo developed FreeSpeak, software that uses modern machine learning and video/audio analysis tools, leveraging TensorFlow and Google Cloud’s Natural Language, to analyze presentations and give individual feedback and tips as a “virtual coach.”

“We’ve loved connecting with talented people from around the world and exchanging ideas with them. We see that we can provide impact not only to our local neighborhood, but also around the world and help people. This motivates us to work a lot harder.”

2. CoronaAI - University of California Berkeley, United States

Anushka Purohit, Anupam Tiwari, and Neel Desai created CoronaAI, a TensorFlow based technology that helps examine COVID-19 data. Specifically, the device is made up of a band worn around a patient's chest that uses electrodes to extract real-time images of the lungs. From here, the band connects to a monitor that allows doctors to examine patients in real time without being near them.

“We're honestly huge fans of the Google Cloud Platform because of its simplicity, familiarity, and the large number of resources available. Developing this project was the best learning experience.”

3. Worthy Walk - National University of Computer & Emerging Sciences, Pakistan

Syed Moazzam Maqsood, Krinza Momin, Muhammad Ahmed Gul, and Hussain Zuhair built Worthy Walk: an Android and iOS app that provides its users a platform to achieve health goals by walking, running, or cycling. To encourage users, Worthy Walk provides an inbuilt currency called Knubs that can be redeemed as discounts from local businesses, shops, and startups.

“Being a part of DSC means friendship - sharing knowledge and resources - all while developing a social infrastructure that gives people the power to build a global community that works for all of us.”

4. Simhae : Deep sea of mind - Soonchunhyang University, South Korea

Yuna Kim, Young hoon Jo, Jeong yoon Joo, and Sangbeom Hwang created Simhae, a platform created with Flutter and Google Cloud that allows users to access basic information and activities to inspire them to attend self-help gatherings run by suicide prevention centers. They believe that this experience is an important point that can lead to solidarity of suicide survivors.

“It's so nice to have a chance to meet more diverse people. Through these communities, I can make up for my shortcomings and share more information with those who have different experiences than me - all while developing my own potential.”

5. Emergency Response Assistance - University of Education, Winneba (College of Technology Kumasi), Ghana

Elvis Antwi Sarfo, Yaw Barnieh Anane, Ampomah Ata Acheampong Prince, and Perditha Abena Acheampong constructed Emergency Response Assistance, an Android application to help health authorities post the latest first aid steps to educate the public and also help the victims report emergency cases with a click of a button. The Emergency Response team will also be able to track the exact location of the victims on the map.

“DSC is not just a community, it’s an inspiration. It’s outstanding how the platform has brought all of these students, lecturers, and teaching assistants, who are all so passionate about using tech to solve problems, together.”

6. Tulibot - Politeknik Elektronika Negeri Surabaya, Indonesia

Muhammad Alan Nur, Pravasta Caraka Bramastagiri, Eva Rahmadanti, and Namira Rizqi Annisa created Tulibot: an integrated assistive technology, built with the Google Speech API, that’s made to bridge communication between deaf people and society. The group made two main devices, Smart Glasses and Smart Gloves. Smart Glasses help with communication for the hearing impaired by showing real time answers directly on the glasses from its interlocutors. Smart Gloves transcribe gesture input into audio output by implementing gesture to text technology.

“This has been an amazing opportunity for us because with this challenge, we can learn many things like coding, management, business, and more. The special materials we can get directly from Google is so helpful.”

7. Picare - The Hong Kong University of Science & Technology, Hong Kong

Sze Yuk Yin, Kwok Ue Nam, Ng Chi Ting, Chong Cheuk Hei, and Silver Ng developed Picare, a healthcare matching platform built with Flutter and Google Machine Learning to help elderly people in Hong Kong. Users will be able to use the app to research, schedule, and pay caregivers directly through the app.

“Our community hosted several workshops ranging from design thinking to coding techniques. This boosted our development by introducing us to various state-of-the-art technologies, such as Machine Learning and Cloud computing, which helped us reach our development goals.”

8. Shareapy - Ho Chi Minh City University of Technology, Vietnam

Vo Ngoc Khanh Linh, Tran Lam Bao Khang, Nguyen Dang Huy, and Nguyen Thanh Nhan built Shareapy: a digitized support group app created with Android that helps bring people together who share similar problems regardless of their age, gender, religion, financial status, etc. After conducting an extremely rigorous user testing phase, this team had the chance to see all that TensorFlow and Firebase could do.

“My team loves Firebase so much. One of our team members now uses it to help do some of his homework problems.”

9. Capstone - Midlands State University, Zimbabwe

Victor Chinyavada, Marvellous Humphery Chirunga, and Lavender Zandile Tshuma started Capstone, a service hosted on the Google Cloud Platform that aims to combat plagiarism among students, authors, and researchers. In particular, the technology aims to develop more effective algorithms that will incorporate the latest in big data, artificial intelligence, and data mining. As a team, the group bonded over applying technologies from Google to their project, but their real takeaway was working together to solve problems.

“To submit our project on time, we started all night hackathons, which helped us finish all of our work while having fun and getting to know each other better.”

10. MiCamp - Dr. B.R. Ambedkar National Institute of Technology, India

Praveen Agrawal built MiCamp, an Android app that holds all the info students from his campus need. Features include a calendar with upcoming campus events, student profiles, a used book marketplace, hostel management, online food ordering, and more. As a team of one, Praveen needed to speed up his development, so he applied his new knowledge of Flutter to finish.

“I’d heard of technologies like Flutter, but never used them until joining DSC; they inspired us to use those technologies, which really improved my solution.”

________________________


Want to learn more about Developer Student Clubs? Join a club near you, and stay tuned for our upcoming virtual Solution Challenge Demo Day on August 26th.

Summer updates from Coral

Posted by the Coral Team

Summer has arrived along with a number of Coral updates. We're happy to announce a new partnership with balena that helps customers build, manage, and deploy IoT applications at scale on Coral devices. In addition, we've released a series of updates to expand platform compatibility, make development easier, and improve the ML capabilities of our devices.

Open-source Edge TPU runtime now available on GitHub

First up, our Edge TPU runtime is now open-source and available on GitHub, including scripts and instructions for building the library for Linux and Windows. Customers running a platform that is not officially supported by Coral, including ARMv7 and RISC-V, can now compile the Edge TPU runtime themselves and start experimenting. An open-source runtime is easier to integrate into your customized build pipeline, enabling support for creating Yocto-based images as well as other distributions.

Windows drivers now available for the Mini PCIe and M.2 accelerators

Coral customers can now also use the Mini PCIe and M.2 accelerators on the Microsoft Windows platform. New Windows drivers for these products complement the previously released Windows drivers for the USB accelerator and make it possible to start prototyping with the Coral USB Accelerator on Windows and then to move into production with our Mini PCIe and M.2 products.

New fresh bits on the Coral ML software stack

We’ve also made a number of new updates to our ML tools:

  • The Edge TPU compiler is now version 14.1. It can be updated by running sudo apt-get update && sudo apt-get install edgetpu, or by following the instructions here
  • Our new Model Pipelining API allows you to divide your model across multiple Edge TPUs. The C++ version is currently in beta and the source is on GitHub
  • New embedding extractor models for EfficientNet, for use with on-device backpropagation. Embedding extractor models are compiled with the last fully-connected layer removed, allowing you to retrain for classification. Previously, only Inception and MobileNet were available and now retraining can also be done on EfficientNet
  • New Colab notebooks to retrain a classification model with TensorFlow 2.0 and build C++ examples

Balena partners with Coral to enable AI at the edge

We are excited to share that the Balena fleet management platform now supports Coral products!

Companies running a fleet of ML-enabled devices on the edge need to keep their systems up-to-date with the latest security patches in order to protect data, model IP, and hardware from being compromised. Additionally, ML applications benefit from being consistently retrained to recognize new use cases with maximum accuracy. Together, Coral and balena bring simplicity and ease to the provisioning, deployment, updating, and monitoring of your ML project at the edge, moving early prototyping seamlessly toward production environments with many thousands of devices.

Read more about all the benefits of Coral devices combined with balena container technology or get started deploying container images to your Coral fleet with this demo project.

New version of Mendel Linux

Mendel Linux (5.0 release Eagle) is now available for the Coral Dev Board and SoM and includes a more stable package repository that provides a smoother updating experience. It also brings compatibility improvements and a new version of the GPU driver.

New models

Last but not least, we’ve recently released BodyPix, a Google person-segmentation model that was previously only available for TensorFlow.js, as a Coral model. This enables real-time, privacy-preserving understanding of where people (and body parts) are in a camera frame. We first demoed this at CES 2020 and it was one of our most popular demos. Using BodyPix, we can remove people from the frame, display only their outlines, and aggregate over time to see heat maps of population flow.

Here are two possible applications of BodyPix: Body-part segmentation and anonymous population flow. Both are running on the Dev Board.

We’re excited to add BodyPix to the portfolio of projects the community is using to extend our models far beyond our demos -- including tackling today’s biggest challenges. For example, Neuralet has taken our MobileNet V2 SSD Detection model and used it to implement Smart Social Distancing. Using the bounding boxes from person detection, they can compute a region for safe distancing and let a user know if social distance isn’t being maintained. The best part is that this is done without any sort of facial recognition or tracking; with Coral, we can accomplish this in real time in a privacy-preserving manner.
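
To illustrate how simple the distancing check can be once you have person detections, here is a sketch (not Neuralet's actual code) that flags pairs of bounding boxes whose centers are closer than a threshold:

// Boxes are [x, y, width, height] from a person-detection model.
function centroid([x, y, w, h]) {
  return [x + w / 2, y + h / 2];
}

function tooClosePairs(boxes, minDistancePx) {
  const pairs = [];
  for (let i = 0; i < boxes.length; i++) {
    for (let j = i + 1; j < boxes.length; j++) {
      const [x1, y1] = centroid(boxes[i]);
      const [x2, y2] = centroid(boxes[j]);
      if (Math.hypot(x2 - x1, y2 - y1) < minDistancePx) {
        pairs.push([i, j]);
      }
    }
  }
  return pairs;
}

// Two people 50px apart with a 100px threshold are flagged:
tooClosePairs([[0, 0, 20, 40], [50, 0, 20, 40]], 100); // [[0, 1]]

A real deployment would also need to map pixel distances to physical distances, which depends on camera placement and calibration.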

We can’t wait to see more projects that the community makes with BodyPix. Beyond anonymous population flow, there are endless possibilities with background and body-part manipulation. Let us know what you come up with at our community channels, including GitHub and StackOverflow.

________________________

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, including balena, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform and share your feedback at [email protected].

Google Pay plugin for Magento 2

Posted by Soc Sieng, Developer Advocate

We are pleased to announce the launch of the official Google Pay plugin for Magento 2. The Google Pay plugin can help increase conversions by enabling a simpler and more secure checkout experience in your Magento website. When you integrate with Google Pay, your customers can complete their purchases quickly using the payment methods they’ve securely saved to their Google Accounts.

Google Pay in action.

The Google Pay plugin was built in collaboration with Unbound Commerce, is free to use, and integrates with popular payment service providers, including Adyen, BlueSnap, Braintree, FirstData - Payeezy & Ucom, Moneris, Stripe, and Vantiv.

Installation

The Google Pay plugin can be installed from the Magento Marketplace using this link or by searching the Magento Marketplace for “Google Pay”.

Refer to the Magento Marketplace User Guide for more installation instructions.

Getting started

To get started with the Google Pay plugin, you will need your Google Pay merchant identifier which can be found in the Google Pay Business Console.

Your Merchant ID can be found in the Google Pay Business Console.

Configuring the Google Pay plugin

Once installed, you can configure the plugin in your site’s Magento administration console by navigating to Stores > Configuration > Sales > Payment Methods and selecting the Configure button next to Google Pay.

Click on the Configure button to start the setup process.

Testing out Google Pay can be achieved in three easy steps:

  1. Google Pay credentials: enter your Google Pay merchant ID (available from the Google Pay Business Console) and merchant name.
  2. Payment gateway credentials: select your payment gateway from the list of payment gateways supported by the Google Pay plugin.
    1. Choose the Sandbox environment for testing purposes.
    2. Enter your payment gateway’s credentials into their respective form fields.
  3. Google Pay settings: enable Google Pay and choose the card networks that you would like to accept.

You can optionally try out some of the advanced settings, which let you customize the color and type of the Google Pay button and enable Minicart integration, which is recommended.
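
For the curious, the plugin's button settings correspond to the button options in the underlying Google Pay web API, which the plugin wires up for you. A minimal sketch of that API (the container id is made up for illustration):

// Requires the Google Pay JavaScript library:
// <script src="https://pay.google.com/gp/p/js/pay.js"></script>
const paymentsClient = new google.payments.api.PaymentsClient({
  environment: 'TEST' // switch to 'PRODUCTION' when going live
});

const button = paymentsClient.createButton({
  buttonColor: 'black', // maps to the plugin's button color setting
  buttonType: 'long',   // maps to the plugin's button type setting
  onClick: () => {
    // The payment sheet is opened here via loadPaymentData(...)
  }
});

document.getElementById('gpay-button-container').appendChild(button);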

Check out the Advanced Settings to further customize how and where the Google Pay button is presented in your store.

If your payment provider isn’t listed as an option in the payment gateway list, check to see if your payment provider’s plugin has built-in support for Google Pay.

Launching Google Pay for your website

When you’ve completed your testing, submit your website integration in the Google Pay Business Console. You will need to provide your website’s URL and screenshots to complete the submission.

Summing it up

Integrating Google Pay into your website is a great way to increase conversions and to improve the purchasing experience for your customers.

Find out more about Google Pay and the Google Pay plugin for Magento.

What do you think?

Do you have any questions? Let us know in the comments below or tweet using #AskGooglePayDev.