Tag Archives: Design

Pixel art: How designers created the new Pixel 6 colors

During a recent visit to Google’s Color, Material and Finish (better known as CMF) studio, I watched while Jess Ng and Jenny Davis opened drawer after drawer and placed object after object on two white tables. A gold hoop earring, a pale pink shell — all pieces of inspiration that Google designers use to come up with new colors for devices, including the just-launched Pixel 6 and Pixel 6 Pro.

“We find inspiration everywhere,” Jenny says. “It’s not abnormal to have a designer come to the studio with a toothbrush or some random object they found on their walk or wherever.”

The CMF team designs how a Google device will physically look and feel. “Color, material and finish are a big part of what defines a product,” Jess, a CMF hardware designer, says. “It touches on the more emotional part of how we decide what to buy.” And Jenny, CMF Manager for devices and services, agrees. “We always joke around that in CMF, the F stands for ‘feelings,’ so we joke that we design feelings.”

The new Pixel 6 comes in Sorta Seafoam and Kinda Coral, while the Pixel 6 Pro comes in Sorta Sunny and Cloudy White, and both are available in Stormy Black. Behind those five shades are years of work, plenty of trial and error…and lots and lots of fine-tuning. “It’s actually a very complex process,” Jenny says.

Made more complex by COVID-19. Both Jenny and Jess describe the color selection process as highly collaborative and hands-on, which was difficult to accomplish while working from home. Designers aren’t just working with their own teams, but with those on the manufacturing and hardware side as well. “We don’t design color after the hardware design is done — we actually do it together,” Jenny says. The Pixel 6 and Pixel 6 Pro’s new premium look and feel influenced the direction of the new colors, and the CMF team needed to see colors and touch items in order to select and eliminate the shades.

They don’t only go hands-on with the devices, they do the same with sources of inspiration. “I remember one time I really wanted to share this color because I thought it would be really appropriate for one of our products, so I ended up sending my boss one of my sweaters through a courier delivery!” Jenny says. “We found creative workarounds.”

The team that designed the new Pixel 6 and Pixel 6 Pro case colors did as well. “The CMF team would make models and then take photos of the models, and I would try to go in and look at them in person and physically match the case combinations against the different phone colors,” says Nasreen Shad, a Pixel Accessories product manager. “Then we’d render or photograph them and send them around to the team to review and see what was and wasn’t working.” In addition to the challenge of working remotely, Nasreen’s team was also working on something entirely new: colorful, translucent cases.

Nasreen says they didn’t want to cover up the phones, but complement them instead, so they went with a translucent tinted plastic. Each device has a case that corresponds to its color family, but you can mix and match them for interesting new shades.

That process involved lots of experimenting. For example, what eventually became the Golden Glow case started out closer to a bronze color, which didn’t pair as well with the Stormy Black phone. “We had to tune it to a peachy shade, so that it looked good with its ‘intended pairing,’ Sorta Sunny, but with everything else, too. That meant ordering more resins and color chips in different tones, but it ended with some really beautiful effects.”

Beautiful effects, and tons of options. “I posted a picture of all of the possible combinations you can make with the phones and the cases and people kept asking me, ‘how many phones did Google just release!?’” Nasreen laughs. “And I had to be like, ‘No, no, no, these are just the cases!’”

A photograph showing the various Pixel 6 and Pixel 6 Pro phones in different colors in different colored cases, illustrating how many options there are.

Google designers often only know the devices and colors by temporary, internal code names. It's up to their colleagues to come up with the names you see on the Google Store site now. But one person who absolutely knows their official names is Lily Hackett, a Product Marketing Manager who works on a team that names device colors. “The way that we go about color naming is unique,” she says. “We like to play on the color. When you think about it, it’s actually very difficult to describe color, and the colors we often use are subtle — so we like to be specific with our approach to the name.”

Because color can be so subjective (one person’s white and gold dress is another’s black and blue dress), Lily’s team often checks in with CMF designers to make sure the words and names they’re gravitating toward actually describe the colors accurately. “It’s so nice to go to color experts and say, ‘Is this right? Is this a word you would use to describe this color?’”

Lily says their early brainstorming sessions can result in lists of 75 or more options. “It’s truly a testament to our copywriting team. When we were brainstorming for Stormy Black, they had everything under the sun — they had everything under the moon! It was incredible to see how many words they came up with.”

These days, everyone is looking ahead to new colors and new names, and the team is excited for the rest of the world to finally see their work. “I couldn’t wait for them to come out,” Lily says. “My favorite color was even the first to sell out on the Google Store! I was like, ‘Yes, everyone else loves it, too!’”

New designs for Chrome and Chrome OS, by Latino artists

As we celebrate National Hispanic Heritage Month, we pay tribute to the generations of Latinos who have positively influenced and enriched society, arts, culture and science in the United States.

As a proud Latina, I have seen first hand how our diversity is our strength. We use various terms to define ourselves (Hispanic, Latinx, Latino, Black, Mexican, Salvadoran, Puerto Rican, Brazilian, and more), yet we still can come together as one resilient community.

This year Chrome partnered with Latino artists to create a collection of themes that celebrate our heritage. You can use them to customize your Chrome browser and Chromebook wallpapers. The work reflects a variety of meaningful subjects, from family to the subtle ways we all stay connected. This collection continues our work commissioning contemporary artists to visually show how people use Chrome and Chromebooks to get things done, explore, find and connect. 

Meet the commissioned artists, and browse the 20 new backgrounds in the collection on the Chrome Web Store or in your Chromebook wallpaper gallery.

Recommended strategies and best practices for designing and developing games and stories on Google Assistant

Posted by Wally Brill and Jessica Dene Earley-Cha

Illustration of pink car collecting coins

Since we launched Interactive Canvas, and especially in the last year, we have been helping developers create great storytelling and gaming experiences for Google Assistant on smart displays. Along the way we’ve learned a lot about what does and doesn’t work. Building these kinds of interactive voice experiences is still a relatively new endeavor, so we want to share what we've learned to help you build the next great gaming or storytelling experience for Assistant.

Here are three key things to keep in mind when you’re designing and developing interactive games and stories. These three were selected from a longer list of lessons learned (stay tuned to the end for the link to the 10+ lessons) because they depend on Actions Builder/SDK functionality and differ slightly from traditional conversation design for voice-only experiences.

1. Keep the Text-To-Speech (TTS) brief

Text-to-speech, or computer-generated voice, has improved dramatically in the last few years, but it isn’t perfect. Through user testing, we’ve learned that users (especially kids) don’t like listening to long TTS messages. Of course, some content (like interactive stories) should not be reduced. However, for games, try to keep your script simple. Wherever possible, leverage the power of the visual medium and show, don’t tell. Consider providing a skip button on the screen so that users can read and move forward without waiting until the TTS is finished. The TTS and the text on screen don’t always need to mirror each other. For example, the TTS may say "Great job! Let's move to the next question. What’s the name of the big red dog?" while the text on screen simply says "What is the name of the big red dog?"

Implementation

You can provide different audio and screen-based prompts by using a simple response, which allows different wording in the speech and text sections of the response. With Actions Builder, you can do this using the node client library or in the JSON response. The following code samples show how to implement the example discussed above:

candidates:
  - first_simple:
      variants:
        - speech: Great job! Let's move to the next question. What’s the name of the big red dog?
          text: What is the name of the big red dog?

Note: implementation in YAML for Actions Builder

app.handle('yourHandlerName', conv => {
  conv.add(new Simple({
    speech: 'Great job! Let\'s move to the next question. What’s the name of the big red dog?',
    text: 'What is the name of the big red dog?'
  }));
});

Note: implementation with node client library

2. Consider both first-time and returning users

Frequent users don't need to hear the same instructions repeatedly, so optimize the experience for returning users. If it's a user's first time, explain the full context. If they revisit your Action, acknowledge their return with a "Welcome back" message and shorten (or taper) the instructions. If the user has returned more than three or four times, get to the point as quickly as possible.

An example of tapering:

  • Instructions to first time users: “Just say words you can make from the letters provided. Are you ready to begin?”
  • For a returning user: “Make up words from the jumbled letters. Ready?”
  • For a frequent user: “Are you ready to play?”

Implementation

You can check the lastSeenTime property in the User object of the HTTP request. The lastSeenTime property is a timestamp of the last interaction with this particular user; if this is the first time a user is interacting with your Action, the field is omitted. Since it’s a timestamp, you can have different messages for a user whose last interaction was more than 3 months, 3 weeks or 3 days ago. Below is an example that defaults to the tapered message. If the lastSeenTime property is omitted, meaning it's the first time the user is interacting with this Action, the message is replaced with the longer version containing more details.

app.handle('greetingInstructions', conv => {
  let message = 'Make up words from the jumbled letters. Ready?';
  if (!conv.user.lastSeenTime) {
    message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  conv.add(message);
});
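Because lastSeenTime is a timestamp, you can go further and taper by recency. The following sketch is hypothetical (the 90-day threshold and message wording are illustrative choices, not from the post), assuming lastSeenTime arrives as an ISO 8601 timestamp string as the node client library provides it:

```javascript
// Hypothetical sketch: pick a greeting based on how recently the user
// was last seen. lastSeenTime is an ISO 8601 timestamp string, or
// absent on the user's first interaction with the Action.
const DAY_MS = 24 * 60 * 60 * 1000;

function greetingFor(lastSeenTime, now = Date.now()) {
  if (!lastSeenTime) {
    // First interaction: give the full instructions.
    return 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  const daysSince = (now - Date.parse(lastSeenTime)) / DAY_MS;
  if (daysSince > 90) {
    // Long absence: welcome the user back with a short refresher.
    return 'Welcome back! Make up words from the jumbled letters. Ready?';
  }
  // Recent user: get straight to the point.
  return 'Are you ready to play?';
}

// In a handler you might then write:
//   app.handle('greetingInstructions', conv => conv.add(greetingFor(conv.user.lastSeenTime)));
```

Keeping the selection logic in a pure function like greetingFor also makes the tapering behavior easy to unit test without a live conversation.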

Note: implementation with node client library

3. Support strongly recommended intents

There are some commonly used intents which really enhance the user experience by providing some basic commands to interact with your voice app. If your action doesn’t support these, users might get frustrated. These intents help create a basic structure to your voice user interface, and help users navigate your Action.

  • Exit / Quit

    Closes the action

  • Repeat / Say that again

    Makes it easy for users to hear immediately preceding content at any point

  • Play Again

    Gives users an opportunity to re-engage with their favorite experiences

  • Help

    Provides more detailed instructions for users who may be lost. Depending on the type of Action, this may need to be context specific. Defaults returning users to where they left off in game play after a Help message plays.

  • Pause, Resume

    Provides a visual indication that the game has been paused, and provides both visual and voice options to resume.

  • Skip

    Moves to the next decision point.

  • Home / Menu

    Moves to the home or main menu of an action. Having a visual affordance for this is a great idea. Without visual cues, it’s hard for users to know that they can navigate through voice even when it’s supported.

  • Go back

    Moves to the previous page in an interactive story.

Implementation

Actions Builder and the Actions SDK support system intents that cover a few of these use cases and come with Google-supported training phrases:

  • Exit / Quit -> actions.intent.CANCEL This intent is matched when the user wants to exit your Action during a conversation, such as a user saying, "I want to quit."
  • Repeat / Say that again -> actions.intent.REPEAT This intent is matched when a user asks the Action to repeat.

For the remaining intents, you can create User Intents and you have the option of making them Global (where they can be triggered at any Scene) or add them to a particular scene. Below are examples from a variety of projects to get you started:
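As one hedged illustration of the pattern (the helper and handler names below are hypothetical, not from the post), a custom "Repeat / Say that again" user intent can be backed by session params in the node client library — remember the last prompt whenever you speak, then replay it when the repeat intent matches:

```javascript
// Hypothetical sketch of a custom "Repeat / Say that again" intent:
// wrap prompts in a helper that remembers the last prompt in session
// params, then replay it from a global 'repeat' intent handler.

// Speak a prompt and remember it for a later "say that again".
function say(conv, text) {
  conv.session.params.lastPrompt = text;
  conv.add(text);
}

// Replay the stored prompt, with a fallback if nothing was said yet.
function handleRepeat(conv) {
  conv.add(conv.session.params.lastPrompt || "Sorry, there's nothing to repeat.");
}

// With the real library you would register these handlers, e.g.:
//   app.handle('askQuestion', conv => say(conv, "What's the name of the big red dog?"));
//   app.handle('repeat', handleRepeat);
```

Making the repeat intent global lets users say "say that again" at any scene, which is exactly the navigational safety net this list of intents is meant to provide.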

So there you have it: three suggestions to keep in mind for making interactive games and story experiences that people will want to use over and over again. To check out the full list of our recommendations, go to the Lessons Learned page.

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!

A closer look at the new Nest Hub’s design details

For the Nest Industrial Design team, details matter. Working on the new Nest Hub was no exception. "When we approached the design of the new Nest Hub, we wanted to give the product a lighter, more effortless aesthetic,” says team lead Katie Morgenroth. “We wanted it to feel evolved and refined, not reinvented.” Styling alone shouldn’t be the reason to replace a product, she says. “We want to make sure whether you have one Nest product or many, that they all complement each other in your space.”

Because of this considered approach, you might not immediately notice some of the more subtle updates. We took some time to talk to Katie, as well as Industrial Design lead Jason Pi and Color and Material designer Vicki Chuang, about some of the new additions worth a second glance — or even a third, or a fourth, or a … you get the idea.

The new, cool color. The team introduced the new Mist color because it’s in the cool family and complements nature. It’s soothing, and almost looks like a neutral. Vicki led the color and material design, and says that atmospheric colors like Mist help express “soft feelings.” “Color enhances well-being. Mist is inspired by the sky, it complements nature,” she says. “We started with a range of blues from light pastel to saturated blue, and the soft muted blue felt the most soothing and relaxing — a good fit for the home.”


Don’t forget the feet. Peek underneath the Nest Hub to see the silicone feet. “We try to have a little fun with color there,” Katie says. “We were inspired by the color you see when you cut into a fruit like a guava or a watermelon — it makes you smile.”


The inspiration for edgeless. Our idea for the edgeless display was the look of a piece of artwork or picture frame with a white border. The new Nest Hub has a lighter, more effortless feel, as Katie describes it. “All you see from the front is the glass. It makes the display almost feel like it’s floating.” 


Jason also adds that the general construction was an upgrade. "We’re very proud of the matte finish and silky feel of the display enclosure, which is also more sustainable even though it has a premium feel to it.” In fact, the new Nest Hub was designed with 54% of its plastic part weight made with recycled material.

A new knit. The new Nest Hub uses the same sustainable yarn recycled from PET bottles that the Minis use, just slightly modified. We used a recycled monofilament yarn, which gives the device a structure that’s ideal for sound quality. “The fabric was reengineered to be not only sustainable but also optimized for great acoustic transmission,” Vicki says.


And look a little closer…and you’ll see the team color matched the device down to the yarn level, so there’s a subtle blending effect in the overall look of the speaker. “That effect is called ‘melange’ and it’s created when there are two colors of yarn knit together to create a variation in the tone,” Katie explains. 


A hard switch. We first introduced the privacy switch with the Home Mini and it’s been a part of every Nest device since, including the new Nest Hub. The hard switch completely disables the microphones, and the new Nest Hub also has added LED lights to the front of the display that indicate when the switch is on or off. Keeping this consistent across all Nest devices was important to the team, because privacy isn’t something they wanted to overcomplicate. “From the beginning we always wanted to continue the precedent we set with the physical privacy button and include it on Nest Hub,” Jason says. “There is something definitive about having it be a physical switch. I also like the color pop that's visible once it’s on mute — it’s a nice, clear indicator.” Plus, it’s one more place designers get to have a little fun.

Celebrate Black creative visions with Chrome

This Black History Month, the Chrome team is showcasing exciting new work by Black artists in a collection of themes that let you customize the look of your browser.

We commissioned six contemporary artists and invited them to turn Chrome into their canvas. Working in different mediums and bringing different points of view, each artist has presented their interpretation of the ways people use Chrome: finding new knowledge, connecting with each other, exploring our world and taking action towards our goals.

Our design team crafted themes around their work to fuse them seamlessly into Chrome, coordinating the colors of your tabs and making sure the work looks great on all types of laptop and desktop screens.

We drew inspiration from the #drawingwhileblack hashtag, organized by featured artist Abelle Hayford, as well as from the many artists who have used their talents to advance the call for justice and give us visions of a better future. We hope these themes help you discover new artists, and bring you energy and joy throughout your day as you go to new places through art. 

Browse all 24 themes in the collection on the Chrome Web Store, and read on to hear from the artists:


A Google designer takes us inside Search’s mobile redesign

The beginning of a new year inspires people everywhere to make changes. It's when many of us take stock of our lives, our careers or even just our surroundings and think about what improvements we can make. That's also been the case for Google designer Aileen Cheng. Aileen recently led a major visual redesign of the mobile Search experience, which rolls out in the coming days. “We wanted to take a step back to simplify a bit so people could find what they’re looking for faster and more easily,” she says. “I find it really refreshing. To me, it’s a breath of fresh air!” 

Like all organizing efforts, this one came with its challenges. “Rethinking the visual design for something like Search is really complex,” Aileen says. “That’s especially true given how much Google Search has evolved. We’re not just organizing the web’s information, but all the world’s information. We started with organizing web pages, but now there’s so much diversity in the types of content and information we have to help make sense of.”

Image showing a mock-up of a Pixel phone with Google Search pulled up on the screen. The search results show answers about Humpback whales, including two images.

We recently had the chance to learn more about the new look from Aileen, as well as the process. Here are five things that drove the redesign: 

1. Bringing information into focus. “We want to let the search results shine, allowing people to focus on the information instead of the design elements around it,” says Aileen. “It’s about simplifying the experience and getting people to the information they’re looking for as clearly and quickly as possible.” 

2. Making text easier to read. One way the team did this was by using larger, bolder text, so the human eye can scan and understand Search results faster. “We’re making the result and section titles bigger, as well,” Aileen says. While we’re on the subject of text: The update also includes more of Google’s own font, which already shows up in Android and Gmail, among other Google products. “Bringing consistency to when and how we use fonts in Search was important, too, which also helps people parse information more efficiently,” Aileen explains. 

Image showing a phone with Google Search pulled up on the screen. The search query is for "running spots sf."

3. Creating more breathing room. “We decided to create a new edge-to-edge results design and to minimize the use of shadows, making it easier to immediately see what you’re looking for,” says Aileen. “The overall effect is that you have more visual space and breathing room for Search results and other content to take center stage.”

4. Using color to highlight what’s important. Aileen says that some other iterations of the redesign experimented with using lots of bold colors, and others tried more muted tones. They weren’t quite right, though, and ultimately the team focused on centering content and images against a clean background and using color more intentionally to guide the eye to important information without being overwhelming or distracting. “It has an optimistic feel, too,” Aileen says.  

5. Leaning into that “Googley” feeling. If you’re noticing the new design feels a little bubblier and bouncier, you’re onto something. “If you look at the Google logo, you’ll notice there’s a lot of roundness to it, so we’re borrowing from that and bringing it to other places as well,” says Aileen. You’ll see that in parts of this redesign, like in rounded icons and imagery. “That form is already so much a part of our DNA. Just look at the Search bar, or the magnifying glass,” Aileen points out.

Image showing Google logo with design effects pointing to its roundness.

Part of the work is also in refreshing the look while remaining familiar. “My three-year-old recently dropped a handful of Legos in my hand, red, yellow, green, blue, and he told me, ‘Mama, this is Google,’” Aileen says. “That’s how playful and well known we are to people. And when we redesign something, we want to bring that familiarity and approachability with us, too.”

Source: Search


Fernanda Viégas puts people at the heart of AI

When Fernanda Viégas was in college, it took three years with three different majors before she decided she wanted to study graphic design and art history. And even then, she couldn’t have imagined the job she has today: building artificial intelligence and machine learning with fairness and transparency in mind to help people in their daily lives.  

Today Fernanda, who grew up in Rio de Janeiro, Brazil, is a senior researcher at Google. She’s based in London, where she co-leads the global People + AI Research (PAIR) Initiative, which she co-founded with fellow senior research scientist Martin M. Wattenberg and Senior UX Researcher Jess Holbrook, and the Big Picture team. She and her colleagues make sure people at Google think about fairness and values, and about putting Google’s AI Principles into practice, when they work on artificial intelligence. Her team recently launched a series of “AI Explorables,” a collection of interactive articles to better explain machine learning to everyone.

When she’s not looking into the big questions around emerging technology, she’s also an artist, known for her artistic collaborations with Wattenberg. Their data visualization art is a part of the permanent collection of the Museum of Modern Art in New York.  

I recently sat down with Fernanda via Google Meet to talk about her role and the importance of putting people first when it comes to AI. 

How would you explain your job to someone who isn't in tech?

As a research scientist, I try to make sure that machine learning (ML) systems can be better understood by people, to help people have the right level of trust in these systems. One of the main ways in which our work makes its way to the public is through the People + AI Guidebook, a set of principles and guidelines for user experience (UX) designers, product managers and engineering teams to create products that are easier to understand from a user’s perspective.

What is a key challenge that you’re focusing on in your research? 

My team builds data visualization tools that help people building AI systems to consider issues like fairness proactively, so that their products can work better for more people. Here’s a generic example: Let’s imagine it's time for your coffee break and you use an app that uses machine learning for recommendations of coffee places near you at that moment. Your coffee app provides 10 recommendations for cafes in your area, and they’re all well-rated. From an accuracy perspective, the app performed its job: It offered information on a certain number of cafes near you. But it didn’t account for unintended unfair bias. For example: Did you get recommendations only for large businesses? Did the recommendations include only chain coffee shops? Or did they also include small, locally owned shops? How about places with international styles of coffee that might be nearby? 

The tools our team makes help ensure that the recommendations people get aren’t unfairly biased. By making these biases easy to spot with engaging visualizations of the data, we can help identify what might be improved. 
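The kind of bias check Fernanda describes can be sketched in a few lines. This is a toy illustration with made-up data, not PAIR's actual tooling: it compares how often each type of business appears in the recommendations versus the full pool of nearby candidates, which is one simple way to surface the skew she mentions.

```python
from collections import Counter

def representation_gap(candidates, recommended, attribute):
    """Compare an attribute's share among recommendations vs. the candidate pool.

    Positive gap = over-represented in recommendations; negative = under-represented.
    """
    pool = Counter(c[attribute] for c in candidates)
    recs = Counter(c[attribute] for c in recommended)
    gaps = {}
    for value, pool_count in pool.items():
        pool_share = pool_count / len(candidates)
        rec_share = recs.get(value, 0) / len(recommended)
        gaps[value] = rec_share - pool_share
    return gaps

# Toy data: 10 nearby cafes, but the app only recommends the 4 chains.
cafes = [{"name": f"cafe{i}", "type": "chain" if i < 4 else "local"}
         for i in range(10)]
recommended = cafes[:4]

print(representation_gap(cafes, recommended, "type"))
# Chains are over-represented; locally owned shops never appear.
```

Plotted instead of printed, a gap like this is exactly the sort of pattern an engaging visualization makes easy to spot.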

What inspired you to join Google? 

It’s so interesting to consider this because my story comes out of repeated failures, actually! When I was a student in Brazil, where I was born and grew up, I failed repeatedly in figuring out what I wanted to do. After spending three years studying for different things—chemical engineering, linguistics, education—someone said to me, “You should try to get a scholarship to go to the U.S.” I asked them why I should leave my country to study somewhere when I wasn’t even sure of my major. “That's the thing,” they said. “In the U.S. you can be undecided and change majors.” I loved it! 

So I went to the U.S. and by the time I was graduating, I decided I loved design but I didn't want to be a traditional graphic designer for the rest of my life. That’s when I heard about the Media Lab at MIT and ended up doing a master's degree and PhD in data visualization there. That’s what led me to IBM, where I met Martin M. Wattenberg. Martin has been my working partner for 15 years now; we created a startup after IBM and then Google hired us. In joining, I knew it was our chance to work on products that have the possibility of affecting the world and regular people at scale. 

Two years ago, we shared our seven AI Principles to guide our work. How do you apply them to your everyday research?

One recent example is from our work with the Google Flights team. They offered users alerts about the “right time to buy tickets,” but users were asking themselves, Hmm, how do I trust this alert?  So the designers used our PAIR Guidebook to underscore the importance of AI explainability in their discussions with the engineering team. Together, they redesigned the feature to show users how the price for a flight has changed over the past few months and notify them when prices may go up or won’t get any lower. When it launched, people saw our price history graph and responded very well to it. By using our PAIR Guidebook, the team learned that how you explain your technology can significantly shape the user’s trust in your system. 
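The key idea in the Flights example is that the alert carries its evidence with it. As a purely hypothetical sketch (not Google Flights' actual logic), an explainable price alert might return the recent history alongside its verdict, so the user can judge the advice for themselves:

```python
def price_alert(history):
    """Given recent prices (oldest first), return a verdict plus the
    evidence behind it. Toy heuristic for illustration only."""
    current = history[-1]
    low, high = min(history), max(history)
    if current <= low:
        verdict = "Prices are at their lowest point in this window."
    elif current >= high:
        verdict = "Prices are at their highest point in this window."
    else:
        verdict = "Prices are between recent lows and highs."
    # Returning the underlying numbers is what makes the alert explainable:
    # the UI can chart them instead of asking users to trust a bare claim.
    return {"verdict": verdict, "current": current, "low": low, "high": high}

print(price_alert([420, 390, 405, 380]))
```

The difference between returning just `verdict` and returning the whole dictionary is the difference Fernanda describes: the same prediction, but with the context that earns the user's trust.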

Historically, ML has been evaluated along the lines of mathematical metrics for accuracy—but that’s not enough. Once systems touch real lives, there’s so much more you have to think about, such as fairness, transparency, bias and explainability—making sure people understand why an algorithm does what it does. These are the challenges that inspire me to stay at Google after more than 10 years. 

What’s been one of the most rewarding moments of your career?

Whenever we talk to students and there are women and minorities who are excited about working in tech, that’s incredibly inspiring to me. I want them to know they belong in tech, they have a place here. 

Also, working with my team on a Google Doodle about the composer Johann Sebastian Bach last year was so rewarding. It was the very first time Google used AI for a Doodle and it was thrilling to tell my family in Brazil, look, there’s an AI Doodle that uses our tech! 

How should aspiring AI thinkers and future technologists prepare for a career in this field? 

Try to be deep in your field of interest. If it’s AI, there are so many different aspects to this technology, so try to make sure you learn about them. AI isn’t just about technology. It’s always useful to be looking at the applications of the technology, how it impacts real people in real situations.

Introducing an easier way to design your G Suite Add-on

Posted by Kylie Poppen, Senior Interaction Designer, G Suite and Akshay Potnis, Interaction Designer, G Suite

You’ve just scoped out an awesome new way to solve for your customer’s next challenge, but wait, what about the design? Building an integration between your software platform and another comes with a laundry list of things to think about: your vision, your users, their experience, your partners, APIs, developer docs, and so on. Caught between two different platforms, many constraints, and limited time, you're probably wondering: how might we build the most intuitive and powerful user experience?

Imagine making a presentation: with Google Slides you have all sorts of templates to get you started, and you can build a great deck easily. But to build a seamless integration between two software platforms, those pre-built templates don’t exist and you basically have to start from scratch. In the best-case scenario, you’d create your own components and layer them on top of each other with the goal of making the UI seem just about right. But this takes time: hours longer than you want it to. Without design guidelines, you're stuck guessing what is or isn't possible, looking at other apps and emulating what they've already done. The result is that some add-ons offer a suboptimal experience — time is limited, so you build only for what you know you can do, rather than what's actually possible.

To simplify all of this, we’re introducing the G Suite Add-ons UI Design Kit, now live on Figma. With it you can browse all of the components of G Suite Add-ons’ card-based interface, learn best practices, and simply drag-and-drop to create your own unique designs. Save the time spent recreating what an add-on will look like, so that you can spend that time thinking about how your add-on will work.

While the UI Design Kit has only been live for a little over a month, we’ve already been hearing feedback from our partners about its impact.

“Zapier connects more than 2,000 apps, allowing businesses to automate repetitive, time-consuming tasks. When building these integrations, we want to ensure a seamless experience for our customers,” said Ryan Powell, Product Manager at Zapier. “However, a partner’s UI can be difficult to navigate when starting from scratch. G Suite’s UI Design Kit allows us to build, test and optimize integrations because we know from the start what is and is not possible inside of G Suite’s UI.”

Here’s how to use the UI Design Kit:

Step 1

Find and duplicate design kit

  • Search for G Suite on the Figma community or use this link
  • Open the G Suite Add-ons UI Design Kit
  • Click the duplicate button.

Step 2

Choose a template to begin

  • Go to UI templates page
  • Select a template from the list of templates

Step 3

Copy the template and detach from symbols to start editing

Helpful Hints: Features to help you iterate quickly

Build with auto layout so you don’t need to worry about the details.

  • Copy-paste maintains layout padding and structure.
  • Padding and structure are maintained while editing.
  • Built-in fixed footer and peek cards.

Visualize your design against G Suite surfaces easily.

Documentation built right into the template.

  1. Go to the component page (e.g. Section)
  2. Find layout and documentation/API links on the respective pages

Next Steps to Consider:

With G Suite Add-ons, users and admins can seamlessly get their work done, across their favorite workplace applications, without needing to leave G Suite. With this UI Design Kit, you too can focus your time on building a great user experience inside of G Suite, while simplifying and accelerating the design process. Follow these steps to get started today:

Download the UI Design Kit

Get started with G Suite Add-ons

Hopefully this will inspire you to build more add-ons using the Cards Framework! To learn more about building for G Suite, check out the developer page, and please register for Next OnAir, which kicks off July 14th.
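The card-based interface the design kit mirrors is ultimately rendered from structured card data: a header plus sections of widgets. As a rough, hedged illustration of that shape — the field names below are illustrative, not the exact Cards Framework schema, so check the developer docs for the real markup — a card can be sketched as a plain object:

```javascript
// Build a card-shaped object: a header plus one section of text widgets.
// Structure is illustrative only; consult the Cards Framework developer
// documentation for the actual card markup and widget names.
function makeCard(title, paragraphs) {
  return {
    header: { title: title },
    sections: [
      {
        widgets: paragraphs.map(function (text) {
          return { textParagraph: { text: text } };
        }),
      },
    ],
  };
}

const card = makeCard("My Add-on", ["Hello from the sidebar!"]);
console.log(JSON.stringify(card, null, 2));
```

Designing in the UI Design Kit first means you already know which of these building blocks exist before you write a line of card code.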