
Ask a Techspert: How do machine learning models explain themselves?

Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party. 

A few years ago, I learned that a translation from Finnish to English using Google Translate led to an unexpected outcome. The sentence “hän on lentäjä” became “he is a pilot” in English, even though “hän” is a gender-neutral word in Finnish. Why did Translate assume it was “he” as the default? 

As I started looking into it, I became aware that just like humans, machines are affected by society’s biases. The machine learning model for Translate relied on training data, which consisted of the input from hundreds of millions of already-translated examples from the web. “He” was more associated with some professions than “she” was, and vice versa. 

Now, Google provides options for both feminine and masculine translations when adapting gender-neutral words in several languages, and there’s a continued effort to roll it out more broadly. But it’s still a good example of how machine learning can reflect the biases we see all around us. Thankfully, there are teams at Google dedicated to finding human-centered solutions to making technology inclusive for everyone. I sat down with Been Kim, a Google researcher working on the People + AI Research (PAIR) team, who devotes her time to making sure artificial intelligence puts people, not machines, at its center, and helping others understand the full spectrum of human interaction with machine intelligence. We talked about how you make machine learning models easy to interpret and understand, and why it’s important for everybody to have a basic idea of how the technology works.

Been Kim

Why is this field of work so important?

Machine learning is such a powerful tool, and because of that, you want to make sure you’re using it responsibly. Let’s take an electric machine saw as an example. It’s a super powerful tool, but you need to learn how to use it in order not to cut your fingers. Once you learn, it’s so useful and efficient that you’ll never want to go back to using a hand saw. And the same goes for machine learning. We want to help you understand and use machine learning correctly, fairly and safely. 

Since machine learning is used in our everyday lives, it’s also important for everyone to understand how it impacts us. Whether you’re a coffee shop owner using machine learning to optimize the purchase of your beans based on seasonal trends, or a patient whose doctor diagnoses a disease with the help of this technology, it’s often crucial to understand why a machine learning model has produced the outcome it has. To make that possible, developers and decision-makers need to be able to explain or present a machine learning model to people. This is what we call “interpretability.” 

How do you make machine learning models easier to understand and interpret? 

There are many different ways to make an ML model easier to understand. One way is to make the model reflect how humans think from the start, and “train” it to provide explanations along with its predictions, meaning that when it gives you an outcome, it also has to explain how it got there. 

Another way is to try and explain a model after it has been trained on data. This is something you can do when the model has been built simply to turn inputs into outputs, optimizing for prediction without any clear “how” included. You can plug things into it and see what comes out, which gives you some insight into how the model generally makes decisions, but you don’t necessarily know exactly how it interprets specific inputs in specific cases. 

One way to explain models after they’ve been trained is by using low-level features or high-level concepts. Let me give you an example of what this means. Imagine a system that classifies pictures: you give it a picture and it says, “This is a cat.” With low-level features, I ask the machine which pixels mattered for that prediction, and it can tell us whether it was one pixel or another; we might be able to see that the pixels in question show the cat’s whiskers. But we might also see that it’s a scattering of pixels that don’t appear meaningful to the human eye, or that the model has made the wrong interpretation. High-level concepts are more similar to the way humans communicate with one another. Instead of asking about pixels, I’d ask, “Did the whiskers matter for the prediction? Or the paws?” and again, the machine can show me what imagery led it to reach this conclusion. Based on the outcome, I can understand the model better. (Together with researchers from Stanford, we’ve published papers that go into further detail on this for those who are interested.)
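For readers who want to see the pixel-level idea in code, here’s a minimal sketch of a vanilla saliency map, assuming PyTorch. The tiny untrained network and the random image are stand-ins for a real classifier and a real photo, not the method from the papers above; the technique shown is simply the gradient of the class score with respect to each input pixel.

```python
# A minimal saliency-map sketch: which pixels does the "cat" score
# depend on most? Model and image are stand-ins, not a real system.
import torch
import torch.nn as nn

model = nn.Sequential(             # stand-in image classifier
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),               # two classes: cat / not-cat
)
model.eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in photo
cat_score = model(image)[0, 0]     # score for the "cat" class
cat_score.backward()               # gradients flow back to the pixels

# A large |gradient| means the prediction is sensitive to that pixel.
saliency = image.grad.abs().max(dim=1).values  # collapse color channels
print(saliency.shape)  # (1, 64, 64): one importance value per pixel
```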

Can machines understand some things that we humans can’t? 

Yes! This is an area that I am very interested in myself. I am currently working on a way to showcase how technology can help humans learn new things. Machine learning technology is better at some things than we are; for example, it can analyze and interpret data at a much larger scale than humans can. Leveraging this technology, I believe we can give human scientists knowledge they haven’t previously been aware of. 

What do you need to be careful of when you’re making conclusions based on machine learning models?

First of all, we have to be careful that human bias doesn’t come into play. Humans carry biases that we simply cannot help and are often unaware of, so if an explanation is up to a human’s interpretation, and often it is, then we have a problem. Humans read what they want to read. Now, this doesn’t mean that you should remove humans from the loop. Humans communicate with machines, and vice versa. Machines need to communicate their outcomes in the form of a clear statement using quantitative data, not one that is vague and completely open to interpretation. If the latter happens, then the machine hasn’t done a very good job, and the human isn’t able to provide good feedback to the machine. It could also be that the outcome simply lacks additional context that only the human can provide, or that it would benefit from caveats, so the human can make an informed judgment about the results of the model. 

What are some of the main challenges of this work? 

Well, one of the challenges for computer scientists in this field is dealing with non-mathematical objectives: things you might want to optimize for, but don’t have an equation for. You can’t always define what is good for humans using math. That requires us to test and evaluate methods with rigor, and to have a table full of different people to discuss the outcome. Another challenge has to do with complexity. Humans are so complex that we have a whole field of work, psychology, to study them. So in my work, we don’t just have computational challenges, but also complex humans that we have to consider. Value-based questions such as “what defines fairness?” are even harder. They require interdisciplinary collaboration, and a diverse group of people in the room to discuss each individual matter.

What's the most exciting part? 

I think interpretability research and methods are making a huge impact. Machine learning technology is a powerful tool that will transform society as we know it, and helping others to use it safely is very rewarding. 

On a more personal note, I come from South Korea and grew up in circumstances where I feel I didn’t have too many opportunities. I was incredibly lucky to get a scholarship to MIT and come to the U.S. When I think about the people who haven’t had these opportunities to be educated in science or machine learning, and I know that this machine learning technology can really help them in their everyday lives if they use it safely, I feel really motivated to be working on democratizing this technology. There are many ways to do it, and interpretability is one of the ways I can contribute.

Ask a Techspert: How does motion sensing work?


Thanks to my allergies, I’ve never had a cat. They’re cute and cuddly for about five minutes—until the sneezing and itching set in. Still, I’m familiar enough with cats (and cat GIFs) to know that they always have a paw in the air, whether it’s batting at a toy or trying to get your attention. Whatever it is they’re trying to do, it often looks like they’re waving at us. So imagine my concern when I found out that you can now change songs, snooze alarms or silence a ringing phone on your Pixel 4 with the simple act of waving. What if precocious cats everywhere started unintentionally making us sleep late by waving their paws?

Fortunately, that’s not a problem. Google’s motion sensing radar technology—a feature called Motion Sense in the Pixel 4—is designed so that only human hands, as opposed to cat paws, can change the tracks on your favorite playlist. So how does this motion sensing actually work, and how did Google engineers design it to identify specific motions? 

To answer my questions, I found our resident expert on motion sensors, Brandon Barbello. Brandon is a product manager on our hardware team and he helped me unlock the mystery behind the motion sensors on your phone, and how they only work for humans. 

When I’m waving my hand in front of my screen, how can my phone sense something is there? 

Brandon tells me that your Pixel phone has a chip at the top with a series of antennas: some emit a radio signal, and others receive “bounce backs” of that same signal. “Those radio signals go out into the world, and then they hit things and bounce back. The receiver antennas read the signals as they bounce back, and that’s how they’re able to sense something has happened. Your Pixel actually has four antennas: one that sends out signals, and three that receive.”
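To make the bounce-back idea concrete, here’s a minimal sketch of the classic radar ranging calculation, not the Pixel’s actual processing: a radio wave travels at the speed of light, so half the round-trip time of the echo tells you how far away an object is. The delay value here is a made-up stand-in.

```python
# A minimal radar-ranging sketch (not the Pixel's actual pipeline):
# distance = (speed of light x round-trip time) / 2, halved because
# the signal travels out to the object and back.
SPEED_OF_LIGHT = 299_792_458       # meters per second
round_trip_s = 3.3e-9              # stand-in echo delay: ~3.3 nanoseconds
distance_m = SPEED_OF_LIGHT * round_trip_s / 2
print(f"The object is roughly {distance_m:.2f} m away")  # ~0.49 m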

What happens after the antenna picks up the motion? 

According to Brandon, when the radio waves bounce back, the computer in your phone begins to process the information. “Essentially, the sensor picks up that you’re around, and that triggers your phone to keep an eye out for the relevant gestures,” he says.

How does the Pixel detect that a motion is a swipe and not something else? 

With the motion sensing functions on the Pixel, Brandon and his team use machine learning to determine what happened. “Those radio waves get analyzed and reduced into a series of numbers that can be fed into the machine learning models that detect if a reach or a swipe has just happened,” Brandon says. “We collected millions of motion samples to pre-train each phone to recognize intentional swipes. Specifically, we’ve trained the models to detect motions that look like they come from a human hand, and not, for instance, a coffee mug passing over the phone as you put it down on the table.”
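Here’s a minimal sketch of that “reduced into a series of numbers” step, assuming NumPy. A real radar pipeline is far more involved; this simply turns a window of simulated received samples into a small feature vector, where a moving hand would show up as a Doppler shift in the frequencies. The gesture classifier that would consume the vector is only gestured at in a comment.

```python
# A minimal "radio waves to numbers" sketch: featurize one window of
# (simulated) received samples for a downstream gesture model.
import numpy as np

fs = 1000                           # stand-in sample rate (Hz)
t = np.arange(0, 0.25, 1 / fs)

# Simulated bounce-back: a moving hand slightly shifts the reflected
# frequency (the Doppler effect); noise models everything else.
received = np.sin(2 * np.pi * 102.0 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(received * np.hanning(t.size)))
features = spectrum[:64] / spectrum.max()   # normalized feature vector

# A model pre-trained on many motion samples (per the interview) would
# map this vector to {no gesture, reach, swipe}.
print(features.shape)  # (64,) numbers ready for the ML model
```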

What will motion sensors be capable of in the future? 

Brandon told me that he and his team plan to add more gestures to recognize beyond swiping, and that specific movements could be connected to more apps. “In the future, we want to create devices that can understand your body language, so they’re more intuitive to use and more helpful,” he tells me. 

At the moment, motion-sensing technology is focused on the practical, and there are still improvements to be made and new ground to cover, but he says this technology can also be delightful and fun—like on the Pixel’s gesture-controlled Pokémon Live Wallpaper. Overall, motion sensing technology helps you use your devices in a whole new way, and that will keep changing as the tech advances. “We’re just beginning to see the potential of motion sensing,” Brandon says.

Ask a Techspert: What is machine learning?


Imagine you’re going to the grocery store to buy ice cream. If you’re an ice cream lover like me, this probably happens regularly. Normally, I go to the store closest to my home, but every so often I opt to go to a different one, in search of my ice-cream white whale: raspberry chocolate chip. 

When you’re in a new store searching for your favorite-but-hard-to-find flavor of ice cream, you might not know exactly where it is, but you’ll probably know that you should head toward the refrigerators, that it’s in the aisle labeled frozen foods, and that it’s probably not in the same section as the frozen pizza.

My ability to find ice cream in a new store is not instinctive, even though it feels like it. It is the result of years of memories navigating the many sections and aisles of different grocery stores, using visual cues like refrigerators or aisle signs to figure out if I am on the right track. 

Today, when we hear about “machine learning,” we’re actually talking about how Google teaches computers to use existing information to answer questions like: Where is the ice cream? Or, can you tell me if my package has arrived on my doorstep? For this edition of Ask a Techspert, I spoke with Rosie Buchanan, who is a senior software engineer working on Machine Perception within Google Nest. 

She not only helped explain how machine learning works, she also told me that starting today, Nest Aware subscribers can receive a notification when their Nest Hello, using machine learning, detects that a package has been delivered. 

What is machine learning? 

I’ll admit: Rosie came up with the food metaphor. She told me that when you’re looking for something to eat, you have a model in your head. “You learn what to eat by seeing, smelling, touching and by using your prior experience with similar things,” she says. “With machine learning, we’re teaching the computer how to do something, often with better accuracy than a person, based on past understanding.” 

How do you get a machine to learn? 

Rosie and her team teach machines through supervised learning. To help Nest cameras identify packages, they use data that they know contains the “right answers,” which in this case are photos of packages. They then feed these data sets into the computer so that it can create an algorithmic model based on the images they provided. This is called a training job, and it requires hundreds of thousands of images. “Over time, the computer is able to independently identify a delivered package without assistance,” Rosie says. 
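As an illustration, here’s a minimal sketch of a supervised training job, assuming PyTorch. A real package detector learns from hundreds of thousands of labeled photos; here random tensors stand in for images, with label 1 meaning “package” and 0 meaning “no package.”

```python
# A minimal supervised-learning sketch: fit a tiny model to labeled
# examples. Data and model are stand-ins, not Nest's actual system.
import torch
import torch.nn as nn

images = torch.rand(256, 3, 32, 32)           # stand-in photos
labels = torch.randint(0, 2, (256,)).float()  # the known "right answers"

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):   # each pass nudges the model toward the answers
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), labels)
    loss.backward()      # learn from the labeled examples
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```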

How do you figure out what to make a machine learn? 

Rosie told me that package detection was one of the most requested features from Nest Hello users. “In particular, we’re trying to solve problems based on what users want,” she says. “Home safety and security is a huge area for our users.” By bringing package delivery notifications to Nest Aware, Rosie and her team have found a use for machine learning that eliminates the tedious task of waiting around for your delivery. 

Do you need a massive supercomputer to do machine learning? 

That depends on whether you’re creating a machine learning model or using it. If you’re a developer like Rosie, you’ll need some powerful computers. But if you want to see whether there’s a package on your doorstep, you don’t need more than a video doorbell. “When engineers develop a machine learning model, it can take a ton of computing power to teach it what it needs to know,” Rosie says. “But once it’s trained, a machine learning model doesn’t necessarily take up a lot of space, so it can run basically anywhere, like in your smart doorbell.”
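Here’s a minimal sketch of why that works, again assuming PyTorch and reusing the toy model from above: training takes many passes over lots of data, but the finished artifact is just the learned weights, which can be small enough to run a single camera frame through on a doorbell-sized device.

```python
# A minimal sketch of "big to train, small to run": the trained model
# is just its weights, and inference needs no gradients or optimizer.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 16),
                      nn.ReLU(), nn.Linear(16, 1))
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params} weights, ~{n_params * 4 / 1024:.0f} KB as 32-bit floats")

frame = torch.rand(1, 3, 32, 32)         # one stand-in doorbell frame
with torch.no_grad():                    # inference only: cheap to run
    score = torch.sigmoid(model(frame))
print(f"package probability: {score.item():.2f}")
```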

Can machines understand some things that we humans can’t? 

According to Rosie, yes. “We can often describe the things we’re learning,” she says, “but there are things we can’t describe, and machines are good at understanding these observations.” It’s called black box learning: We can tell the model is learning something but we can’t quite tell what it is. 

A great example of this is when a package arrives at your doorstep. Rosie’s team shows the network lots of pictures of packages, and lots of pictures of other things (trees, dogs, bananas, you name it). They tell the network which images are packages and which ones are not. The network is made up of different nodes, each trying to learn how to identify a package on its own. One node might learn that many packages are brown, and another might notice that many are rectangular. 

“These nodes work together to start putting together a concept of what a package is, eventually coming up with a concept of ‘packageness’ that we as humans might not even understand,” Rosie says. “At the end, we don't actually know exactly what the network learned as its definition of ‘packageness,’ whether it's looking for a brown box, a white bag or something else.” With machine learning, teams can show a network a new picture and it may tell us there’s a package in it, but we can’t fully know exactly how it made that decision. 

What’s the best part about working on machine learning? 

Rosie, who’s been at Google for over five years, says it’s all about working on the unknown. “We get to work on problems that we don’t know are actually solvable,” she says. “It’s exciting to get started on something while knowing that it might not be feasible.” 

So will machine learning be able to identify that raspberry chocolate chip is the best flavor of ice cream ever created? Probably not. We’ll still need human knowledge to confirm that. But machine learning will help us in other ways, like waiting around for a package to be delivered so you can take that precious time to peruse the frozen foods section. 

Ask a Techspert: How does Wi-Fi actually work?


How do you define a best friend? Is it someone who understands your needs? Or maybe it’s the person who is there through your ups and downs. Or, perhaps, does it require a special ability to allow your electronic devices to connect to the web without cords? 

While there aren’t many people who immediately consider wireless routers their bestie, according to a recent study commissioned by Google and conducted by Kelton Research, 57 percent of respondents say their Wi-Fi is like their best friend. In fact, 25 percent compared Wi-Fi to their significant other, and 68 percent said they’d be lonelier without Wi-Fi. And respondents said they’d rather suffer annoying situations like long lines at the DMV than deal with spotty Wi-Fi connections. 

Certainly, Wi-Fi is part of our daily lives, but how does it actually work? For this edition of “Ask A Techspert,” I spoke with Sanjay Noronha, a product manager at Google Nest and our resident expert on Wi-Fi and routers, to learn more about how the technology behind Wi-Fi works and about the future of home networks.

How does Wi-Fi even work? 

“It’s like listening to the radio, but two-way. Instead of just receiving sound like we do with AM or FM, Wi-Fi also lets you send data, like an email or a post to social media,” Sanjay told me. “Wi-Fi sends the data over radio waves quickly and reliably so that the thing you’re trying to do, or video you’re trying to stream, or game you’re trying to play, happens in a seamless way so you’re not stuck to your wall with an ethernet cable.” 

Wi-Fi operates on 2.4 GHz and 5 GHz radio frequencies. Think of those numbers like tuning your car radio to 97.9 FM to hear your favorite station. Except you don’t actually need to set anything yourself. Your Wi-Fi router decides which radio station to put your devices on so you can watch YouTube videos on your smartphone or take a video call while moving around your house. Multiple Wi-Fi networks can exist on the same frequencies, which is why you might see your neighbors’ networks when you try to connect on your device. (And respondents to our survey know this well: 13 percent said they have tried to connect to another network in their area, and 5 percent have asked their neighbors if they could tap into their Wi-Fi.)
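To push the radio-station analogy a little further, each Wi-Fi channel is just a center frequency your router can tune to. Here’s a minimal sketch of the standard 2.4 GHz channel plan: channels 1 through 13 are spaced 5 MHz apart starting from 2407 MHz, and channel 14 (permitted only in Japan) sits at 2484 MHz.

```python
# A minimal sketch of the 2.4 GHz Wi-Fi "station dial": channel number
# to center frequency, per the standard channel plan.
def channel_to_mhz(channel: int) -> int:
    if channel == 14:
        return 2484                 # special case, Japan only
    if 1 <= channel <= 13:
        return 2407 + 5 * channel   # 5 MHz spacing from 2407 MHz
    raise ValueError("not a 2.4 GHz Wi-Fi channel")

for ch in (1, 6, 11):  # the classic non-overlapping channels
    print(f"channel {ch}: {channel_to_mhz(ch)} MHz")
```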

Why does my Wi-Fi slow down at certain times? 

The overwhelming majority (81 percent) of router users in our survey have experienced issues with their home Wi-Fi. Among people who experience issues, half reported dealing with a slow connection, and 43 percent reported slower speeds during certain times of day. 

I live in New York City and sometimes, particularly at night, my Wi-Fi gets particularly slow. And that’s because other New Yorkers are trying to stream their favorite TV shows, too. “That’s Wi-Fi congestion,” Sanjay told me. “If you have multiple Wi-Fi networks operating at once in the same area, they’re all using the same frequency ranges.” 

But if you use Google Wifi, there’s a way to avoid that problem. Wi-Fi was originally built for only 2.4 GHz; newer Wi-Fi technology later added 5 GHz channels. (If you see a wireless network with the number 5 at the end, that’s what that means.) That means you sometimes have to pick which one to connect to when you’re online. But with Google Wifi, the experience is simplified: users just connect to one network and are automatically moved between bands with a technology called “band steering.” Google Wifi also seamlessly selects the Wi-Fi frequencies it uses depending on the congestion, so you can binge-watch without interruption. 
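For the curious, here’s a minimal, hypothetical sketch of what a band-steering decision could look like. This is not Google Wifi’s actual algorithm, just the general trade-off it navigates: 5 GHz is typically faster but shorter-range, while 2.4 GHz travels farther.

```python
# A hypothetical band-steering sketch (not Google Wifi's algorithm):
# pick a band from each band's congestion and the client's 5 GHz signal.
def pick_band(congestion_2g: float, congestion_5g: float,
              signal_5g_dbm: float) -> str:
    if signal_5g_dbm < -75:          # too far away for a good 5 GHz link
        return "2.4 GHz"
    return "5 GHz" if congestion_5g <= congestion_2g else "2.4 GHz"

print(pick_band(congestion_2g=0.8, congestion_5g=0.3, signal_5g_dbm=-60))
# -> "5 GHz": less crowded, and the device is close enough to the router
```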

How come some parts of my home get better Wi-Fi? 

According to Sanjay, that depends on your router. “A single router is like a lightbulb,” he says, noting a lightbulb has a limited range of light, and a router has a limited range of signal. “Just like you have multiple lightbulbs throughout your house, we want to make it easy for you to put in multiple routers.” 

Google Wifi is “mesh technology,” and it enables you to get better Wi-Fi by putting additional Wi-Fi routers throughout your home. So it’s like having multiple lightbulbs in your house, instead of expecting one lightbulb by your front door to illuminate your attic. Having a mesh system helps spread Wi-Fi signals throughout your home, wherever you’re using Wi-Fi. 

“Even though Wi-Fi has been around for many years, many people still experience Wi-Fi that cuts out,” Sanjay says. “We’re applying our years of experience to make Wi-Fi even more accessible everywhere in your home, not just in the room with the router.” 


Even though Wi-Fi might be like your best friend, some people have an odd way of showing it. According to our study, router users go to great lengths to hide their routers: over two in five confess they’ve attempted to hide their networking device because of its appearance. So, we designed Google Wifi to look different from a traditional router. Instead of clunky cords and external antennas, Google Wifi is sleek and compact, so you may not mind having it hang out on your counter or shelf for the best connection possible. That way you can hang out with your best friend, anywhere in the house, without worrying about making the place look neat.