Tag Archives: Ask a Techspert

Ask a Techspert: What’s a subsea cable?

Whenever I try to picture the internet at work, I see little pixels of information moving through the air and above our heads in space, getting where they need to go thanks to 5G towers and satellites in the sky. But it’s a lot deeper than that — literally. Google Cloud’s Vijay Vusirikala recently talked with me about why the coolest part of the internet is really underwater. So today, we’re diving into one of the best-kept secrets in submarine life: There wouldn’t be an internet without the ocean.

First question: How does the internet get underwater?

We use something called a subsea cable that runs along the ocean floor and transmits bits of information.

What’s a subsea cable made of?

These cables are about the same diameter as the average garden hose, but on the inside they contain thin optical fibers. Those fibers are surrounded by several layers of protection, including two layers of ultra-high strength steel wires, water-blocking structures and a copper sheath. Why so much protection? Imagine the pressure they are under. These cables are laid directly on the sea bed and have tons of ocean water on top of them! They need to be super durable.

Two photographs next to each other, the first showing a cable with outer protection surrounding it. The second photograph shows a stripped cable with copper wires and optical fibers inside.

A true inside look at subsea cables: On the left, a piece of the Curie subsea cable showing the additional steel armoring for protection close to the beach landing. On the right, a cross-sectional view of a typical deep water subsea cable showing the optical fibers, copper sheath, and steel wires for protection.

Why are subsea cables important?

Subsea cables are faster, can carry higher traffic loads and are more cost effective than satellite networks. Subsea cables are like a highway with the right number of lanes to handle rush-hour traffic without getting bogged down in standstill jams. They combine high bandwidth (upwards of 300 to 400 terabits of data per second) with low lag time. To put that into context, 300 to 400 terabits per second is roughly the same as 17.5 million people streaming high quality videos — at the same time!
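
If you want to sanity-check that comparison, here’s a quick back-of-the-envelope calculation. The 20 megabits per second per stream is my own assumption for a high-quality video, not an official figure.

```python
# Back-of-envelope check of the capacity claim above. The per-stream bitrate
# is an assumed value for a high-quality video stream, not a Google figure.
cable_capacity_tbps = 350        # midpoint of the 300-400 terabits-per-second range
stream_bitrate_mbps = 20         # assumed bitrate of one high-quality video stream

capacity_bps = cable_capacity_tbps * 1e12
stream_bps = stream_bitrate_mbps * 1e6

concurrent_streams = capacity_bps / stream_bps
print(f"{concurrent_streams / 1e6:.1f} million simultaneous streams")  # ~17.5 million
```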

So when you send a customer an email, share a YouTube video with a family member or talk with a friend or coworker on Google Meet, these underwater cables are like the "tubes" that deliver those things to the recipient.

Plus, they help increase internet access in places that have had limited connectivity in the past, like countries in South America and Africa. This leads to job creation and economic growth in the places where they’re constructed.

How many subsea cables are there?

There are around 400 subsea cables criss-crossing the planet in total. Currently, Google invests in 19 of them — a mix of cables we build ourselves and projects we’re a part of, where we work together with telecommunications providers and other companies.

Video introducing Curie, a subsea cable.

Wow, 400! Does the world need more of them?

Yes! Telecommunications providers alongside technology companies are still building them around the world. At Google, we invest in subsea cables for a few reasons: One, our Google applications and Cloud services keep growing. This means more network demand from people and businesses in every country around the world. And more demand means building more cables and upgrading existing ones, which have less capacity than their modern counterparts.

Two, you cannot have a single point of failure when you're on a mission to connect the world’s information and make it universally accessible. Repairing a subsea cable that goes down can take weeks, so to guard against this we place multiple cables in each cross section. This gives us sufficient extra cable capacity so that services aren’t affected for people around the world.

What’s your favorite fact about subsea cables?

Three facts, if I may!

First, I love that we name many of our cables after pioneering women, like Curie for Marie Curie, which connects California to Chile, and Grace Hopper, which links the U.S., Spain and the U.K. Firmina, which links the U.S., Argentina, Brazil and Uruguay, is named after Brazil’s first novelist, Maria Firmina dos Reis.

Second, I’m proud that the cables are kind to their undersea homes. They’re environmentally friendly and are made of chemically inactive materials that don't harm the flora and fauna of the ocean, and they generally don’t move around much! We’re very careful about where we place them; we study each beach’s marine life conditions and we adjust our attachment timeline so we don’t disrupt a natural lifecycle process, like sea turtle nesting season. For the most part they’re stationary and don't disrupt the ocean floor or marine life. Our goal is to integrate into the underwater landscape, not bother it.

And lastly, my favorite fact is actually a myth: Most people think sharks regularly attack our subsea cables, but I’m aware of exactly one shark attack on a subsea cable that took place more than 15 years ago. Truly, the most common problems for our cables are caused by people doing things like fishing, trawling (which is when a fishing net is pulled through the water behind a boat) and anchor drags (when a ship drifts without holding power even though it has been anchored).

Ask a Techspert: What exactly is a time crystal?

Quantum computers will allow us to tackle hard computations and help us rethink our understanding of the fundamentals of science. That’s because quantum computers harness the power of quantum mechanics — the subfield of physics that explains how the natural world around us works at the subatomic level. While we are a long way off from building a useful quantum computer, our team at Google Quantum AI is already running novel experiments on the quantum processors we have today. One such experiment, just published in the science journal Nature, is our work on a new phase of matter called a time crystal.

For years, scientists have theorized about the possibility of a time crystal and wondered whether one could ever be observed. By using our quantum processor, Sycamore, we now know it’s possible. Below, Google Quantum AI research scientists Pedram Roushan and Kostyantyn Kechedzhi answer some common questions about this phenomenon.

What is a time crystal?

A time crystal may sound like it's from the pages of a science fiction novel, but it’s something that we’ve demonstrated is possible to observe, even though it may appear to go against the basic laws of nature. You might be familiar with crystals like emerald, diamond and salt. At the microscopic level, they’re made up of repeating patterns — many layers of atoms that ultimately form a physical structure. For example, a grain of salt is made up of sodium and chlorine atoms. A time crystal is similar, but instead of forming a repetitive pattern in space, an oscillating pattern is formed in time.

An artistic representation of a time crystal, shown as a pattern on the faces of a 20-sided object. The pattern changes from one instant in time to the next, but repeats itself, and the oscillation continues indefinitely.

Time crystals show an oscillating pattern in time.

Can you give an example of a time crystal?

Let’s say you used the Hubble Telescope to take a picture of a planet and its moon each time the moon completed an orbit. These pictures would all look the same, with the moon repeating its orbit over and over again. Now hypothetically, let’s say hundreds of new moons were added around the planet. Each new moon would exert gravitational pull on the orbits of the others. Over time, the moons would start to deviate from their orbits without ever coming back to their starting points. This increase in disorder, or entropy, is unavoidable due to the second law of thermodynamics, a fundamental law of physics. What if there were a system of a planet and many moons where the moons could periodically repeat their orbits, without ever increasing entropy? This configuration — evidently hard to achieve — would be considered a time crystal.

How do you use a quantum processor to observe a time crystal?

Quantum objects behave like waves, similar to how sonar uses sound waves reflected from solid objects on the ocean floor to detect them. If the medium that the quantum wave travels through contains multiple objects at random locations in space, then the wave could be confined and come to a complete stop. This key insight about quantum waves is what puts a cap on the spread of entropy and allows the production of a stable time crystal, even though it appears to be at odds with the second law of thermodynamics. This is where our quantum processor comes in. In our paper, we describe how we used our Sycamore processor as a quantum system to observe these oscillatory wave patterns of stable time crystals.

Our quantum processor, Sycamore, is made up of two chips. The top chip contains qubits and the bottom contains the wiring.

We observed a time crystal using Sycamore, our quantum processor.

Now that time crystals have been observed for the first time, what’s next?

Observing a time crystal shows how quantum processors can be used to study novel physical phenomena that have puzzled scientists for years. Moving from theory to actual observation is a critical leap and is the foundation for any scientific discovery. Research like this opens the door to many more experiments, not only in physics, but hopefully inspiring future quantum applications in many other fields.

Ask a Techspert: How do Nest Cams know people from pets?

The other day when I was sitting in my home office, I got an alert from my Nest Doorbell that a package had been delivered — and right from my phone, I could see it sitting on the porch. Moments later, my neighbor dropped by to return a piece of mail that had accidentally gone to her — and again, my Doorbell alerted me. But this time, it alerted me that someone (rather than something) was at the door. 

When I opened my door and saw my neighbor standing next to the package, I wondered…how does that little camera understand the world around it? 

For an answer, I turned to Yoni Ben-Meshulam, a Staff Software Engineer who works on the Nest team. 

Before I ask you how the camera knows what’s a person and what’s a vehicle, first I want to get into how they detect anything at all?

Our cameras run something called a perception algorithm which detects objects (people, animals, vehicles, and packages) that show up in the live video stream. For example, if a package is delivered within one of your Activity Zones, like your porch, the camera will track the movement of the delivery person and the package, and analyze all of this to give you a package delivery notification. If you have Familiar Face Alerts on and the camera detects a face, it analyzes the face on-device and checks whether it matches anyone you have identified as a Familiar Face. And the camera recognizes new faces as you identify and label them.

The camera also learns what its surroundings look like. For example, if you have a Nest Cam in your living room, the camera runs an algorithm that can identify where there is likely a TV, so that the camera won’t think the people on the screen are in your home. 

Perception algorithms sound a little like machine learning. Is ML involved in this process?

Yes — Nest cameras actually have multiple machine learning models running inside them. One is an object detector that takes in video frames and outputs bounding boxes around objects of interest, like a package or vehicle. This object detector was trained on millions of examples.
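
To make that output a bit more concrete, here’s a small, hypothetical Python sketch of how bounding-box detections could be turned into the kinds of alerts described above. The class names, confidence threshold and zone logic are illustrative assumptions, not Nest’s actual code.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical shapes for illustration only; Nest's real pipeline is not public.
@dataclass
class Detection:
    label: str                      # e.g. "person", "package", "animal", "vehicle"
    confidence: float               # 0.0-1.0 score from the object detector
    box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels

def in_zone(box, zone) -> bool:
    """True if the center of the bounding box falls inside an activity zone."""
    x = (box[0] + box[2]) / 2
    y = (box[1] + box[3]) / 2
    zx_min, zy_min, zx_max, zy_max = zone
    return zx_min <= x <= zx_max and zy_min <= y <= zy_max

def notifications(detections: List[Detection], porch_zone, threshold=0.7) -> List[str]:
    alerts = []
    for det in detections:
        if det.confidence < threshold:
            continue  # ignore low-confidence guesses
        if det.label == "package" and in_zone(det.box, porch_zone):
            alerts.append("Package delivered in your porch zone")
        elif det.label == "person":
            alerts.append("Someone is at the door")
    return alerts

# Example frame: one person and one package detected near the porch.
frame = [Detection("person", 0.92, (40, 60, 120, 300)),
         Detection("package", 0.88, (150, 260, 210, 320))]
print(notifications(frame, porch_zone=(0, 200, 640, 480)))
```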

Nest Cam (battery) in the rain

Is there a difference between creating an algorithm for a security camera versus a “regular” camera?

Yes! A security camera is a different domain. Generally, the pictures you take on your phone are closer and the object of interest is better focused. For a Nest camera, the environment is harder to control.

Objects may appear blurry due to lighting, weather or camera positioning. People usually aren’t posing or smiling for a security camera, and sometimes only part of an object, like a person’s arm, is in the frame. And Nest Cams analyze video in real time, versus some photos applications, which may have an entire video to analyze from start to finish. 

Cameras also see the world in 2D but they need to understand it in 3D. That’s why a Nest Cam may occasionally mistake a picture on your T-shirt for a real event. Finally, a lot of what a security camera sees is boring because our doorsteps and backyards are mostly quiet, and there are fewer examples of activity. That means you may occasionally get alerts where nothing actually happened. In order for security cameras to become more accurate, we need to have more high quality data to train the ML models on—and that’s one of the biggest challenges.

Nest Cam vs. camera photo of dog

On the left, an image of a dog from a Nest Cam feed on a Nest Hub. On the right, a photo of a dog taken with a Pixel phone.

So basically…it’s harder to detect people with a security camera than with a handheld camera, like a phone? 

In a word…yes. A model used for Google Image Search or Photos won't perform well on Nest Cameras, because the images used to train it were probably taken on handheld cameras, and those images are mostly centered and well-lit, unlike the ones a Nest Camera has to analyze.

A synthetic room with synthetic cats

Here's an example of a synthesized image, with bounding boxes around synthetic cats

So, we increased the size and diversity of the datasets we use for security cameras. Then, we added synthesized data — which ranges from creating a fully simulated world to putting synthetic objects on real backgrounds. With full simulation, we were able to create a game-like world where we could manipulate room layout, object placement, lighting conditions, camera placement and more to account for the many settings our cameras are installed in. Over the course of this project, we created millions of images — including 2.5 million synthetic cats!
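
Here’s a rough idea of what “putting synthetic objects on real backgrounds” can look like, as a minimal sketch using the Pillow imaging library. The file names are placeholders and the placement logic is made up; it isn’t the team’s actual pipeline.

```python
from PIL import Image
import random

# Minimal sketch: paste a rendered cat (with transparency) onto a real backyard
# photo and record the bounding box as a training label.
# File names below are placeholders, not real assets.
background = Image.open("backyard.jpg").convert("RGB")
cat = Image.open("synthetic_cat.png").convert("RGBA")

# Random placement and scale, roughly simulating different camera distances.
scale = random.uniform(0.3, 0.8)
cat = cat.resize((int(cat.width * scale), int(cat.height * scale)))
x = random.randint(0, max(0, background.width - cat.width))
y = random.randint(0, max(0, background.height - cat.height))

background.paste(cat, (x, y), mask=cat)  # alpha channel keeps the edges clean
label = {"class": "animal", "box": [x, y, x + cat.width, y + cat.height]}

background.save("composited_example.jpg")
print(label)
```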

We also use common-sense rules when developing and tuning our algorithms — for example, heads are attached to people!

Our new cameras and doorbells also have hardware that can make the most of the improved software and they do machine learning on-device, rather than in the cloud, for added privacy. They have a Tensor Processing Unit (TPU) with 170 times more compute than our older devices—a fancy way of saying that the new devices have more accurate, reliable and timely alerts. 

So, does this mean Nest Cam notifications are accurate 100% of the time? 

No — we use machine learning to ensure Nest Cam notifications are very accurate, but the technology isn’t always perfect. Sometimes a camera could mistake a child crawling around on all fours as an animal, a statue may be confused with a real person, and sometimes the camera will miss things. The new devices have a significantly improved ability to catch previously missed events, but improving our models over time is a forever project.

One thing we’re working on is making sure our camera algorithms take data diversity into account across different genders, ages and skin tones with larger, more diverse training datasets. We’ve also built hardware that can accommodate these datasets, and process frames on-device for added privacy. We treat all of this very seriously across Google ML projects, and Nest is committed to the same.

Ask a Techspert: How do you build a chatbot?

Chatbots have become a normal part of daily life, from that helpful customer service pop-up on a website to the voice-controlled system in your home. As a conversational AI engineer at Google, Lee Boonstra knows everything about chatbots. When the pandemic started, many of the conferences she spoke at were canceled, which gave Lee the time to put her knowledge into book form. She started writing while she was pregnant, and now, along with her daughter Rebel, she has this book: The Definitive Guide to Conversational AI With Dialogflow and Google Cloud.


Lee, who lives and works in Amsterdam, is donating the proceeds of her royalties to Stichting Meer dan Gewenst, a nonprofit organization that helps people in the LGBTQ+ community who want to have children. The charity is close to her heart; as an LGBTQ+ parent herself, she wants others like her to have a chance at the joy she feels with her daughter. 


The book itself is for anyone interested in using chatbots, from developers to project managers and CEOs. Here she speaks to The Keyword about the art (and science) behind building a chatbot. 


What exactly is a chatbot?

A chatbot is a piece of software designed to simulate online conversations with people. Many people know chatbots as a chat window that appears when you open a website, but there are more forms — for instance, there are chatbots that answer questions via social media, and the voice of the Google Assistant is a chatbot. Chatbots have been around since the early days of computing, but they’ve only recently become more mainstream. That has everything to do with machine learning and natural language understanding.

Old-school chatbots required you to formulate your sentences carefully. If you said things differently, the chatbot wouldn't know how to answer. If you made a spelling mistake, the bot would run amok! But there are many different ways to say something. A chatbot built with natural language understanding can understand a specific piece of text and then retrieve a specific answer to your question. It doesn't matter if you spell it wrong or say things differently. 


What benefits can the use of chatbots offer companies?
A chatbot works quickly, knows (almost) everything and is available 24/7. That basically makes it the ideal customer service representative. The customer no longer has to wait, the company saves money and the employees experience less stress. As a customer, you get a chatbot on the phone that listens to your question and can answer like a human thanks to speech technology. This way, most customers already receive the answers they need. If the chatbot doesn’t know the answer, it can transfer them to an employee. The customer will not be prompted for information again, as the agent will see that the chat history and system fields are already filled.


Companies are finding more and more ways to use chatbots. For example, since the advent of artificial intelligence, KLM Royal Dutch Airlines has been handling twice as many questions from customers via social media. And technical developer Doop built a Google Assistant Action in the Netherlands in collaboration with AVROTROS, specifically for the Eurovision Song Contest. Anyone who asks for information about the Eurovision Song Contest will hear a chatbot with the voice of presenter Cornald Maas talk about the show. 


How do you build a chatbot?
You can build a chatbot using the Dialogflow tool and other services on the Google Cloud platform. Dialogflow is a tool in your web browser that allows you to build chatbots by entering examples. For example, if you already have a FAQ section on your website, that's a good start. With Dialogflow you can edit the content of that Q&A and then train the chatbot to find answers to questions that customers often ask. Dialogflow learns from all the conversation examples so that it can provide answers.


But just like building a website, you probably need more resources, such as a place to host your chatbot and a database to store your data. You may also want to use additional machine learning models so that your chatbot can do things like detect the content of a PDF or the sentiment of a text. Google Cloud has more than 200 products available for this. It's just like playing with blocks: by stacking all these resources on top of each other, you build a product and you improve the experience, for yourself and for the customer.
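
To make this concrete, here’s roughly what it looks like to send a user’s message to a Dialogflow agent from your own Python code, using the google-cloud-dialogflow client library and assuming you’ve already built and deployed an agent. The project and session IDs are placeholders, and a real integration would add authentication setup and error handling.

```python
from google.cloud import dialogflow

def detect_intent_text(project_id: str, session_id: str, text: str,
                       language_code: str = "en"):
    """Send one user utterance to a Dialogflow agent and return its reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    return result.intent.display_name, result.fulfillment_text

if __name__ == "__main__":
    # Placeholder project and session IDs for illustration.
    intent, reply = detect_intent_text("my-gcp-project", "test-session-123",
                                       "What are your opening hours?")
    print(intent, "->", reply)
```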


Do you have any tips for getting started?
First things first: Start building the chatbot as soon as possible. Many people dread this, because they think it's hugely complex, but it’s better to just get going. You will need to keep track of the conversations and keep an eye on the statistics; what do customers ask and what do they expect? Building a chatbot is an ongoing project. The longer a chatbot lasts, the more data is collected and the smarter and faster it becomes. 


In addition, don’t build a chatbot just for one specific channel. What you don’t want is to have to build a chatbot for another channel next year and replicate the work. In a large company, teams often want to build a chatbot, but different chat channels are important to different departments. As a company you want to be present on all of those channels, whether that’s your website, social media, the telephone or WhatsApp. Build an integrated bot so there’s no duplication of work and maintenance is much easier.


How do chatbots make life easier for people?
Many of the frustrations that you experience with traditional customer services, such as limited opening hours for contact by phone, waiting times and incomprehensible menus, can be removed with chatbots. People do find it important to know whether they are interacting with a human being or a chatbot, but, interestingly, a chatbot is more likely to be forgiven for making a mistake than a human. People might also have a specific preference for human interaction or a chatbot when discussing more sensitive topics like medical or financial issues, either because they want to have personal, human contact or they would rather not discuss a topic with a human being because they don’t feel comfortable doing so. Chatbots are getting better and better at understanding and interacting, and can be very helpful for interactions about these topics as well. 

Ask a Techspert: What is open source?

When I started working at Google, a colleague mentioned that the group projects I worked on in college sounded a lot like some of the open source projects we do here at Google. I thought there had to be some misunderstanding since my projects all happened in-person with my classmates in the corner of some building in the engineering quad. 

To find out how a real life study group could be like a type of computer software, I went straight to Rebecca Stambler, one of Google’s many open source experts.


Explain your job to me like I’m a first-grader.

Well, to start, computer programs have to be written in a language that computers understand — not in English or any other spoken language. At Google we have our own language called Go. When we write in a language to tell a computer what to do, that’s called source code. Just like you can write an essay or a letter in a Google Doc, you have to write your code in an “editor.” I work on making these editors work well for people who write code in Google’s programming language, Go. 


What does it mean for software to be open source?

A piece of software is considered open source if its source code is made publicly available to anyone, meaning they can freely copy, modify and redistribute the code. Usually, companies want to keep the source code of their products secret, so people can’t copy and reproduce their products. But sometimes a company shares their code publicly so anyone can contribute. This makes software more accessible and builds a community around a project. Anyone can work on an open source project no matter who they are or where they are. 


Anyone can contribute? How do they do it?

Before you actually write open source code, a good first step is to think about what kind of development you’re interested in, whether that’s web development, systems or front-end development. Then you can dive into that community by doing things like attending talks or joining online networks, where you can learn more about what open source projects are out there. Also think about what topics matter to you — maybe it’s the environment, retail, banking or a specific type of web development. Some people write code just because they enjoy it; plenty of these people have contributed code to Google open source projects. So if you’re looking to contribute, make sure it’s something you’re really interested in.

Abstract illustration of three people putting together code.

Many open source projects are hosted on a site called GitHub, so once you narrow down your area of interest, that’s a great place to start! Once you’ve found something you want to work on, the easiest way to get involved is to fix errors in the code that other members of the project have raised but don’t have time to fix themselves. Even if you don’t know how to code, there’s a lot of non-technical work in open source projects, like prioritizing issues that need fixing, community organization or writing user guides. You just have to be passionate about the work and ready to jump in.


What’s the benefit of using open source code to create something?

We need lots of diverse perspectives to build good software, and open source helps with that. If you’re building something with a small team of three people, you might not consider all of the different ways someone might use your product. Or maybe your team doesn’t have the best equipment. Open source enables people from all over the world with different use cases, computers and experiences to chime in and say “hey, this doesn’t actually work for me” or “running this software drains my battery.” Without having open source projects, I don’t think we could make products that work for everyone. 

Projects like Android, which is Google’s operating system for mobile devices, are open source. And just last year Google Kubernetes Engine celebrated its fifth anniversary. This was really exciting because it showed how Google engineers contribute to the broader open source community outside of Google. Open source projects build a real sense of community between the contributors. When we have people who work on a lot of our projects, we send them thank-you notes and mention them when we release new software versions. We’ve created a whole community of contributors who’ve made our products more successful and exciting.

Ask a Techspert: What’s a neural network?

Back in the day, there was a surefire way to tell humans and computers apart: You’d present a picture of a four-legged friend and ask if it was a cat or dog. A computer couldn’t identify felines from canines, but we humans could answer with doggone confidence. 

That all changed about a decade ago thanks to leaps in computer vision and machine learning – specifically,  major advancements in neural networks, which can train computers to learn in a way similar to humans. Today, if you give a computer enough images of cats and dogs and label which is which, it can learn to tell them apart purr-fectly. 

But how exactly do neural networks help computers do this? And what else can — or can’t — they do? To answer these questions and more, I sat down with Google Research’s Maithra Raghu, a research scientist who spends her days helping computer scientists better understand neural networks. Her research helped the Google Health team discover new ways to apply deep learning to assist doctors and their patients.

So, the big question: What’s a neural network?

To understand neural networks, we need to first go back to the basics and understand how they fit into the bigger picture of artificial intelligence (AI). Imagine a Russian nesting doll, Maithra explains. AI would be the largest doll, then within that, there’s machine learning (ML), and within that, neural networks (... and within that, deep neural networks, but we’ll get there soon!).

If you think of AI as the science of making things smart, ML is the subfield of AI focused on making computers smarter by teaching them to learn, instead of hard-coding them. Within that, neural networks are an advanced technique for ML, where you teach computers to learn with algorithms that take inspiration from the human brain.

Your brain fires off groups of neurons that communicate with each other. In an artificial neural network (the computer type), a “neuron” (which you can think of as a computational unit) is grouped with a bunch of other “neurons” into a layer, and those layers stack on top of each other. Between each of those layers are connections. The more layers a neural network has, the “deeper” it is. That’s where the idea of “deep learning” comes from. “Neural networks depart from neuroscience because you have a mathematical element to it,” Maithra explains. “Connections between neurons are numerical values represented by matrices, and training the neural network uses gradient-based algorithms.”
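
If you’d like to see what that “mathematical element” looks like, here’s a tiny sketch of layers of neurons connected by weight matrices. The layer sizes and random weights are arbitrary toy values, not anything from a real model.

```python
import numpy as np

# Each layer's connections are a matrix of weights; a forward pass is just
# matrix multiplication plus a nonlinearity. Sizes here are a toy example:
# a 4-number input, two hidden layers, and two output scores (cat vs. dog).
rng = np.random.default_rng(0)

layer_sizes = [4, 8, 8, 2]
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)   # "neurons" fire through a ReLU nonlinearity
    return x @ weights[-1]         # raw scores for each class

print(forward(np.array([0.1, 0.8, 0.3, 0.5])))
```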

This might seem complex, but you probably interact with neural networks fairly often — like when you’re scrolling through personalized movie recommendations or chatting with a customer service bot.

So once you’ve set up a neural network, is it ready to go?

Not quite. The next step is training. That’s where the model becomes much more sophisticated. Similar to people, neural networks learn from feedback. If you go back to the cat and dog example, your neural network would look at pictures and start by randomly guessing. You’d label the training data (for example, telling the computer whether each picture features a cat or a dog), and those labels would provide feedback, telling the neural network when it’s right or wrong. Throughout this process, the neural network’s parameters adjust, and it goes from guessing randomly to learning how to tell cats and dogs apart.
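
And here’s an equally tiny sketch of that feedback loop, using a single-layer model (about the simplest possible “network”) and made-up cat-versus-dog data. It isn’t how production models are trained, but the loop of guessing, comparing to the label and nudging the parameters is the same idea.

```python
import numpy as np

# Toy feedback loop: the model guesses, the label says how wrong it was,
# and a gradient step nudges the parameters. Data is synthetic: two made-up
# measurements per "photo", label 1 = dog, 0 = cat.
rng = np.random.default_rng(1)
n = 200
labels = rng.integers(0, 2, n).astype(float)
features = rng.normal(size=(n, 2)) + labels[:, None] * np.array([2.0, -1.5])

w = np.zeros(2)
b = 0.0
lr = 0.1

for step in range(500):
    logits = features @ w + b
    probs = 1 / (1 + np.exp(-logits))           # the model's current guesses
    error = probs - labels                      # feedback: how wrong each guess was
    w -= lr * features.T @ error / n            # adjust parameters toward the labels
    b -= lr * error.mean()

accuracy = ((probs > 0.5) == labels).mean()
print(f"training accuracy after feedback: {accuracy:.0%}")
```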

Why don’t we use neural networks all the time?

“Though neural networks are based on our brains, the way they learn is actually very different from humans,” Maithra says. “Neural networks are usually quite specialized and narrow. This can be useful because, for example, it means a neural network might be able to process medical scans much quicker than a doctor, or spot patterns  a trained expert might not even notice.” 

But because neural networks learn differently from people, there's still a lot that computer scientists don’t know about how they work. Let’s go back to cats versus dogs: If your neural network gives you all the right answers, you might think it’s behaving as intended. But Maithra cautions that neural networks can work in mysterious ways.

“Perhaps your neural network isn’t able to identify between cats and dogs at all – maybe it’s only able to identify between sofas and grass, and all of your pictures of cats happen to be on couches, and all your pictures of dogs are in parks,” she says. “Then, it might seem like it knows the difference when it actually doesn’t.”

That’s why Maithra and other researchers are diving into the internals of neural networks, going deep into their layers and connections, to better understand them – and come up with ways to make them more helpful.

“Neural networks have been transformative for so many industries,” Maithra says, “and I’m excited that we’re going to realize even more profound applications for them moving forward.”

Ask a Techspert: How can we fight energy rush hours?

Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party.

Returning from a weekend trip this past winter, my husband and I watched in real time as our security camera cut to black and our Nest app reported the thermostat had lost power. The entire neighborhood had no electricity...thanks to an ice storm that caused a tree in our very own backyard to fall. We returned to a dark, cold home, which stayed that way for two days until the power company made their way through downed trees and ice to reconnect us.

Suddenly, the lights turned on, the internet came back and best yet, we heard the gentle whir of the heater. We blasted the heat — and I have to imagine the homes around us did, too. That likely created an “energy rush hour,” something the Nest team is working on reducing through its Rush Hour Rewards program, which works with utility companies to reward you for saving energy using your Thermostat. Nest is currently celebrating Earth Day with a discount: You can get the Nest Thermostat for $99, which coupled with utility rebates could make the thermostat free for people in certain areas. 

But what exactly creates or constitutes an energy rush hour? And what role do utility companies play? 

I turned to Hannah Bascom, head of energy partnerships for Google Nest. Her job is to find ways for Google to partner with energy companies and services...and this week, to also answer my questions. 

Let’s start with the basics: Tell me about energy rush hours! 

Certain times of the year, especially when it’s very hot or cold, everyone cranks their A/C or heat in addition to all of the usual energy-consuming things we already do, so demand for energy is very high. We call these energy rush hours.

Image showing a hand adjusting a Nest Thermostat on a wall next to a circular mirror.

Then my neighborhood definitely created an energy rush hour this winter during the ice storm. So when everyone cranks their heat or A/C, what do the utility companies do?

When demand for energy spikes, utility companies typically turn on additional power plants — which are often very expensive and emit a lot of carbon dioxide. And as more people need increasing amounts of energy in their homes and businesses, energy rush hours happen more frequently. We’ve seen several examples of brownouts recently — utilities didn’t have enough power to supply everyone, so they had to shut off power in certain places. As extreme weather events become more common this could happen more regularly, so utilities are considering building more power plants, which is costly and could increase carbon emissions.

But it doesn’t have to be that way! Utilities can incentivize customers to use less energy.

How? I can’t imagine not blasting my heat when it was so cold. 

Nest’s Rush Hour Rewards is one way people automatically lower energy use during energy rush hours without being uncomfortable in their homes. Think about using GPS during a traffic jam: You’re sitting on the highway and it reroutes you to side roads to get around the gridlock. You reach the same destination, you just took a slightly different way. Rush Hour Rewards is like that: Nest reroutes your home’s energy usage during times of grid congestion, but you still reach your destination — which in this case is your comfort level.  

When you enroll in the program, your thermostat will use less energy during times of high demand, but you’ll stay comfortable. And you get rewarded by your utility company because they don’t have to fire up additional generators. That reward could come in the form of bill credits or a check in the mail. You may even be able to get an instant discount on a Nest Thermostat from your utility provider. Just search for your utility and “Nest Thermostat” to find discounts.

How many customers using Rush Hour Rewards does it take to offset a power plant?

It definitely depends on the scenario but here’s one example: There are lots of peaker plants — the kind of power plant a utility would bring online during an energy rush hour — that are 50 megawatts in size, which is equivalent to only 50,000 thermostats participating in an event. Most major sports arenas hold more people than that!
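
Here’s that arithmetic spelled out. The one-kilowatt-per-thermostat figure is simply what the 50-megawatt, 50,000-thermostat example implies, not an official number.

```python
# Rough arithmetic behind the comparison above. The implied avoided demand is
# about 1 kW per participating thermostat (an assumption drawn from the
# 50 MW / 50,000-thermostat example, not an official figure).
peaker_plant_mw = 50
avoided_kw_per_thermostat = 1.0

thermostats_needed = peaker_plant_mw * 1000 / avoided_kw_per_thermostat
print(f"{thermostats_needed:,.0f} thermostats roughly match one "
      f"{peaker_plant_mw} MW peaker plant")
```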

How does the Nest Thermostat know when an energy rush hour is coming up?

Your energy company, or sometimes another entity that manages your electric grid, monitors weather conditions and forecasts electricity demand. When they predict demand will be high, they call a rush hour. Rush hours can also happen during grid emergencies, like when power plants suddenly go offline due to mechanical failure or extreme weather.

Another fun fact is that virtual power plants help balance renewables like solar and wind on the grid. 

What’s a virtual power plant?

A virtual power plant is what’s created when a bunch of different sources — like home batteries and smart thermostats — come together to help the grid like a power plant would. Because energy output from these sources varies based on things like cloud cover and wind speed, “mini” energy rush hours occur more frequently when there isn’t quite enough energy supply to meet demand. People who participate in Rush Hour Rewards can help balance the grid demand with energy supply. 

How does the Nest thermostat know what temperature is enough to keep me warm or cool but also enough to make a difference during an energy rush hour? 

Your Nest thermostat is very smart! It learns from your use what temperatures keep you comfortable and will make slight adjustments to those settings during or even before rush hours. For example, Nest may pre-cool your house a little bit before a rush hour event starts so that it runs less A/C during the rush hour. Same goes for pre-heating.
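
Here’s a hypothetical sketch of that pre-cooling idea. The temperature offsets, timing window and function names are invented for illustration; they aren’t Nest’s actual algorithm.

```python
from datetime import datetime, timedelta

# Illustrative only: cool a bit extra just before the rush hour, then let the
# house drift slightly warmer while grid demand peaks.
def target_temperature(now, comfort_setpoint_f, event_start, event_end,
                       precool_offset_f=2.0, drift_offset_f=3.0,
                       precool_window=timedelta(hours=1)):
    if event_start - precool_window <= now < event_start:
        return comfort_setpoint_f - precool_offset_f   # pre-cool before the event
    if event_start <= now < event_end:
        return comfort_setpoint_f + drift_offset_f     # ease off during the event
    return comfort_setpoint_f                          # normal comfort setting

event_start = datetime(2021, 7, 15, 16, 0)
event_end = datetime(2021, 7, 15, 19, 0)
for hour in (14, 15, 17, 20):
    now = datetime(2021, 7, 15, hour, 0)
    print(now.strftime("%H:%M"), target_temperature(now, 72.0, event_start, event_end))
```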

Right now, only thermostats participate in rush hours, but in the future your electric vehicle or even your whole home may be able to join in.

Ask a Techspert: How do satellite images work?

When flying, I am firmly a window seat person. (And I can’t wait to start flying again… or at least get out of my apartment.) Not because I’m annoyed by the beverage cart hitting my elbows (though I am), or because I like to blankly stare out at the endless sky (which I do), but because I enjoy looking down at the streets, buildings and skyline of my destination as we land. It’s thrilling to watch cars move, see skyscrapers cast shadows on the street or check out the reflection of the sun in a body of water. For most of human history, it was impossible to even imagine what Earth looked like from above, and only in the past century have we been able to capture it. 

Today, satellite imagery is one of the most popular features on Google Maps. Capturing the world from above is a huge undertaking, matching millions of images to precise locations. But how does satellite imagery actually work? How often are images updated? What are some of the biggest challenges to bringing satellite imagery to more than 1 billion users?

To answer these questions, I reached out to our satellite imagery techspert, Matt Manolides. Matt is Google’s Geo Data Strategist. He’s worked at Google for over 14 years and he gave me an aerial view (pun intended) of how satellite imagery works.

How do we accumulate the images used in Google Maps? Do we actually use satellites? 

The mosaic of satellite and aerial photographs you can see in Google Maps and Google Earth is sourced from many different providers, including state agencies, geological survey organizations and commercial imagery providers. These images are taken on different dates and under different lighting and weather conditions.

In fact, there’s an entire industry around doing aerial surveys. Companies cut holes in the bottom of planes, and cameras take pictures as they fly overhead. In many areas around the world, this is happening constantly. In parts of the world where there isn’t an established aerial survey market, we rely on satellites. With aerial surveys, we get very high-quality images that are sharp enough to create detailed maps. Satellites produce lower-quality imagery, but are still helpful because they provide global coverage. 

Today, you can explore 36 million square miles of high-resolution satellite images in Google Maps and Google Earth, an area that’s home to more than 98 percent of the world’s population.

When do the images meet the map? 

“Google obtains commercially-available satellite imagery from a range of third parties, and our team stitches the images together to create a seamless map,” Matt tells me. This is a process called photogrammetry and, according to Matt, we’re increasingly able to automate our photogrammetry process using machine learning to help accurately place images and improve resolution. 

For aerial data, the images are delivered on hard disks and we upload them into Google Cloud. For satellite imagery, the data is uploaded directly from our providers to Google Cloud. The imagery is delivered in a raw format, meaning it’s not yet positioned on the ground and is separated into red, blue and green photos, as well as panchromatic images, which include finer detail. We then combine the jumble of images so they all line up, have an accurate placement in the real world, and generally look beautiful.
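
For the curious, here’s a simplified sketch of that last step: combining a sharper panchromatic band with lower-resolution color bands, sometimes called pan-sharpening. The arrays are synthetic stand-ins, and a real pipeline would also georeference and color-balance the result.

```python
import numpy as np

# Synthetic stand-ins for the separate bands described above.
rng = np.random.default_rng(0)
rgb_lowres = rng.uniform(0.2, 0.8, size=(64, 64, 3))   # red/green/blue, lower resolution
pan_highres = rng.uniform(0.2, 0.8, size=(128, 128))   # panchromatic band, finer detail

# Upsample the color image to the panchromatic resolution (nearest neighbor).
rgb_up = rgb_lowres.repeat(2, axis=0).repeat(2, axis=1)

# Rescale each pixel's color so its brightness matches the sharper pan band.
intensity = rgb_up.mean(axis=2, keepdims=True)
sharpened = np.clip(rgb_up * (pan_highres[..., None] / (intensity + 1e-6)), 0, 1)

print(sharpened.shape)   # (128, 128, 3): a full-resolution color image
```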

Hard drives with satellite imagery in a room

Rooms full of hard drives, each one jam-packed with aerial images.

How often do you update satellite images? 

“We aim to update satellite imagery of the places that are changing the most,” Matt says. For instance, because big cities are always evolving, we try to update our satellite images every year. For medium-sized cities, we try to update images every two years, and it goes up to every three years for smaller cities. Overall our goal is to keep densely populated places refreshed on a regular basis and to keep up with a changing world, so we will refresh areas more frequently when we think there’s lots of building or road construction going on.

Why do we sometimes see mysterious objects on Maps? What are they? 

Matt explains that sometimes the way the images are collected can create optical illusions. One of the most common instances of this is “sunken ships,” which are actually regular, operating ships that appear to be underwater due to the way the satellite imagery gets layered together. Other times, sunlight can reflect off something shiny, and it will look like a strange white object that some believe is a haunted house or other such spookiness.


A spooky "sunken ship" illusion in London. 

Because the satellite cameras take multiple pictures at the same time, but in different color spectrums, a fast-moving object, like a plane, can look strange, like several identical but differently-colored planes flying over each other. 

"Rainbow" plane image

As for Matt, his favorite part is finding public events that are happening when the images are captured. From hydroplane races to car shows, it’s fascinating to see events in the overhead imagery. 


“When I was a kid growing up in Seattle, I always loved the hydroplane races that would happen each summer. It was a thrill to realize that we captured one from the air back in 2010,” Matt says. “The imagery isn’t visible in Google Maps anymore, but you can still see it using Google Earth Pro’s Historic Imagery feature, which lets you browse our full catalog of imagery.”

Hydroplane races

A hydroplane race on Lake Sammamish, Washington, on June 10, 2010. 

Source: Google LatLong


Ask a Techspert: How do machine learning models explain themselves?


A few years ago, I learned that a translation from Finnish to English using Google Translate led to an unexpected outcome. The sentence “hän on lentäjä” became “he is a pilot” in English, even though “hän” is a gender-neutral word in Finnish. Why did Translate assume it was “he” as the default? 

As I started looking into it, I became aware that just like humans, machines are affected by society’s biases. The machine learning model for Translate relied on training data consisting of hundreds of millions of already-translated examples from the web. “He” was more associated with some professions than “she” was, and vice versa.

Now, Google provides options for both feminine and masculine translations when adapting gender-neutral words in several languages, and there’s a continued effort to roll it out more broadly. But it’s still a good example of how machine learning can reflect the biases we see all around us. Thankfully, there are teams at Google dedicated to finding human-centered solutions to making technology inclusive for everyone. I sat down with Been Kim, a Google researcher working on the People + AI Research (PAIR) team, who devotes her time to making sure artificial intelligence puts people, not machines, at its center, and helping others understand the full spectrum of human interaction with machine intelligence. We talked about how you make machine learning models easy to interpret and understand, and why it’s important for everybody to have a basic idea of how the technology works.

Been Kim

Why is this field of work so important?

Machine learning is such a powerful tool, and because of that, you want to make sure you’re using it responsibly. Let’s take an electric saw as an example. It’s a super powerful tool, but you need to learn how to use it so you don’t cut your fingers. Once you learn, it’s so useful and efficient that you’ll never want to go back to using a hand saw. And the same goes for machine learning. We want to help you understand and use machine learning correctly, fairly and safely.

Since machine learning is used in our everyday lives, it’s also important for everyone to understand how it impacts us. Whether you’re a coffee shop owner using machine learning to optimize your bean purchases based on seasonal trends, or a patient whose doctor uses this technology to help diagnose a disease, it’s often crucial to understand why a machine learning model has produced the outcome it has. Developers and decision-makers also need to be able to explain or present a machine learning model to people. This is what we call “interpretability.”

How do you make machine learning models easier to understand and interpret? 

There are many different ways to make an ML model easier to understand. One way is to make the model reflect how humans think from the start, and have the model "trained" to provide explanations along with predictions, meaning when it gives you an outcome, it also has to explain how it got there. 

Another way is to try and explain a model after it has been trained. This works even when the model was built simply to turn inputs into outputs, optimizing for prediction, without a clear “how” included. You can plug things into it and see what comes out, which gives you some insight into how the model generally makes decisions, but you don’t necessarily know exactly how it interprets specific inputs in specific cases.

One way to try and explain models after they’ve been trained is using low level features or high level concepts. Let me give you an example of what this means. Imagine a system that classifies pictures: you give it a picture and it says, “This is a cat.” With low level features, I ask the machine which pixels mattered for that prediction. It can point to specific pixels, and we might be able to see that the pixels in question show the cat’s whiskers. But we might also see that it’s a scattering of pixels that don’t appear meaningful to the human eye, or that the model has made the wrong interpretation. High level concepts are more similar to the way humans communicate with one another. Instead of asking about pixels, I’d ask, “Did the whiskers matter for the prediction? Or the paws?” Again, the machine can show me what imagery led it to reach this conclusion. Based on the outcome, I can understand the model better. (Together with researchers from Stanford, we’ve published papers that go into further detail on this for those who are interested.)
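
Here’s a toy illustration of the “which pixels mattered” idea: hide one patch of the image at a time and measure how much the model’s score drops. The “model” is a stand-in function and the patch size is arbitrary; this is a generic attribution sketch, not the specific methods Been’s team has published.

```python
import numpy as np

# Occlusion-style attribution: mask small patches and see how much the
# "cat" score drops. The model below is a stand-in, not a real classifier.
def toy_cat_score(image):
    # Pretend the model mostly looks at a "whisker" region near the top-left.
    return image[8:16, 8:16].mean()

def occlusion_map(image, score_fn, patch=8):
    baseline = score_fn(image)
    heat = np.zeros_like(image)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0        # hide this patch
            heat[y:y + patch, x:x + patch] = baseline - score_fn(occluded)
    return heat                                             # big drop = important pixels

image = np.random.default_rng(0).uniform(size=(32, 32))
importance = occlusion_map(image, toy_cat_score)
print(np.unravel_index(importance.argmax(), importance.shape))  # lands in the "whisker" region
```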

Can machines understand some things that we humans can’t? 

Yes! This is an area that I am very interested in myself. I am currently working on a way to showcase how technology can help humans learn new things. Machine learning technology is better at some things than we are; for example it can analyze and interpret data at a much larger scale than humans can. Leveraging this technology, I believe we can enlighten human scientists with knowledge they haven't previously been aware of. 

What do you need to be careful of when you’re making conclusions based on machine learning models?

First of all, we have to be careful that human bias doesn't come into play. Humans carry biases that we simply cannot help and are often unaware of, so if an explanation is up to a human’s interpretation, and often it is, then we have a problem. Humans read what they want to read. Now, this doesn’t mean that you should remove humans from the loop. Humans communicate with machines, and vice versa. Machines need to communicate their outcomes in the form of a clear statement using quantitative data, not one that is vague and completely open to interpretation. If the latter happens, then the machine hasn’t done a very good job and the human isn’t able to provide good feedback to it. It could also be that the outcome simply lacks additional context only a human can provide, or that it would benefit from caveats, so people can make an informed judgment about the results of the model.

What are some of the main challenges of this work? 

Well, one of the challenges for computer scientists in this field is dealing with non-mathematical objectives, which are things you might want to optimize for but don’t have an equation for. You can’t always define what is good for humans using math. That requires us to test and evaluate methods with rigor, and to have a table full of different people discussing the outcome. Another challenge has to do with complexity. Humans are so complex that we have a whole field of work, psychology, devoted to studying them. So in my work, we don't just have computational challenges; we also have complex humans to consider. Value-based questions such as “what defines fairness?” are even harder. They require interdisciplinary collaboration and a diverse group of people in the room to discuss each individual matter.

What's the most exciting part? 

I think interpretability research and methods are making a huge impact. Machine learning technology is a powerful tool that will transform society as we know it, and helping others to use it safely is very rewarding. 

On a more personal note, I come from South Korea and grew up in circumstances where I feel I didn’t have too many opportunities. I was incredibly lucky to get a scholarship to MIT and come to the U.S. When I think about the people who haven't had these opportunities to be educated in science or machine learning, and knowing that this machine learning technology can really help and be useful to them in their everyday lives if they use it safely, I feel really motivated to be working on democratizing this technology. There are many ways to do that, and interpretability is one of the things I can contribute.