Tag Archives: Pixel

A focus on portrait mode: behind the scenes with Pixel 2’s camera features

This week the Pixel 2 and Pixel 2 XL, Google’s newest smartphones, arrive in stores. Both devices come with features like Now Playing, the Google Assistant, and the best-rated smartphone camera ever, according to DxO.


We designed Pixel 2’s camera by asking how we could make it perform like an SLR or other big camera, and we’ve talked before about the tech we use to do that (such as HDR+). Today we’re highlighting a new feature for Pixel 2’s camera: portrait mode.


With portrait mode, you can take pictures of your friends and family (that includes your pets too!) that keep what’s important sharp and in focus, but softly blur out the background. This helps draw your attention to the people in your pictures and keeps you from getting distracted by what’s in the background. This works on both Pixel 2 and Pixel 2 XL, on both the rear- and front-facing cameras.

Pictures without (left) and with (right) portrait mode. Photo by Matt Jones

Technically, blurring out the background like this is an effect called “shallow depth of field.” The big lenses on SLRs can be configured to do this by changing their aperture, but smartphone cameras have fixed, small apertures that produce images where everything is more or less sharp. To create this effect with a smartphone camera, we need to know which parts of the image are far away in order to blur them artificially.


Normally, to determine what’s far away with a smartphone camera you’d need to use two cameras close to each other, then triangulate the depth of various parts of the scene—just like your eyes work. But on Pixel 2 we’re able to combine computational photography and machine learning to do the same with just one camera.
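To make the triangulation idea concrete, here is a minimal sketch, not the Pixel 2’s actual pipeline, of the textbook relationship between disparity and depth for a rectified two-camera pair. The focal length and baseline values are invented placeholders.

```python
# Minimal sketch of two-camera triangulation (not the Pixel 2's pipeline).
# With a pinhole-camera model, a scene point that shifts by d pixels
# ("disparity") between the left and right images sits at depth z = f * b / d,
# where f is the focal length in pixels and b is the camera baseline.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 1480.0,   # assumed value
                         baseline_m: float = 0.012) -> float:  # assumed 12 mm
    """Return the approximate depth in meters of one matched feature."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity: effectively at infinity
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(20.0))  # about 0.9 m for the assumed f and b
```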

How portrait mode works on the Pixel 2


Portrait mode starts with an HDR+ picture where everything is sharp and high-quality.


Next, our technology needs to decide which pixels belong to the foreground of the image (a person, or your dog) and which belong to the background. This is called a “segmentation mask” and it’s where machine learning comes in. We trained a neural network to look at a picture and understand which pixels are people and which aren’t. Because photos of people may also include things like hats, sunglasses, and ice cream cones, we trained our network on close to a million pictures—including pictures with things like those!


Just creating two layers—foreground and background, with a hard edge in between them—isn’t quite enough for all pictures you’d want to take; SLRs produce blur that gets stronger with each fraction of an inch further from the thing that’s in sharp focus. To recreate that look with Pixel 2’s rear camera, we use the new Dual Pixel sensor to look through the left and right sides of the camera’s tiny lens at the same time—effectively giving us two cameras for the price of one. Using these two views we compute a depth map: the distance from the camera to each point in the scene. Then we blur the image based on the combination of the depth map and the segmentation mask.


The result? Portrait mode.

Pictures without (left) and with (right) portrait mode. Photo by Sam Kweskin

Portrait mode works a little differently on the front-facing camera, where we aren’t able to produce a depth map the same way we do with the more powerful rear-facing camera. For selfies, we just use our segmentation mask, which works particularly well for selfies since they have simpler compositions.

Selfie without (left) and with (right) portrait mode. The front-facing camera identifies which background pixels to blur using only machine learning—no depth map. Photo by Marc Levoy

When and how to use portrait mode


Portrait mode on the Pixel 2 is automatic and easy to use—just choose it from your camera menu, then take your picture. You can use it for pictures of your friends, family, and even pets. You can also use it for all kinds of background-blurred “close-up” shots of objects such as flowers, food, or bumblebees (just don’t get stung!).

Close-up picture without (left) and with (right) portrait mode. Photo by Marc Levoy

Here are some tips for how to take great portraits using any camera (and Pixel 2 as well!):


  • Stand close enough to your subjects that their head (or head and shoulders) fill the frame.
  • For a group shot where you want everyone sharp, place them at the same distance from the camera.
  • Put some distance between your subjects and the background.
  • For close-up shots, tap to focus to get more control over what’s sharp and what’s blurred. Also, the camera can’t focus on things closer than several inches, so stay at least that far away.
To learn more about portrait mode on the Pixel 2, watch this video by Nat & Friends, or geek out with our in-depth, technical post over on the Research blog.

How Google Built the Pixel 2 Camera

Portrait mode on the Pixel 2 and Pixel 2 XL smartphones



Portrait mode, a major feature of the new Pixel 2 and Pixel 2 XL smartphones, allows anyone to take professional-looking shallow depth-of-field images. This feature helped both devices earn DxO's highest mobile camera ranking, and works with both the rear-facing and front-facing cameras, even though neither has the dual camera normally required to obtain this effect. Today we discuss the machine learning and computational photography techniques behind this feature.
HDR+ picture without (left) and with (right) portrait mode. Note how portrait mode’s synthetic shallow depth of field helps suppress the cluttered background and focus attention on the main subject. Photo by Matt Jones
What is a shallow depth-of-field image?
A single-lens reflex (SLR) camera with a big lens has a shallow depth of field, meaning that objects at one distance from the camera are sharp, while objects in front of or behind that "in-focus plane" are blurry. Shallow depth of field is a good way to draw the viewer's attention to a subject, or to suppress a cluttered background. Shallow depth of field is what gives portraits captured using SLRs their characteristic artistic look.

The amount of blur in a shallow depth-of-field image depends on depth; the farther objects are from the in-focus plane, the blurrier they appear. The amount of blur also depends on the size of the lens opening. A 50mm lens with an f/2.0 aperture has an opening 50mm/2 = 25mm in diameter. With such a lens, objects that are even a few inches away from the in-focus plane will appear soft.
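As a rough illustration of how quickly blur grows away from the in-focus plane, here is a small thin-lens calculation for that 50mm f/2.0 example. The 2 m focus distance and the test distances are assumptions chosen only to show the trend.

```python
# Back-of-the-envelope sketch of blur-circle size for a 50 mm f/2.0 lens
# focused at an assumed 2 m, using the thin-lens model.

def blur_circle_mm(subject_mm: float, focus_mm: float = 2000.0,
                   focal_mm: float = 50.0, f_number: float = 2.0) -> float:
    """Diameter of the blur circle on the sensor, in millimeters."""
    aperture_mm = focal_mm / f_number                 # 50 / 2 = 25 mm opening
    magnification = focal_mm / (focus_mm - focal_mm)  # thin-lens magnification
    return aperture_mm * magnification * abs(subject_mm - focus_mm) / subject_mm

for distance_mm in (2000.0, 2100.0, 2500.0, 4000.0):
    print(f"{distance_mm / 1000:.1f} m -> {blur_circle_mm(distance_mm):.3f} mm")
# Blur grows rapidly as the subject moves away from the 2 m in-focus plane.
```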

One other parameter worth knowing about depth of field is the shape taken on by blurred points of light. This shape is called bokeh, and it depends on the physical structure of the lens's aperture. Is the bokeh circular? Or is it a hexagon, due to the six metal leaves that form the aperture inside some lenses? Photographers debate tirelessly about what constitutes good or bad bokeh.

Synthetic shallow depth of field images
Unlike SLR cameras, mobile phone cameras have a small, fixed-size aperture, which produces pictures with everything more or less in focus. But if we knew the distance from the camera to points in the scene, we could replace each pixel in the picture with a blur. This blur would be an average of the pixel's color with its neighbors, where the amount of blur depends on distance of that scene point from the in-focus plane. We could also control the shape of this blur, meaning the bokeh.
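Here is a deliberately naive sketch of that idea, assuming a per-pixel depth map is already available: each pixel is averaged with a neighborhood whose size grows with its distance from the in-focus plane. The constants are arbitrary, and a real renderer (like the one described later in this post) uses disk-shaped kernels and is far more efficient.

```python
# Naive depth-dependent synthetic blur: average each pixel with a neighborhood
# whose radius grows with distance from the in-focus plane. Illustrative only.
import numpy as np

def synthetic_defocus(image, depth, focus_depth, strength=6.0, max_radius=8):
    """image: HxWx3 float array; depth: HxW depths in meters."""
    h, w, _ = image.shape
    out = np.empty_like(image)
    radius = np.clip(np.abs(depth - focus_depth) * strength, 0, max_radius)
    for y in range(h):
        for x in range(w):
            r = int(round(radius[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))  # box average
    return out
```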

How can a cell phone estimate the distance to every point in the scene? The most common method is to place two cameras close to one another – so-called dual-camera phones. Then, for each patch in the left camera's image, we look for a matching patch in the right camera's image. The shift between the positions where this match is found in the two images gives the depth of that scene feature through a process of triangulation. This search for matching features is called a stereo algorithm, and it works pretty much the same way our two eyes do.
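A toy version of such a stereo algorithm might look like the following block-matching sketch: it slides a patch from the left image along the corresponding row of the right image and keeps the shift with the lowest sum-of-absolute-differences cost. The patch size and search range are arbitrary assumptions, and real implementations are far more sophisticated.

```python
# Toy block-matching stereo: for each left-image patch, find the horizontal
# shift (disparity) of the best-matching right-image patch. Larger disparity
# means the scene point is closer to the camera.
import numpy as np

def disparity_map(left, right, patch=7, max_disp=32):
    """left, right: HxW grayscale float arrays from rectified cameras."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(ref - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```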

A simpler version of this idea, used by some single-camera smartphone apps, involves separating the image into two layers – pixels that are part of the foreground (typically a person) and pixels that are part of the background. This separation, sometimes called semantic segmentation, lets you blur the background, but it has no notion of depth, so it can't tell you how much to blur it. Also, if there is an object in front of the person, i.e. very close to the camera, it won't be blurred out, even though a real camera would do this.

Whether done using stereo or segmentation, artificially blurring pixels that belong to the background is called synthetic shallow depth of field or synthetic background defocusing. Synthetic defocus is not the same as the optical blur you would get from an SLR, but it looks similar to most people.

How portrait mode works on the Pixel 2
The Google Pixel 2 offers portrait mode on both its rear-facing and front-facing cameras. For the front-facing (selfie) camera, it uses only segmentation. For the rear-facing camera it uses both stereo and segmentation. But wait, the Pixel 2 has only one rear-facing camera; how can it see in stereo? Let's go through the process step by step.
Step 1: Generate an HDR+ image.
Portrait mode starts with a picture where everything is sharp. For this we use HDR+, Google's computational photography technique for improving the quality of captured photographs, which runs on all recent Nexus/Pixel phones. It operates by capturing a burst of images that are underexposed to avoid blowing out highlights, aligning and averaging these frames to reduce noise in the shadows, and boosting these shadows in a way that preserves local contrast while judiciously reducing global contrast. The result is a picture with high dynamic range, low noise, and sharp details, even in dim lighting.
The idea of aligning and averaging frames to reduce noise has been known in astrophotography for decades. Google's implementation is a bit different, because we do it on bursts captured by a handheld camera, and we need to be careful not to produce ghosts (double images) if the photographer is not steady or if objects in the scene move. Below is an example of a scene with high dynamic range, captured using HDR+.
Photographs from the Pixel 2 without (left) and with (right) HDR+ enabled.
Notice how HDR+ avoids blowing out the sky and courtyard while retaining detail in the dark arcade ceiling.
Photo by Marc Levoy
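As a greatly simplified illustration of the align-and-average idea behind HDR+, here is a sketch that estimates one global shift per frame by cross-correlation and then averages. Google's implementation is not this: it aligns image tiles robustly and merges them in a way that rejects ghosts, but the noise-reduction principle is the same.

```python
# Simplified burst align-and-average: estimate a single global (dy, dx) shift
# per frame via FFT cross-correlation, undo it, and average the frames.
# Averaging N similar frames reduces noise by roughly sqrt(N).
import numpy as np

def align_and_average(burst):
    """burst: list of HxW grayscale float frames; the first is the reference."""
    ref = burst[0].astype(np.float64)
    f_ref = np.fft.rfft2(ref)
    accum, count = ref.copy(), 1
    for frame in burst[1:]:
        frame = frame.astype(np.float64)
        # Peak of the circular cross-correlation gives the shift that best
        # aligns this frame to the reference.
        corr = np.fft.irfft2(f_ref * np.conj(np.fft.rfft2(frame)), s=ref.shape)
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        accum += np.roll(frame, shift=(dy, dx), axis=(0, 1))
        count += 1
    return accum / count
```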
Step 2:  Machine learning-based foreground-background segmentation.
Starting from an HDR+ picture, we next decide which pixels belong to the foreground (typically a person) and which belong to the background. This is a tricky problem, because unlike chroma keying (a.k.a. green-screening) in the movie industry, we can't assume that the background is green (or blue, or any other color). Instead, we apply machine learning.
In particular, we have trained a neural network, written in TensorFlow, that looks at the picture, and produces an estimate of which pixels are people and which aren't. The specific network we use is a convolutional neural network (CNN) with skip connections. "Convolutional" means that the learned components of the network are in the form of filters (a weighted sum of the neighbors around each pixel), so you can think of the network as just filtering the image, then filtering the filtered image, etc. The "skip connections" allow information to easily flow from the early stages in the network where it reasons about low-level features (color and edges) up to later stages of the network where it reasons about high-level features (faces and body parts). Combining stages like this is important when you need to not just determine if a photo has a person in it, but to identify exactly which pixels belong to that person. Our CNN was trained on almost a million pictures of people (and their hats, sunglasses, and ice cream cones). Inference to produce the mask runs on the phone using TensorFlow Mobile. Here’s an example:
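The production network isn't public, but a minimal, hypothetical Keras sketch of a convolutional network with skip connections for per-pixel person segmentation could look like the following. The layer sizes are arbitrary; the point is only to show how early-stage (low-level) features are concatenated back into later stages.

```python
# Hypothetical U-Net-style sketch of "a CNN with skip connections" that maps
# an RGB image to a per-pixel probability of "person". Not the real network.
import tensorflow as tf
from tensorflow.keras import layers

def tiny_person_segmenter(input_shape=(256, 256, 3)):
    inp = layers.Input(shape=input_shape)
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(e1)                        # 128x128
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(e2)                        # 64x64
    b  = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), e2])   # skip connection
    d2 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(d2), e1])  # skip connection
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    mask = layers.Conv2D(1, 1, activation="sigmoid")(d1)  # per-pixel person prob.
    return tf.keras.Model(inp, mask)
```

Training such a model would pair photos with hand-labeled person masks and minimize a per-pixel cross-entropy loss; at capture time only the forward pass (inference) runs on the phone.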
At left is a picture produced by our HDR+ pipeline, and at right is the smoothed output of our neural network. White parts of this mask are thought by the network to be part of the foreground, and black parts are thought to be background.
Photo by Sam Kweskin
How good is this mask? Not too bad; our neural network recognizes the woman's hair and her teacup as being part of the foreground, so it can keep them sharp. If we blur the photograph based on this mask, we would produce this image:
Synthetic shallow depth-of-field image generated using a mask.
There are several things to notice about this result. First, the amount of blur is uniform, even though the background contains objects at varying depths. Second, an SLR would also blur out the pastry on her plate (and the plate itself), since it's close to the camera. Our neural network knows the pastry isn't part of her (note that it's black in the mask image), but since it sits below her in the frame, it's not likely to be part of the background. We explicitly detect this situation and keep these pixels relatively sharp. Unfortunately, this solution isn't always correct, and in this situation we should have blurred these pixels more.
Step 3: From dual pixels to a depth map
To improve on this result, it helps to know the depth at each point in the scene. To compute depth we can use a stereo algorithm. The Pixel 2 doesn't have dual cameras, but it does have a technology called Phase-Detect Auto-Focus (PDAF) pixels, sometimes called dual-pixel autofocus (DPAF). That's a mouthful, but the idea is pretty simple. If one imagines splitting the (tiny) lens of the phone's rear-facing camera into two halves, the view of the world as seen through the left side of the lens and the view through the right side are slightly different. These two viewpoints are less than 1mm apart (roughly the diameter of the lens), but they're different enough to compute stereo and produce a depth map. The way the optics of the camera works, this is equivalent to splitting every pixel on the image sensor chip into two smaller side-by-side pixels and reading them from the chip separately, as shown here:
On the rear-facing camera of the Pixel 2, the right side of every pixel looks at the world through the left side of the lens, and the left side of every pixel looks at the world through the right side of the lens.
Figure by Markus Kohlpaintner, reproduced with permission.
As the diagram shows, PDAF pixels give you views through the left and right sides of the lens in a single snapshot. Or, if you're holding your phone in portrait orientation, then it's the upper and lower halves of the lens. Here's what the upper image and lower image look like for our example scene (below). These images are monochrome because we only use the green pixels of our Bayer color filter sensor in our stereo algorithm, not the red or blue pixels. Having trouble telling the two images apart? Maybe the animated gif at right (below) will help. Look closely; the differences are very small indeed!
Views of our test scene through the upper half and lower half of the lens of a Pixel 2. In the animated gif at right, notice that she holds nearly still, because the camera is focused on her, while the background moves up and down. Objects in front of her, if we could see any, would move down when the background moves up (and vice versa).
PDAF technology can be found in many cameras, including SLRs to help them focus faster when recording video. In our application, this technology is being used instead to compute a depth map.  Specifically, we use our left-side and right-side images (or top and bottom) as input to a stereo algorithm similar to that used in Google's Jump system panorama stitcher (called the Jump Assembler). This algorithm first performs subpixel-accurate tile-based alignment to produce a low-resolution depth map, then interpolates it to high resolution using a bilateral solver. This is similar to the technology formerly used in Google's Lens Blur feature.    
One more detail: because the left-side and right-side views captured by the Pixel 2 camera are so close together, the depth information we get is inaccurate, especially in low light, due to the high noise in the images. To reduce this noise and improve depth accuracy we capture a burst of left-side and right-side images, then align and average them before applying our stereo algorithm. Of course we need to be careful during this step to avoid wrong matches, just as in HDR+, or we'll get ghosts in our depth map (but that's the subject of another blog post). On the left below is a depth map generated from the example shown above using our stereo algorithm.
Left: depth map computed using stereo from the foregoing upper-half-of-lens and lower-half-of-lens images. Lighter means closer to the camera. 
Right: visualization of how much blur we apply to each pixel in the original. Black means don't blur at all, red denotes scene features behind the in-focus plane (which is her face), the brighter the red the more we blur, and blue denotes features in front of the in-focus plane (the pastry).
Step 4: Putting it all together to render the final image
The last step is to combine the segmentation mask we computed in step 2 with the depth map we computed in step 3 to decide how much to blur each pixel in the HDR+ picture from step 1.  The way we combine the depth and mask is a bit of secret sauce, but the rough idea is that we want scene features we think belong to a person (white parts of the mask) to stay sharp, and features we think belong to the background (black parts of the mask) to be blurred in proportion to how far they are from the in-focus plane, where these distances are taken from the depth map. The red-colored image above is a visualization of how much to blur each pixel.
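The exact recipe is, as noted above, a bit of secret sauce, but a plausible stand-in for this combination step might look like the sketch below; the gain and clamp values are invented for illustration.

```python
# Assumed heuristic (not the actual "secret sauce"): turn a segmentation mask
# and a depth map into a per-pixel blur radius.
import numpy as np

def blur_radius_map(person_mask, depth, focus_depth, gain=4.0, max_radius=10.0):
    """person_mask: HxW in [0, 1]; depth: HxW in meters; returns radii in pixels."""
    defocus = np.abs(depth - focus_depth) * gain   # more blur farther from focus
    radius = np.clip(defocus, 0.0, max_radius)
    return radius * (1.0 - person_mask)            # keep masked (person) pixels sharp
```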
Actually applying the blur is conceptually the simplest part; each pixel is replaced with a translucent disk of the same color but varying size. If we composite all these disks in depth order, it's like the averaging we described earlier, and we get a nice approximation to real optical blur. One of the benefits of defocusing synthetically is that because we're using software, we can get a perfect disk-shaped bokeh without lugging around several pounds of glass camera lenses. Interestingly, in software there's no particular reason we need to stick to realism; we could make the bokeh shape anything we want! For our example scene, here is the final portrait mode output. If you compare this result to the rightmost result in step 2, you'll see that the pastry is now slightly blurred, much as you would expect from an SLR.
Final synthetic shallow depth-of-field image, generated by combining our HDR+ picture, segmentation mask, and depth map.
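Here is a simplified splat-and-normalize sketch of the disk rendering described above. Unlike the real renderer it ignores depth ordering when compositing, but it shows how each pixel's color is spread over a disk whose radius comes from a blur-radius map such as the one produced by the previous sketch.

```python
# Splat each source pixel as a translucent, energy-preserving disk of its own
# color, then renormalize by the accumulated weights. Disk-shaped bokeh by
# construction; occlusion ordering is ignored for simplicity.
import numpy as np

def render_bokeh(image, radius):
    """image: HxWx3 float array; radius: HxW blur radii in pixels."""
    h, w, _ = image.shape
    accum = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 1))
    for y in range(h):
        for x in range(w):
            r = int(round(radius[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            disk = (((yy - y) ** 2 + (xx - x) ** 2) <= r * r)[..., None]
            alpha = disk / max(disk.sum(), 1)      # spread color evenly over the disk
            accum[y0:y1, x0:x1] += alpha * image[y, x]
            weight[y0:y1, x0:x1] += alpha
    return accum / np.maximum(weight, 1e-6)
```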
Ways to use portrait mode
Portrait mode on the Pixel 2 runs in 4 seconds, is fully automatic (as opposed to Lens Blur mode on previous devices, which required a special up-down motion of the phone), and is robust enough to be used by non-experts. Here is an album of examples, including some hard cases, like people with frizzy hair, people holding flower bouquets, etc. Below is a list of a few ways you can use portrait mode on the new Pixel 2.
Taking macro shots
If you're in portrait mode and you point the camera at a small object instead of a person (like a flower or food), then our neural network can't find a face and won't produce a useful segmentation mask. In other words, step 2 of our pipeline doesn't apply. Fortunately, we still have a depth map from PDAF data (step 3), so we can compute a shallow depth-of-field image based on the depth map alone. Because the baseline between the left and right sides of the lens is so small, this works well only for objects that are roughly less than a meter away. But for such scenes it produces nice pictures. You can think of this as a synthetic macro mode. Below are example straight and portrait mode shots of a macro-sized object, and here's an album with more macro shots, including more hard cases, like a water fountain with a thin wire fence behind it. Just be careful not to get too close; the Pixel 2 can’t focus sharply on objects closer than about 10cm from the camera.
Macro picture without (left) and with (right) portrait mode. There’s no person here, so background pixels are identified solely using the depth map. Photo by Marc Levoy
The selfie camera
The Pixel 2 offers portrait mode on the front-facing (selfie) as well as rear-facing camera. This camera is 8Mpix instead of 12Mpix, and it doesn't have PDAF pixels, meaning that its pixels aren't split into left and right halves. In this case, step 3 of our pipeline doesn't apply, but if we can find a face, then we can still use our neural network (step 2) to produce a segmentation mask. This allows us to still generate a shallow depth-of-field image, but because we don't know how far away objects are, we can't vary the amount of blur with depth. Nevertheless, the effect looks pretty good, especially for selfies shot against a cluttered background, where blurring helps suppress the clutter. Here are example straight and portrait mode selfies taken with the Pixel 2's selfie camera:
Selfie without (left) and with (right) portrait mode. The front-facing camera lacks PDAF pixels, so background pixels are identified using only machine learning. Photo by Marc Levoy
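For this mask-only case, a corresponding sketch is even simpler: blur the whole image by one fixed amount, then composite the sharp original back in wherever the segmentation mask says "person." The blur radius here is an arbitrary assumption.

```python
# Mask-only defocus sketch for the selfie camera: uniform background blur,
# sharp foreground wherever the person mask is high. Illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter

def selfie_defocus(image, person_mask, radius=8):
    """image: HxWx3 float array; person_mask: HxW in [0, 1]."""
    size = 2 * radius + 1
    blurred = np.stack(
        [uniform_filter(image[..., c], size) for c in range(3)], axis=-1)
    m = person_mask[..., None]
    return m * image + (1.0 - m) * blurred  # sharp person over blurred background
```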

How To Get the Most Out of Portrait Mode
The portraits produced by the Pixel 2 depend on the underlying HDR+ image, segmentation mask, and depth map; problems in these inputs can produce artifacts in the result. For example, if a feature is overexposed in the HDR+ image (blown out to white), then it's unlikely the left-half and right-half images will have useful information in them, leading to errors in the depth map. What can go wrong with segmentation? It's a neural network that has been trained on nearly a million images, but we bet it has never seen a photograph of a person kissing a crocodile, so it will probably omit the crocodile from the mask, causing it to be blurred out. How about the depth map? Our stereo algorithm may fail on textureless regions (like blank walls), because there are no features to latch onto, and on repeating textures (like plaid shirts) or horizontal and vertical lines, because it might match the wrong part of the image and triangulate the wrong depth.

While any complex technology includes tradeoffs, here are some tips for producing great portrait mode shots:
  • Stand close enough to your subjects that their head (or head and shoulders) fill the frame.
  • For a group shot where you want everyone sharp, place them at the same distance from the camera.
  • For a more pleasing blur, put some distance between your subjects and the background.
  • Remove dark sunglasses, floppy hats, giant scarves, and crocodiles.
  • For macro shots, tap to focus to ensure that the object you care about stays sharp.
By the way, you'll notice that in portrait mode the camera zooms a bit (1.5x for the rear-facing camera, and 1.2x for the selfie camera). This is deliberate, because narrower fields of view encourage you to stand back further, which in turn reduces perspective distortion, leading to better portraits.

Is it time to put aside your SLR (forever)?
When we started working at Google 5 years ago, the number of pixels in a cell phone picture hadn't caught up to SLRs, but it was high enough for most people's needs. Even on a big home computer screen, you couldn't see the individual pixels in pictures you took using your cell phone. Nevertheless, mobile phone cameras weren't as powerful as SLRs, in four ways:
  1. Dynamic range in bright scenes (blown-out skies)
  2. Signal-to-noise ratio (SNR) in low light (noisy pictures, loss of detail)
  3. Zoom (for those wildlife shots)
  4. Shallow depth of field
Google's HDR+ and similar technologies by our competitors have made great strides on #1 and #2. In fact, in challenging lighting we'll often put away our SLRs, because we can get a better picture from a phone without painful bracketing and post-processing. For zoom, the modest telephoto lenses being added to some smartphones (typically 2x) help, but for that grizzly bear in the streambed there's no substitute for a 400mm lens (much safer too!). For shallow depth-of-field, synthetic defocusing is not the same as real optical defocusing, but the visual effect is similar enough to achieve the same goal, of directing your attention towards the main subject.

Will SLRs (or their mirrorless interchangeable lens (MIL) cousins) with big sensors and big lenses disappear? Doubtful, but they will occupy a smaller niche in the market. Both of us travel with a big camera and a Pixel 2. At the beginning of our trips we dutifully take out our SLRs, but by the end, they mostly stay in our luggage. Welcome to the new world of software-defined cameras and computational photography!
For more about portrait mode on the Pixel 2, check out this video by Nat & Friends.
Here is another album of pictures (portrait and not) and videos taken by the Pixel 2.

Pixel Visual Core: image processing and machine learning on Pixel 2

The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML), so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.

HDR+ produces beautiful images, and we’ve evolved the algorithm that powers it over the past year to use the Pixel 2’s application processor efficiently, and enable you to take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel, we’ve also been working on creating hardware capabilities that enable significantly greater computing power—beyond existing hardware—to bring HDR+ to third-party photography applications.

To expand the reach of HDR+, handle the most challenging imaging and ML applications, and deliver lower-latency and even more power-efficient HDR+ processing, we’ve created Pixel Visual Core.

Pixel Visual Core is Google’s first custom-designed co-processor for consumer products. It’s built into every Pixel 2, and in the coming months, we’ll turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.

Magnified image of Pixel Visual Core

Let's delve into the details for you technical folks out there: The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of more than 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster and at less than one-tenth the energy of running on the application processor (AP). A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To avoid this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.


In the coming weeks, we’ll enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we’ll enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait to see the beautiful HDR+ photography that you already get through your Pixel 2 camera become available in your favorite photography apps.

HDR+ will be the first application to run on Pixel Visual Core. Notably, because Pixel Visual Core is programmable, we’re already preparing the next set of applications. The great thing is that as we port more machine learning and imaging applications to use Pixel Visual Core, Pixel 2 will continuously improve. So keep an eye out!

#teampixel heads to the Big Easy with Pixel 2

With Google Pixel 2 hitting the streets soon, we’re excited to see what new photography fills the #teampixel feeds in the coming weeks. In the meantime, we visited New Orleans, LA, with Timothy McGurr, who captured some of the city’s unique quirks and characters for a recent shoot, Pixel 2 in tow.


Check out his lovely photos—all of which are in their natural state with absolutely no retouching, no attachments and no other equipment. Because let’s face it...New Orleans is best experienced unfiltered.   

Cheers to a year of #teampixel

We’ve come a long way, #teampixel! From being featured in an immersive digital installation to getting on the big stage, Pixel photographers have been crushing it with amazing photography that let us into their everyday lives. This week, we’re celebrating #teampixel’s one-year milestone—without this community, the world would be a less colorful, creative place.

So on behalf of all of us at Pixel, thank you for your contributions. We have some surprises planned for the community in the upcoming months, so stay tuned. And if you haven’t shared anything yet, now’s the time—there’s always room for more to join #teampixel.

In the meantime, check out just a few of our favorite #teampixel photos from the past year, spanning subjects like nature, travel and animals, including the above image from @ritchiehoang. Here’s looking at you, #teampixel…

Look, Ma, no SIM card!

While phones have come a long way over the years, there’s one thing that hasn't changed: you still need to insert a SIM card to get mobile service. But on the new Pixel 2, Project Fi is making it easier than ever to get connected. It’s the first phone built with eSIM, an embedded SIM that lets you instantly connect to a carrier network with the tap of a button. This means you no longer need to go to a store to get a SIM card for wireless service, wait a few days for your card to arrive in the mail, or fumble around with a bent paper clip to coax your SIM card into a tiny slot. Getting wireless service with eSIM is as quick as connecting your phone to Wi-Fi.  


You’ll see the option to use eSIM to connect to the Project Fi network on all Pixel 2s purchased through the Google Store or Project Fi. If you’re already a Project Fi subscriber, simply power up your Pixel 2 to begin setup. When you’re prompted to insert a SIM card, just tap the button for SIM-free setup, and we’ll take care of the heavy lifting.


For now, we’re piloting eSIM on the newest Pixel devices with Project Fi. We look forward to sharing what we learn and working together with industry partners to encourage more widespread adoption.


While we can talk about eSIM all we want, nothing beats trying it for the first time. If you’d like to give it a go, head over to the Project Fi website to sign up and purchase the Pixel 2.

The best hardware, software and AI—together

Today, we introduced our second generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all new Pixelbook, Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software and advanced artificial intelligence (AI).


Why Google?

These days many devices—especially smartphones—look and act the same. That means in order to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people would use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But, for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out

We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.


You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. And Max is our biggest and best-sounding Google Home device, powered by the Assistant. And with AI-based Smart Sound, Max has the ability to adapt your audio experience to you—your environment, context, and preferences.
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
  • Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.

Assistant, everywhere

Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, or use your Pixelbook’s Assistant key or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks to get you through your day. No other assistive technology comes close, and it continues to get better every day.

New hardware products

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products, which truly brings together the best of Google software and thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

