Tag Archives: Pixel

Soli Radar-Based Perception and Interaction in Pixel 4



The Pixel 4 and Pixel 4 XL are optimized for ease of use, and a key feature helping to realize this goal is Motion Sense, which enables users to interact with their Pixel in numerous ways without touching the device. For example, with Motion Sense you can use specific gestures to change music tracks or instantly silence an incoming call. Motion Sense additionally detects when you're near your phone and when you reach for it, allowing your Pixel to be more helpful by anticipating your actions, such as by priming the camera to provide a seamless face unlock experience, politely lowering the volume of a ringing alarm as you reach to dismiss it, or turning off the display to save power when you’re no longer near the device.

The technology behind Motion Sense is Soli, the first integrated short-range radar sensor in a consumer smartphone, which facilitates close-proximity interaction with the phone without contact. Below, we discuss Soli’s core radar sensing principles, design of the signal processing and machine learning (ML) algorithms used to recognize human activity from radar data, and how we resolved some of the integration challenges to prepare Soli for use in consumer devices.

Designing the Soli Radar System for Motion Sense
The basic function of radar is to detect and measure properties of remote objects based on their interactions with radio waves. A classic radar system includes a transmitter that emits radio waves, which are then scattered, or redirected, by objects within their paths, with some portion of energy reflected back and intercepted by the radar receiver. Based on the received waveforms, the radar system can detect the presence of objects as well as estimate certain properties of these objects, such as distance and size.

Radar has been under active development as a detection and ranging technology for almost a century. Traditional radar approaches are designed for detecting large, rigid, distant objects, such as planes and cars; therefore, they lack the sensitivity and resolution for sensing complex motions within the requirements of a consumer handheld device. Thus, to enable Motion Sense, the Soli team developed a new, small-scale radar system, novel sensing paradigms, and algorithms from the ground up specifically for fine-grained perception of human interactions.

Classic radar designs rely on fine spatial resolution relative to target size in order to resolve different objects and distinguish their spatial structures. Such spatial resolution typically requires broad transmission bandwidth, narrow antenna beamwidth, and large antenna arrays. Soli, on the other hand, employs a fundamentally different sensing paradigm based on motion, rather than spatial structure. Because of this novel paradigm, we were able to fit Soli’s entire antenna array for Pixel 4 on a 5 mm x 6.5 mm x 0.873 mm chip package, allowing the radar to be integrated in the top of the phone. Remarkably, we developed algorithms that specifically do not require forming a well-defined image of a target’s spatial structure, in contrast to an optical imaging sensor, for example. Therefore, no distinguishable images of a person’s body or face are generated or used for Motion Sense presence or gesture detection.
Soli’s location in Pixel 4.
Soli relies on processing temporal changes in the received signal in order to detect and resolve subtle motions. The Soli radar transmits a 60 GHz frequency-modulated signal and receives a superposition of reflections off of nearby objects or people. A sub-millimeter-scale displacement in a target’s position from one transmission to the next induces a distinguishable timing shift in the received signal. Over a window of multiple transmissions, these shifts manifest as a Doppler frequency that is proportional to the object’s velocity. By resolving different Doppler frequencies, the Soli signal processing pipeline can distinguish objects moving with different motion patterns.
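To make the paragraph above concrete, here is a minimal sketch of standard FMCW range-Doppler processing, of the kind that produces maps like the ones shown below: an FFT across the samples of each chirp resolves range, and a second FFT across chirps resolves Doppler, i.e. velocity. The array sizes and windowing choices are illustrative assumptions, not Soli’s actual pipeline.

```python
import numpy as np

def range_doppler_map(iq_frames):
    """Compute a range-Doppler map from one burst of FMCW chirps.

    iq_frames: complex array of shape (num_chirps, samples_per_chirp);
    rows are chirps (slow time), columns are ADC samples (fast time).
    """
    # Windows suppress FFT sidelobes (a common choice, not Soli-specific).
    win_fast = np.hanning(iq_frames.shape[1])
    win_slow = np.hanning(iq_frames.shape[0])[:, None]

    # 1) FFT over fast time: beat frequency maps to range bins.
    range_bins = np.fft.fft(iq_frames * win_fast, axis=1)

    # 2) FFT over slow time: chirp-to-chirp phase shifts map to Doppler (velocity) bins.
    doppler = np.fft.fftshift(np.fft.fft(range_bins * win_slow, axis=0), axes=0)

    # Magnitude in dB; rows index velocity, columns index range.
    return 20 * np.log10(np.abs(doppler) + 1e-12)

# Example with synthetic data: a burst of 16 chirps, 64 samples each.
burst = np.random.randn(16, 64) + 1j * np.random.randn(16, 64)
print(range_doppler_map(burst).shape)  # (16, 64): velocity x range
```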

The animations below demonstrate how different actions exhibit distinctive motion features in the processed Soli signal. The vertical axis of each image represents range, or radial distance, from the sensor, increasing from top to bottom. The horizontal axis represents velocity toward or away from the sensor, with zero at the center, negative velocities corresponding to approaching targets on the left, and positive velocities corresponding to receding targets on the right. Energy received by the radar is mapped into these range-velocity dimensions and represented by the intensity of each pixel. Thus, strongly reflective targets tend to be brighter relative to the surrounding noise floor compared to weakly reflective targets. The distribution and trajectory of energy within these range-velocity mappings show clear differences for a person walking, reaching, and swiping over the device.

In the left image, we see reflections from multiple body parts appearing on the negative side of the velocity axis as the person approaches the device, then converging at zero velocity at the top of the image as the person stops close to the device. In the middle image depicting a reach, a hand starts from a stationary position 20 cm from the sensor, then accelerates with negative velocity toward the device, and finally decelerates to a stop as it reaches the device. The reflection corresponding to the hand moves from the middle to the top of the image, corresponding to the hand’s decreasing range from the sensor over the course of the gesture. Finally, the third image shows a hand swiping over the device, moving with negative velocity toward the sensor on the left half of the velocity axis, passing directly over the sensor where its radial velocity is zero, and then away from the sensor on the right half of the velocity axis, before reaching a stop on the opposite side of the device.

Left: Presence - Person walking towards the device. Middle: Reach - Person reaching towards the device. Right: Swipe - Person swiping over the device.
The 3D position of each resolvable reflection can also be estimated by processing the signal received at each of Soli’s three receivers; this positional information can be used in addition to range and velocity for target differentiation.
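As a rough illustration of how multiple receivers yield position, the sketch below converts the phase difference between a pair of receivers into an arrival angle, then combines two such angles (from two perpendicular receiver baselines) with range into a 3D point. The far-field model, half-wavelength spacing, and coordinate convention are simplifying assumptions for illustration, not Soli’s actual geometry or algorithm.

```python
import numpy as np

C = 3e8                   # speed of light, m/s
WAVELENGTH = C / 60e9     # ~5 mm at Soli's 60 GHz carrier

def angle_from_phase(delta_phase, spacing=WAVELENGTH / 2):
    """Arrival angle (radians) from the phase difference between two
    receivers separated by `spacing` metres, assuming a far-field plane wave."""
    return np.arcsin(np.clip(delta_phase * WAVELENGTH / (2 * np.pi * spacing), -1.0, 1.0))

def position_3d(range_m, phase_az, phase_el):
    """Combine range with azimuth/elevation phase differences measured on
    two perpendicular receiver baselines into an (x, y, z) estimate."""
    az, el = angle_from_phase(phase_az), angle_from_phase(phase_el)
    x = range_m * np.cos(el) * np.sin(az)
    y = range_m * np.cos(el) * np.cos(az)
    z = range_m * np.sin(el)
    return x, y, z

# A reflection 20 cm away with small phase offsets on each baseline.
print(position_3d(0.20, phase_az=0.5, phase_el=-0.3))
```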

The signal processing pipeline we designed for Soli includes a combination of custom filters and coherent integration steps that boost signal-to-noise ratio, attenuate unwanted interference, and differentiate reflections off a person from noise and clutter. These signal processing features enable Soli to operate at low power within the constraints of a consumer smartphone.
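One of those steps, coherent integration, is a standard radar technique: echoes from many chirps are summed in phase so the signal grows faster than the noise, buying roughly 10*log10(N) dB of SNR over N chirps. The toy simulation below illustrates that gain; it is not Soli’s implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A weak echo with fixed phase observed over N chirps, buried in unit-power complex noise.
N, trials = 64, 10_000
signal = 0.2 * np.exp(1j * 0.7)
noise = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) / np.sqrt(2)
observations = signal + noise

def snr_db(samples):
    # Ratio of signal power to residual noise power, estimated across trials.
    return 10 * np.log10(np.abs(signal) ** 2 / np.var(samples - signal))

print(f"per-chirp SNR:          {snr_db(observations[:, 0]):6.1f} dB")
print(f"after coherent average: {snr_db(observations.mean(axis=1)):6.1f} dB")
print(f"theoretical gain:       {10 * np.log10(N):6.1f} dB")
```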

Designing Machine Learning Algorithms for Radar
After using Soli’s signal processing pipeline to filter and boost the original radar signal, the resulting signal transformations are fed to Soli’s ML models for gesture classification. These models have been trained to accurately detect and recognize the Motion Sense gestures with low latency.

There are two major research challenges to robustly classifying in-air gestures that are common to any motion sensing technology. The first is that every user is unique and performs even simple motions, such as a swipe, in a myriad of ways. The second is that, throughout the day, there may be numerous extraneous motions within range of the sensor that appear similar to target gestures. Furthermore, when the phone moves, the whole world looks like it’s moving from the point of view of the motion sensor in the phone.

Solving these challenges required designing custom ML algorithms optimized for low-latency detection of in-air gestures from radar signals. Soli’s ML models consist of neural networks trained using millions of gestures recorded from thousands of Google volunteers. These radar recordings were mixed with hundreds of hours of background radar recordings from other Google volunteers containing generic motions made near the device. Soli’s ML models were trained using TensorFlow and optimized to run directly on Pixel’s low-power digital signal processor (DSP). This allows us to run the models at low power, even when the main application processor is powered down.
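The post does not describe the model architecture, so the following is only a hypothetical sketch: a small TensorFlow/Keras classifier over short windows of processed radar features, with illustrative input sizes and class names. The TensorFlow Lite conversion at the end is just one generic route to embedded targets, not necessarily how Soli’s models reach the Pixel DSP.

```python
import tensorflow as tf

# Hypothetical input: 30 time steps of 128 flattened range-Doppler features.
# Sizes and class names are assumptions for illustration, not Soli's real ones.
NUM_FRAMES, NUM_FEATURES = 30, 128
CLASSES = ["background", "presence", "reach", "swipe"]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    # Small temporal convolution stack, in the spirit of a low-power budget.
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.Conv1D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# One generic way to prepare such a model for an embedded processor.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```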

Taking Soli from Concept to Product
Soli’s integration into the Pixel smartphone was possible because the end-to-end radar system — including hardware, software, and algorithms — was carefully designed to enable touchless interaction within the size and power constraints of consumer devices. Soli’s miniature hardware allowed the full radar system to fit into the limited space in Pixel’s upper bezel, which was a significant team accomplishment. Indeed, the first Soli prototype in 2014 was the size of a desktop computer. We combined hardware innovations with our novel temporal sensing paradigm described earlier in order to shrink the entire radar system down to a single 5.0 mm x 6.5 mm RFIC, including antennas on package. The Soli team also introduced several innovative hardware power management schemes and optimized Soli’s compute cycles, enabling Motion Sense to fit within the power budget of the smartphone.

Hardware innovations included iteratively shrinking the radar system from a desktop-sized prototype to a single 5.0 mm x 6.5 mm RFIC, including antennas on package.
For integration into Pixel, the radar system team collaborated closely with product design engineers to preserve Soli signal quality. The chip placement within the phone and the z-stack of materials above the chip were optimized to maximize signal transmission through the glass and minimize reflections and occlusions from surrounding components. The team also invented custom signal processing techniques to enable coexistence with surrounding phone components. For example, a novel filter was developed to reduce the impact of audio vibration on the radar signal, enabling gesture detection while music is playing. Such algorithmic innovations enabled Motion Sense features across a variety of common user scenarios.

Vibration due to audio on Pixel 4 appearing as an artifact in Soli’s range-Doppler signal representation.
Future Directions
The successful integration of Soli into Pixel 4 and Pixel 4 XL devices demonstrates for the first time the feasibility of radar-based machine perception in an everyday mobile consumer device. Motion Sense in Pixel devices shows Soli’s potential to bring seamless context awareness and gesture recognition for explicit and implicit interaction. We are excited to continue researching and developing Soli to enable new radar-based sensing and perception capabilities.

Acknowledgments
The work described above was a collaborative effort between Google Advanced Technology and Projects (ATAP) and the Pixel and Android product teams. We particularly thank Patrick Amihood for major contributions to this blog post.

Source: Google AI Blog


Capture the night sky this Milky Way season

On a clear, dark night, you can see a faint glowing band of billions of distant stars across the sky: the Milky Way. Parts of it are visible on moonless nights throughout the year, but the brightest and most photogenic region, near the constellation Sagittarius, appears above the horizon from spring to early fall.


A few weeks ago, Sagittarius returned to the early morning sky, rising in the east just before dawn—and now is the perfect time to photograph it. Thanks to the astrophotography features in Night Sight on Pixel 4, 3a and 3, you can capture it with your phone. 


Before you head outside to catch the Milky Way, here are a few tips and tricks for taking breathtaking nighttime photos of your own.


Get out of town, and into the wilderness

The Milky Way isn’t very bright, so seeing and photographing it requires a dark night. Moonlight and light pollution from nearby cities tend to obscure it. 


To observe the brightest part of the Milky Way during spring, try to find a location with no large city to the east, and pick a night when the moon isn’t visible in the early morning hours. The nights between the new moon and about three days before the next full moon are ideal. For early 2020, this includes today (March 4) through March 6, March 24 through April 4, and April 22 through May 4. Look for the Milky Way in the hour before the first light of dawn. Of course, you’ll want to check the weather forecast to make sure that the stars won’t be hidden by clouds.


Do your research

You can use tools that track the rise and set of the sun and moon to find the best time for your photo shoot. A light pollution map helps you find places to capture the best shot, and star charts can help you find constellations and other interesting celestial objects (like the Andromeda Galaxy) so you know what you’re shooting.


Stay steady and settle in

Once you’ve found the perfect spot, open the Camera App on Pixel and switch to Night Sight, and either mount the phone on a tripod, or support it on a rock or anything else that won’t move. Whatever you do, don’t try to steady the phone with your hands. Once stable, your phone will display “Astrophotography on” in the viewfinder to confirm it will use long exposures to capture your photo (up to four minutes total on Pixel 4, or one minute on Pixel 3 and 3a, depending on how dark the environment is).


Get comfortable with the camera features

To ensure you get great photos, explore all of the different options available in the Camera App. Say, for example, you are in the middle of taking a picture, and a car’s headlights are about to appear in the frame; you can tap the shutter button to stop the exposure and keep the lights from ruining your shot. You will get a photo even if you stop early, but letting the exposure run to completion will produce a clearer image.


The viewfinder in the Google Camera App works at full-moon light levels, but in even darker environments the on-screen image may become too dim and grainy to be useful. Try this quick fix: Point the phone in what you think is the right direction, then tap the shutter button. Soon after the exposure begins, the viewfinder will show a much clearer image than before, and that image will be updated every few seconds. This allows you to check and correct which way the phone is pointing. Wait for the next update to see the effect of your corrections. Once you’re satisfied with the composition, tap the shutter button a second time to stop the exposure. Then tap the shutter button once more to start a new exposure, and let it run to completion without touching the phone.


The phone will try to focus automatically, but autofocus can fail in extremely dark scenes. For landscape shots you may just want to set focus to “far” by tapping the down arrow next to the viewfinder to access the focus options. This will make sure that anything further away than about 15 feet will be sharp.


Venus above the Pacific Ocean about one hour after sunset at Point Reyes National Seashore in California, captured on Pixel 4.


Use moonlight and twilight to your advantage 

Astrophotography mode isn’t only for taking pictures when it’s completely dark outside. It also takes impressive photos during nights with bright moonlight, or at dusk, when daylight is almost gone and the first stars have become visible.


No matter what time of day it is, spend a little extra time experimenting with the Google Camera App to find out what’s possible, and your photos will look great.



Aurora borealis near Kolari, Finland, on a night in February. Photo by Ingemar Eriksson, captured on Pixel 4.


New music controls, emoji and more features dropping for Pixel

A few months ago, Pixel owners got a few new, helpful features in our first feature drop. Beginning today, even more updates and new experiences will begin rolling out to Pixel users. 

Help when you need it

You can already use Motion Sense to skip forward or go back to a previous song. Now, if you have a Pixel 4, you can also pause and resume music with a tapping gesture above the phone. So you can easily pause music when you're having a conversation, without even picking up your phone.


When you need help the most, your Pixel will be there too. Last October we launched the Personal Safety app on Pixel 4 for US users, which uses the phone’s sensors to quickly detect if you’ve been in a severe car crash1, and checks with you to see if you need emergency services. If you need 911, you can request help via a voice command or with a single tap. Now, the feature is rolling out to Pixel users in Australia (000) and the UK (999). If you’re unresponsive, your Pixel will share relevant details, like location info, with emergency responders.



We’re also rolling out some helpful features to more Pixel devices. Now Live Caption, the technology that automatically captions media playing on your phone, will begin rolling out to Pixel 2 owners. 

More fun with photos and video 

New AR effects you can use live on your Duo video call with friends make chatting more visually stimulating. These effects change based on your facial expressions, and move with you around the screen. Duo calls now come with a whole new layer of fun. 


Selfies on Pixel 4 are getting better, too. Your front-facing camera can now create images with depth, which improves Portrait Blur and color pop, and lets you create 3D photos for Facebook.

Emoji on Pixel are now more customizable and inclusive thanks to the Emoji 12.1 update, with 169 new emoji representing a wider variety of genders and skin tones, as well as more couple combinations to better reflect the world around us.

New Inclusive Emoji 12.1 Update

A more powerful power button

Pixel is making it faster to pick the right card when using Google Pay. Just press and hold the power button to swipe through your debit and credit cards, event tickets, boarding passes or access anything else in Google Pay. This feature will be available to users in the US, UK, Canada, Australia, France, Germany, Spain, Italy, Ireland, Taiwan and Singapore. If you have Pixel 4, you can also quickly access emergency contacts and medical information. 


Getting on a flight is also getting easier. Simply take a screenshot of a boarding pass barcode and tap on the notification to add it to Google Pay. You will receive real-time flight updates, and on the day of your flight, you can just press the power button to pull up your boarding pass.  This feature will be rolling out gradually in all countries with Google Pay during March on Pixel 3, 3a and 4.

Customize your Pixel’s look and feel

A number of system-level advancements will give Pixel users more control over the look and feel of their devices.

You may know that Dark theme looks great and helps save battery power. Starting today, Dark theme gets even more helpful and flexible: you can now schedule it to switch between light and dark backgrounds based on local sunrise and sunset times.


Have you ever forgotten to silence your phone when you get to work? Pixel now lets you automatically enable certain rules based on Wi-Fi network or physical location. You can set up a rule to automatically silence your ringtone when you connect to your office Wi-Fi, or turn on Do Not Disturb when you walk in the front door of your house, so you can focus on the people and things that matter most.

Pixel 4 users are also getting some unique updates to the way they engage with the content on their phone. Improved long-press options in Pixel’s launcher give you more and faster help from your apps. There’s also an update to Adaptive brightness, which now temporarily increases screen brightness to make reading easier in extremely bright ambient lighting, like direct sunlight. Check out more options for customizing your screen.

Here’s to better selfies, more emoji and a quick pause when you need it! Check out our support page for more information on the new features, and look out for more helpful features dropping for Pixel users soon. 

 1 Not available in all languages or countries. Car crash detection may not detect all accidents. High-impact activities may trigger calls to emergency services. This feature is dependent upon network connectivity and other factors and may not be reliable for emergency communications or available in all areas. For country and language availability and more information see g.co/pixel/carcrashdetection

Source: Android


Made by Google’s 20 tips for 2020

The new year is a time for resolutions and reflection, from getting organized to creating some healthy habits. And there are more than a few ways that the tech in your home and in your pocket can help you get there. 

If you received a Made by Google device over the holidays—or you’ve owned one for a while—consider these pro tips for getting the most out of them. We’re sharing 20 fun features and tricks available across a variety of devices to try, plus expert advice for adding an extra layer of protection to your accounts across the web.

  1. Turn off distractions. With the new Focus mode, found in Pixel's device settings under "Digital Wellbeing & parental controls," you can temporarily pause and silence certain apps so you can focus on the task at hand. While you’re working out, during your commute or while you’re trying to take a moment to yourself, Focus mode gives you control over which apps you need notifications from and when.

  2. Capture one-of-a-kind photos. With Pixel, you can snap great pictures year-round using features like Portrait Mode, Photobooth and even Night Sight, which allows you to shoot photos of the stars. See g.co/pixel/astrophotography to learn more about astrophotography on Pixel 4.

  3. Outsmart robocalls. U.S.-based, English-speaking Pixel owners can use Call Screen on Pixel to automatically screen spam calls, so you can avoid calls from unknown numbers and limit interruptions throughout your day (Call Screen is new and may not detect all robocalls, but it will definitely try!).

  4. Try wall-mounting your Nest Mini. Nest Mini comes with wall mounting capabilities, which comes in handy if you’re short on counter space. Wall-mounting also helps you take advantage of its improved bass and full sound.

  5. Stress-free healthy cooking. If you’re trying to eat more fresh fruits and vegetables, don’t sweat meal planning: Get easy inspiration from Nest Hub or Nest Hub Max. Say “Hey Google, show me recipes with spinach, lentils and tomatoes” and you’ll see ideas to scroll through, select, and follow step-by-step.

  6. Stay in touch. We could all do better at keeping in touch with loved ones. Nest Hub Max offers the option to make video calls using Google Duo, so you can catch up with mom face-to-face right from your display. 

  7. Get help with delegating. Create Assignable reminders for other members of your household, like reminding your partner to walk the dog. Face Match will show them any missed reminders automatically when they approach Hub Max. You can also use reminders to send someone a note of encouragement when they need it the most (“Hey Google, remind Kathy that she’ll do great in tomorrow’s interview”).

  8. View and share your favorite photos. Enjoy your favorite moments from Google Photos on Nest Hub Max’s 10-inch high definition screen. See a photo pop up that brings a smile to your face? Share it with one of your contacts: “Hey Google, share this photo with Mom.” Or if you see an old memory and can’t remember the location, just ask “Hey Google, where was this photo taken?”

  9. Check your Wi-Fi easily. You can use a Nest Wifi point the same way you use a Google Nest speaker. Simply say, “Hey Google, what’s my internet speed?” or “Hey Google, pause Wi-Fi for Daniel” to pause individual users’ devices at certain times, like during dinner.

  10. Have a worry-free work week. The Talk and Listen feature on Nest Hello makes it easy for busy families to keep in touch throughout the day. When you see Nest Hello start recording, you can share your status with your family members who have access to Nest Hello’s camera feed. It’ll become a quick video they can view on their phones.

  11. Keep track of deliveries. Nest Hello also detects packages for Nest Aware users—helpful if you’re expecting something important. 

  12. Choose when your cameras record. You can schedule your Nest cameras to automatically turn off on the weekends and back on again during the week (or during the time frame you prefer). To do this, turn off Home/Away Assist and create your schedule.

  13. Control what you save. While your Nest Cam video history automatically expires after a specific time frame depending on your Nest Aware subscription, you can also manually delete footage anytime. Simply select the “Delete video history” option in your camera’s settings.

  14. Skip the monthly gym fee. Few things are more difficult in the dead of winter than driving to a gym first thing in the morning. Choose a more manageable routine: Pull up a workout from YouTube or Daily Burn and cast it to your TV with Chromecast, so you can sweat while the coffee is brewing.

  15. New partners, new content. Over the past few months we’ve introduced new content partners for Chromecast and displays so you have tons of movies and TV shows to choose from based on your subscriptions, including Disney+, Amazon Prime Video, Hulu and Sling TV.

  16. Attention gamers! If you own a standalone Chromecast Ultra, you can play Stadia on it if you have an existing Stadia account. Link your Stadia controller to your Chromecast Ultra and you’re ready to go. For best results, connect an Ethernet cable to your Chromecast Ultra.

  17. Save on your energy bill. On your Nest Thermostat, seeing the Nest Leaf is an easy way to know you’re saving energy, and it encourages you to continually improve your savings over time. You’ll see the Leaf on your thermostat when you set a temperature that helps save energy. The more often you see a Leaf, the more you save.

  18. Enable 2-factor authentication, or migrate to a Google account. 2-factor authentication uses a secondary confirmation to make it harder for unauthorized people to access your account. Migrating to a Google account provides automatic security protections, proactive alerts about suspicious account activity and the security checkup.

  19. Give your passwords a makeover. Repeating passwords makes your accounts more vulnerable to common hacks, so make sure each password you use is unique and complicated.

  20. Enlist extra protection from Chrome. When you type your credentials into a website, Chrome will now warn you if your username and password have been compromised in a data breach on some site or app. It will suggest that you change them everywhere they were used.

Cheers to a new decade—and some new gear! 

Ask a Techspert: How does motion sensing work?

Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party. 

Thanks to my allergies, I’ve never had a cat. They’re cute and cuddly for about five minutes—until the sneezing and itching set in. Still, I’m familiar enough with cats (and cat GIFs) to know that they always have a paw in the air, whether it’s batting at a toy or trying to get your attention. Whatever it is they’re trying to do, it often looks like they’re waving at us. So imagine my concern when I found out that you can now change songs, snooze alarms or silence your phone ringing on your Pixel 4 with the simple act of waving. What if precocious cats everywhere started unintentionally making us sleep late by waving their paws?

Fortunately, that’s not a problem. Google’s motion sensing radar technology—a feature called Motion Sense in the Pixel 4—is designed so that only human hands, as opposed to cat paws, can change the tracks on your favorite playlist. So how does this motion sensing actually work, and how did Google engineers design it to identify specific motions? 

To answer my questions, I found our resident expert on motion sensors, Brandon Barbello. Brandon is a product manager on our hardware team and he helped me unlock the mystery behind the motion sensors on your phone, and how they only work for humans. 

When I’m waving my hand in front of my screen, how can my phone sense something is there? 

Brandon tells me that your Pixel phone has a chip at the top with a series of antennas, some of which emit a radio signal while others receive “bounce backs” of that same signal. “Those radio signals go out into the world, and then they hit things and bounce back. The receiver antennas read the signals as they bounce back and that’s how they’re able to sense something has happened. Your Pixel actually has four antennas: one that sends out signals, and three that receive.”
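As a back-of-the-envelope illustration of how a “bounce back” encodes distance: the round-trip travel time of the reflection maps directly to range. (This is a sketch of the basic physics; an FMCW radar like Soli actually infers range from a frequency shift rather than by timing a pulse.)

```python
C = 3e8  # speed of light, m/s

def range_from_round_trip(t_seconds):
    # The signal travels out and back, so divide the round trip by two.
    return C * t_seconds / 2

# A hand about 15 cm away returns the signal after roughly a nanosecond.
print(range_from_round_trip(1e-9))  # ~0.15 m
```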

What happens after the antenna picks up the motion? 

According to Brandon, when the radio waves bounce back, the computer in your phone begins to process the information. “Essentially, the sensor picks up that you’re around, and that triggers your phone to keep an eye out for the relevant gestures,” he says.

How does the Pixel detect that a motion is a swipe and not something else? 

With the motion sensing functions on the Pixel, Brandon and his team use machine learning to determine what happened. “Those radio waves get analyzed and reduced into a series of numbers that can be fed into the machine learning models that detect if a reach or a swipe has just happened,” Brandon says. “We collected millions of motion samples to pre-train each phone to recognize intentional swipes. Specifically, we’ve trained the models to detect motions that look like they come from a human hand, and not, for instance, a coffee mug passing over the phone as you put it down on the table.”

What will motion sensors be capable of in the future? 

Brandon told me that he and his team plan to add more gestures to recognize beyond swiping, and that specific movements could be connected to more apps. “In the future, we want to create devices that can understand your body language, so they’re more intuitive to use and more helpful,” he tells me. 

At the moment, motion-sensing technology is focused on the practical, and there are still improvements to be made and new ground to cover, but he says this technology can also be delightful and fun, like the Pixel’s gesture-controlled Pokémon Live Wallpaper. Overall, motion sensing technology helps you use your devices in a whole new way, and that will keep changing as the tech advances. "We're just beginning to see the potential of motion sensing," Brandon says.

Capture your holiday on Pixel 4

Who doesn’t love holiday photos? Luckily for Pixel 4 owners, the camera on your phone is packed with all of the features you need to get the perfect picture every time, year round.

All is calm, all photos are bright

Holiday decorations and lights can make it difficult to capture that perfectly lit photo. Pixel 4’s Dual Exposure Controls ensure that no matter how decked your halls are, you always get the great photo you want by giving you control over the lighting, silhouettes and exposure in your shots.



Celebrate the festival of lights no matter how dark it is thanks to Night Sight on Pixel 4. As your family gathers around the menorah, snap a great picture of your loved ones’ faces lit by candle light by using the low-light photography mode on Pixel. And for those staying up and waiting for Santa, use Night Sight to capture the stockings hung by the chimney with care even as the fire dwindles. 


Ring in the New Year in new ways

Looking for an evening activity once the presents are unwrapped? Dec. 25 is also the start of a new moon, making it the best night for a photo of the sky. And if you’re planning to ring in the new year under the stars, Pixel 4 is the best companion, with the ability to capture astrophotography on Night Sight.


Bust out those old New Year’s Eve or family holiday photos for the perfect throwback holiday season pic, and use Portrait Blur, now available on Pixel devices in Google Photos, to give them new life, even years after they’ve been taken.



Whether you’re gathered around the dining room table, the menorah or Christmas tree, or watching the ball drop in Times Square, your Pixel’s camera is the perfect companion.


Happy holiday photos to all and to all a good night! 


Improvements to Portrait Mode on the Google Pixel 4 and Pixel 4 XL



Portrait Mode on Pixel phones is a camera feature that allows anyone to take professional-looking shallow depth of field images. Launched on the Pixel 2 and then improved on the Pixel 3 by using machine learning to estimate depth from the camera’s dual-pixel auto-focus system, Portrait Mode draws the viewer’s attention to the subject by blurring out the background. A critical component of this process is knowing how far objects are from the camera, i.e., the depth, so that we know what to keep sharp and what to blur.

With the Pixel 4, we have made two more big improvements to this feature, leveraging both the Pixel 4’s dual cameras and dual-pixel auto-focus system to improve depth estimation, allowing users to take great-looking Portrait Mode shots at near and far distances. We have also improved our bokeh, making it more closely match that of a professional SLR camera.
Pixel 4’s Portrait Mode allows for Portrait Shots at both near and far distances and has SLR-like background blur. (Photos Credit: Alain Saal-Dalma and Mike Milne)
A Short Recap
The Pixel 2 and 3 used the camera’s dual-pixel auto-focus system to estimate depth. Dual-pixels work by splitting every pixel in half, such that each half pixel sees a different half of the main lens’ aperture. By reading out each of these half-pixel images separately, you get two slightly different views of the scene. While these views come from a single camera with one lens, it is as if they originate from a virtual pair of cameras placed on either side of the main lens’ aperture. When you alternate between these views, the subject stays in the same place while the background appears to move vertically.
The dual-pixel views of the bulb have much more parallax than the views of the man because the bulb is much closer to the camera.
This motion is called parallax, and its magnitude depends on depth. One can estimate parallax, and thus depth, by finding corresponding pixels between the views. Because parallax decreases with object distance, it is easier to estimate depth for near objects like the bulb. Parallax also depends on the length of the stereo baseline, that is, the distance between the cameras (or the virtual cameras in the case of dual-pixels). The dual-pixels’ viewpoints have a baseline of less than 1 mm because they are contained inside a single camera’s lens, which is why it’s hard to estimate the depth of far scenes with them and why the two views of the man look almost identical.

Dual Cameras are Complementary to Dual-Pixels
The Pixel 4’s wide and telephoto cameras are 13 mm apart, much greater than the dual-pixel baseline, and so the larger parallax makes it easier to estimate the depth of far objects. In the images below, the parallax between the dual-pixel views is barely visible, while it is obvious between the dual-camera views.
Left: Dual-pixel views. Right: Dual-camera views. The dual-pixel views have only a subtle vertical parallax in the background, while the dual-camera views have much greater horizontal parallax. While this makes it easier to estimate depth in the background, some pixels to the man’s right are visible in only the primary camera’s view, making it difficult to estimate depth there.
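A quick worked comparison shows why the wider baseline helps. For an idealized pinhole stereo pair, disparity (parallax measured in pixels) scales with the baseline and inversely with depth; the focal length below is an illustrative assumption, not the Pixel 4’s actual value.

```python
# Disparity for an ideal pinhole stereo pair: d = f * B / Z,
# with focal length f in pixels, baseline B and depth Z in metres.
F_PIXELS = 3000.0  # assumed focal length, for illustration only

def disparity_px(baseline_m, depth_m, f_px=F_PIXELS):
    return f_px * baseline_m / depth_m

for depth in (0.5, 2.0):
    dp = disparity_px(0.001, depth)  # dual-pixel baseline: < 1 mm
    dc = disparity_px(0.013, depth)  # wide/telephoto baseline: 13 mm
    print(f"depth {depth:.1f} m: dual-pixel ~{dp:.1f} px, dual-camera ~{dc:.1f} px")
```

At 2 m, the dual-pixel views shift by only a pixel or two while the dual-camera views shift by tens of pixels, which is why the dual cameras make far-scene depth so much easier to estimate.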
Even with dual cameras, information gathered by the dual pixels is still useful. The larger the baseline, the more pixels that are visible in one view without a corresponding pixel in the other. For example, the background pixels immediately to the man’s right in the primary camera’s image have no corresponding pixel in the secondary camera’s image. Thus, it is not possible to measure the parallax to estimate the depth for these pixels when using only dual cameras. However, these pixels can still be seen by the dual pixel views, enabling a better estimate of depth in these regions.

Another reason to use both inputs is the aperture problem, described in our previous blog post, which makes it hard to estimate the depth of vertical lines when the stereo baseline is also vertical (or when both are horizontal). On the Pixel 4, the dual-pixel and dual-camera baselines are perpendicular, allowing us to estimate depth for lines of any orientation.

Having this complementary information allows us to estimate the depth of far objects and reduce depth errors for all scenes.

Depth from Dual Cameras and Dual-Pixels
We showed last year how machine learning can be used to estimate depth from dual-pixels. With Portrait Mode on the Pixel 4, we extended this approach to estimate depth from both dual-pixels and dual cameras, using TensorFlow to train a convolutional neural network. The network first separately processes the dual-pixel and dual-camera inputs using two different encoders, a type of neural network that encodes the input into an intermediate representation. Then, a single decoder uses both intermediate representations to compute depth.
Our network to predict depth from dual-pixels and dual cameras. The network uses two encoders, one for each input, and a shared decoder with skip connections and residual blocks.
To force the model to use both inputs, we applied a drop-out technique, where one input is randomly set to zero during training. This teaches the model to work well if one input is unavailable, which could happen if, for example, the subject is too close for the secondary telephoto camera to focus on.
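Below is a minimal Keras sketch of this two-encoder, shared-decoder arrangement with per-input dropout. Input shapes, filter counts and the dropout mechanism are illustrative assumptions; the post specifies only the overall structure, and the real decoder also uses skip connections and residual blocks, which are omitted here for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

H, W = 128, 96  # assumed input resolution, for illustration only

def encoder(name):
    # Each input is a two-view stack (e.g. the two half-pixel or two camera views).
    inp = layers.Input(shape=(H, W, 2), name=name)
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
    return inp, x

dp_in, dp_feat = encoder("dual_pixel")
dc_in, dc_feat = encoder("dual_camera")

# Per-sample dropout over an entire encoding approximates "randomly set one
# input to zero during training" (Dropout also rescales by 1/(1 - rate)).
dp_feat = layers.Dropout(0.25, noise_shape=(None, 1, 1, 1))(dp_feat)
dc_feat = layers.Dropout(0.25, noise_shape=(None, 1, 1, 1))(dc_feat)

# A single shared decoder fuses both intermediate representations into depth.
x = layers.Concatenate()([dp_feat, dc_feat])
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
depth = layers.Conv2D(1, 3, padding="same", name="depth")(x)

model = tf.keras.Model(inputs=[dp_in, dc_in], outputs=depth)
model.summary()
```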
Depth maps from our network where either only one input is provided or both are provided. Top: The two inputs provide depth information for lines in different directions. Bottom: Dual-pixels provide better depth in the regions visible in only one camera, emphasized in the insets. Dual-cameras provide better depth in the background and ground. (Photo Credit: Mike Milne)
The lantern image above shows how having both signals solves the aperture problem. Having one input only allows us to predict depth accurately for lines in one direction (horizontal for dual-pixels and vertical for dual-cameras). With both signals, we can recover the depth on lines in all directions.

With the image of the person, dual-pixels provide better depth information in the occluded regions between the arm and torso, while the large baseline dual cameras provide better depth information in the background and on the ground. This is most noticeable in the upper-left and lower-right corner of depth from dual-pixels. You can find more examples here.

SLR-Like Bokeh
Photographers obsess over the look of the blurred background or bokeh of shallow depth of field images. One of the most noticeable things about high-quality SLR bokeh is that small background highlights turn into bright disks when defocused. Defocusing spreads the light from these highlights into a disk. However, the original highlight is so bright that even when its light is spread into a disk, the disk remains at the bright end of the SLR’s tonal range.
Left: SLRs produce high contrast bokeh disks. Middle: It is hard to make out the disks in our old background blur. Right: Our new bokeh is closer to that of an SLR.
To reproduce this bokeh effect, we replaced each pixel in the original image with a translucent disk whose size is based on depth. In the past, this blurring process was performed after tone mapping, the process by which raw sensor data is converted to an image viewable on a phone screen. Tone mapping compresses the dynamic range of the data, making shadows brighter relative to highlights. Unfortunately, this also results in a loss of information about how bright objects actually were in the scene, making it difficult to produce nice high-contrast bokeh disks. Instead, the bokeh blends in with the background, and does not appear as natural as that from an SLR.

The solution to this problem is to blur the merged raw image produced by HDR+ and then apply tone mapping. In addition to the brighter and more obvious bokeh disks, the background is saturated in the same way as the foreground. Here’s an album showcasing the better blur, which is available on the Pixel 4 and the rear camera of the Pixel 3 and 3a (assuming you have upgraded to version 7.2 of the Google Camera app).
Blurring before tone mapping improves the look of the background by making it more saturated and by making the bokeh disks higher contrast.
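A one-dimensional toy example makes the ordering argument concrete: blurring in linear, raw-like space keeps a small, very bright highlight near the top of the tonal range after tone mapping, while blurring the already tone-mapped image washes it out. The simple gamma curve and scene values below are illustrative stand-ins, not HDR+.

```python
import numpy as np

# Toy 1-D "scene" in linear units: a dim background with one very bright highlight.
scene = np.full(101, 0.02)
scene[50] = 50.0

def tone_map(x):
    # A crude clip-plus-gamma curve stands in for real tone mapping.
    return np.clip(x, 0.0, 1.0) ** (1 / 2.2)

def bokeh_blur(x, radius=5):
    # A flat kernel is the 1-D analogue of spreading light into a disk.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, kernel, mode="same")

old_order = bokeh_blur(tone_map(scene))  # blur the display-ready image
new_order = tone_map(bokeh_blur(scene))  # blur the raw image, then tone map

print("highlight peak, blur after tone mapping: ", round(old_order.max(), 2))  # dim disk
print("highlight peak, blur before tone mapping:", round(new_order.max(), 2))  # stays bright
```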
Try it Yourself
We have made Portrait Mode on the Pixel 4 better by improving depth quality, resulting in fewer errors in the final image and by improving the look of the blurred background. Depth from dual-cameras and dual-pixels only kicks in when the camera is at least 20 cm from the subject, i.e. the minimum focus distance of the secondary telephoto camera. So consider keeping your phone at least that far from the subject to get better quality portrait shots.

Acknowledgments
This work wouldn’t have been possible without Rahul Garg, Sergio Orts Escolano, Sean Fanello, Christian Haene, Shahram Izadi, David Jacobs, Alexander Schiffhauer, Yael Pritch Knaan and Marc Levoy. We would also like to thank the Google Camera team for helping to integrate these algorithms into the Pixel 4. Special thanks to our photographers Mike Milne, Andy Radin, Alain Saal-Dalma, and Alvin Li who took numerous test photographs for us.

Source: Google AI Blog


Let Google be your holiday travel tour guide

When it comes to travel, I’m a planner. I’m content to spend weeks preparing the perfect holiday getaway: deciding on the ideal destination, finding the cheapest flights and sniffing out the best accommodations. I’ve been dreaming about a trip to Greece next year, and—true story—I’ve already got a spreadsheet to compare potential destinations, organized by flight length and hotel perks.

But the thing I don’t like to do is plot out the nitty-gritty details. I want to visit the important museums and landmarks, but I don’t want to write up a daily itinerary ahead of time. I’m a vegetarian, so I need to find veggie-friendly restaurants, but I’d prefer to stumble upon a good local spot than plan in advance. And, since I don’t speak Greek, I want to be able to navigate transportation options without having to stop and ask people for help all the time.

So I’ve come to rely on some useful Google tools to make my trips work for the way I like to travel. Here’s what I’ve learned so far.

Let Maps do the talking

Getting dropped into a new city is disorienting, and all the more so when you need to ask for help but don’t know how to pronounce the name of the place you’re trying to get to. Google Maps now has a fix for this: When you’ve got a place name up in Maps, just press the new little speaker button next to it, and it will speak out the place’s name and address in the local lingo. And if you want to continue the conversation, Google Maps will quickly link you to the Google Translate app.

gif of Google Translate feature in Google Maps

Let your phone be your guidebook

New cities are full of new buildings, new foods and even new foliage. But I don’t want to just see these things; I want to learn more about them. That’s where Google Lens comes in as my know-it-all tour guide and interpreter. It can translate a menu, tell me about the landmark I’m standing in front of or identify a tree I’ve never seen before. So whenever I think, “I wonder what that building is for,” I can just use my camera to get an answer in real time. 

using Google Lens to identify a flower

Photo credit: Joao Nogueira

Get translation help on the go

The Google Assistant’s real-time translation feature, interpreter mode, is now available on Android and iOS phones worldwide, enabling you to have a conversation with someone speaking a foreign language. So if I say, “Hey Google, be my Greek translator,” I can easily communicate with, say, a restaurant server who doesn’t speak English. Interpreter mode works across 44 languages, and it features different ways to communicate suited to your situation: you can type using a keyboard for quiet environments, or manually select what language to speak.

gif of Google Assistant interpreter mode

Use your voice to get things done

Typing is fine, but talking is easier, especially when I’m on vacation and want to make everything as simple as possible. The Google Assistant makes it faster to find what I’m looking for and plan what’s next, like weather forecasts, reminders and wake-up alarms. It can also help me with conversions, like “Hey Google, how much is 20 Euros in pounds?”

Using Google Assistant to answer questions

Photo credit: Joao Nogueira

Take pics, then chill

When I’m in a new place, my camera is always out. But sorting through all those pictures is the opposite of relaxing. So I offload that work onto Google Photos: It backs up my photos for free and lets me search for things in them. And when I want to see all the photos my partner has taken, I can create an album that we can both add photos to. And Photos will remind me of our vacation in the future, too, with story-style highlights at the top of the app.

photo of leafy old town street

Photo credit: Joao Nogueira

Look up

I live in a big city, which means I don’t get to see the stars much. Traveling somewhere a little less built up means I can hone my Pixel 4 astrophotography skills. It’s easy to use something stable, like a wall, as a makeshift tripod, and then just let the camera do its thing.

a stone tower at night with a starry sky in the background

Photo credit: DDay

Vacation unplugged

As useful as my phone is, I try to be mindful about putting it down and ignoring it as much as I can. And that goes double for when I’m on vacation. Android phones have a whole assortment of Digital Wellbeing features to help you disconnect. My favorite is definitely flip to shhh: Just place your phone screen-side down and it silences notifications until you pick it back up.

someone sitting on a boat at sunset watching the shoreline

Photo credit: Joao Nogueira

Source: Google LatLong


Interpreter mode brings real-time translation to your phone

You’ve booked your flights, found the perfect hotel and mapped out all of the must-see local attractions. Only one slight issue—you weren’t able to brush up on a new foreign language in time for your trip. The Google Assistant is here to help.


Travelers already turn to the Assistant for help researching and checking into flights, finding local restaurant recommendations and more. To give you even more help during your trip, the Assistant’s real-time translation feature, interpreter mode, is starting to roll out today on Assistant-enabled Android and iOS phones worldwide. Using your phone, you can have a back and forth conversation with someone speaking a foreign language.


To get started, just say “Hey Google, be my German translator” or “Hey Google, help me speak Spanish” and you’ll see and hear the translated conversation on your phone. After each translation, the Assistant may present Smart Replies, giving you suggestions that let you quickly respond without speaking—which can make your conversations faster and even more seamless.

Interpreter mode helps you translate across 44 languages, and since it’s integrated with the Assistant, it’s already on your Android phone. To access it on iOS, simply download the latest Google Assistant app. Interpreter mode also features different ways to communicate suited to your situation: you can type using a keyboard for quiet environments, or manually select what language to speak.


Whether you’re heading on a trip this holiday season, gearing up for international travel in the New Year, or simply want to communicate with family members who speak another language, interpreter mode is here to remove language barriers no matter where you are.


Gute Reise! Translation: “Enjoy your trip!”
