Tag Archives: Pixel

New Pixels—and new prices—are here


Last year, Pixel 3a gave people a chance to get the helpful features of Pixel at a more affordable price. This year, Pixel 4a, along with the first 5G-enabled Pixels, Pixel 4a (5G) and Pixel 5 (coming to Australia this spring), will continue to bring the features people love, like an incredible camera and feature drops that make your phone better over time, packaged in sleek new hardware at more affordable prices.
Meet Pixel 4a: The “everything you love about Google” phone 
Want to charge less often, take professional-looking photos and enjoy enterprise-grade security, all without breaking the bank? The Pixel 4a, available for $599, has your name on it.
Same great Pixel camera, new lower price 
With the same incredible camera experience as Pixel 4 and a new hole-punch display design, Pixel 4a brings the features that have helped millions of Pixel owners take great shots. HDR+ with dual exposure controls, Portrait Mode, Top Shot, Night Sight with Astrophotography capabilities and fused video stabilisation are all there.

Sleek design
The Pixel 4a comes in Just Black with a 5.8-inch OLED display. It has a matte finish that feels secure and comfortable in your hand and includes Pixel’s signature colour pop power button in mint. Check out the custom wallpapers that have some fun with the punch-hole camera.
Help for those who need it 
In addition to features like Recorder, which now connects with Google Docs to seamlessly save and share transcriptions and recordings (English only), Pixel 4a will include helpful experiences like the Personal Safety app, which can provide real-time emergency notifications and car crash detection when turned on. Learn more about car crash detection.
Pixel 4a also has Live Caption, which provides real-time captioning (English only) for your video and audio content. New with the Pixel 4a launch—and also rolling out for Pixel 2, 3, 3a and 4 phones—Live Caption will now automatically caption your voice and video calls.

Google Assistant in more languages 
Introduced last year, the new Google Assistant is also available on Pixel 4a to help with multitasking across apps and getting things done quickly, like finding a photo or sending a text. You can now try out the new experience in Italian, German, French and Spanish, in addition to English, with more languages coming soon. Learn more at g.co/pixelassistant/languages.

Pre-order Pixel 4a now 
The Pixel 4a has a Qualcomm® Snapdragon™ 730G Mobile Platform, Titan M security module for on-device security, 6GB of RAM and 128GB of storage with an even bigger battery that lasts all day*. New Pixel 4a fabric cases will also be available in three colours.
Pixel 4a users can enjoy entertainment, games, apps and extra storage with three-month free trials of YouTube Premium, Google Play Pass and Google One for new users. Learn more at g.co/pixel/4aoffers.
Pixel 4a is now available for pre-order in Australia on the Google Store and at JB Hi-Fi, Vodafone and Harvey Norman. It will be on sale online from September 10 at those partners and Officeworks, and in store from mid-October. For more information on availability, head to the Google Store.
Sneak peek at Pixel 4a (5G) and Pixel 5 
This spring, we’ll have two more devices to talk about: the Pixel 4a (5G) and Pixel 5, starting from $799, both with 5G* to make streaming videos, downloading content and playing games faster and smoother than ever. Pixel 4a (5G) and Pixel 5 will be available in Australia, the U.S., Canada, the United Kingdom, Ireland, France, Germany, Japan and Taiwan. In the coming months, we’ll share more about these devices and our approach to 5G. In the meantime, be sure to sign up to be the first to hear more.



*Approximate battery life based on a mix of talk, data, standby, and use of other features, with always on display off. An active display and other usage factors will decrease battery life. Pixel 4a battery testing conducted in Mountain View, California in early 2020 on pre-production hardware and software. Actual results may vary.
*Requires a 5G data plan (sold separately). 5G service not available on all carrier networks or in all areas. Contact carrier for details. 5G service, speed and performance depend on many factors including, but not limited to, carrier network capabilities, device configuration and capabilities, network traffic, location, signal strength and signal obstruction. Actual results may vary. Some features not available in all areas. Data rates may apply. See g.co/pixel/networkinfo for info.

A look at art in isolation captured on Pixel

Every industry has been affected by COVID-19, and the art world is no exception. Content creation requires a new level of imagination as many artists figure out how to approach their work within the confines of shelter in place.

Google Pixel’s Creator Labs program, an incubator for photographers and directors launched in Q4 2019, faced these new challenges as well. But the program’s simplicity actually aided the artists. Because Pixel was their primary tool, Creator Labs artists were able to explore ideas that came to them in quarantine, through an unfiltered lens. Given Pixel features like 4K video, Portrait Mode and HDR+, no complicated camera setups or highly produced shoots were necessary. 

Many flipped the camera on themselves, exploring the fluid dynamic between artist and muse. Myles Loftin, an artist based in New York who focuses on themes including identity and marginalized people in his work, dug deeper into exploring the importance of intimacy right now. “Taking self portraits has been one of the main things that has helped me pass the time during the last few months.  I feel like being indoors for so long I've been so much more in tune with myself and my body,” Myles says. “The Pixel makes it easy for me to set up really quickly and take self portraits whenever I want.”


Photo by Myles Loftin

Another artist, who goes by the alias Glassface, took a look at the tension of our new virtual work lives.  “Nothing kills creativity like fear or depression. And often, nothing helps heal and reshape our mental health like creativity itself,” he explains. “Isolation is a tough pill to swallow, but often it breeds incredible work.”


An excerpt from Glassface's work. 

Other artists featured in the project include Mayan Toledano, June Canedo, Joshua Kissi, Tim Kellner, Andrew Thomas Huang and Anthony Prince Leslie. While quarantine certainly changed how they worked, it also inspired them to investigate this era from a new lens. Anthony perhaps best articulated what the process was like: “Work during quarantine has really changed my perspective. I now remember what it feels like to be present—moving at a pace where there is no peripheral blur on my tunnel vision. As a director, I’m inspired by people and their connections to each other. ” 


You can discover more Pixel-made art, including the work of several Pixel Creator Labs artists, on our Pixel Instagram page.

Sensing Force-Based Gestures on the Pixel 4



Touch input has traditionally focussed on two-dimensional finger pointing. Beyond tapping and swiping gestures, long pressing has been the main alternative path for interaction. However, a long press is sensed with a time-based threshold where a user’s finger must remain stationary for 400–500 ms. By its nature, a time-based threshold has negative effects for usability and discoverability as the lack of immediate feedback disconnects the user’s action from the system’s response. Fortunately, fingers are dynamic input devices that can express more than just location: when a user touches a surface, their finger can also express some level of force, which can be used as an alternative to a time-based threshold.

While a variety of force-based interactions have been pursued, sensing touch force requires dedicated hardware sensors that are expensive to design and integrate. Further, research indicates that touch force is difficult for people to control, and so most practical force-based interactions focus on discrete levels of force (e.g., a soft vs. firm touch) — which do not require the full capabilities of a hardware force sensor.

For a recent update to the Pixel 4, we developed a method for sensing force gestures that allowed us to deliver a more expressive touch interaction experience. By studying how the human finger interacts with touch sensors, we designed the experience to complement and support the long-press interactions that apps already have, but with a more natural gesture. In this post we describe the core principles of touch sensing and finger interaction, how we designed a machine learning algorithm to recognise press gestures from touch sensor data, and how we integrated it into the user experience for Pixel devices.

Touch Sensor Technology and Finger Biomechanics
A capacitive touch sensor is constructed from two conductive electrodes (a drive electrode and a sense electrode) that are separated by a non-conductive dielectric (e.g., glass). The two electrodes form a tiny capacitor (a cell) that can hold some charge. When a finger (or another conductive object) approaches this cell, it ‘steals’ some of the charge, which can be measured as a drop in capacitance. Importantly, the finger doesn’t have to come into contact with the electrodes (which are protected under another layer of glass) as the amount of charge stolen is inversely proportional to the distance between the finger and the electrodes.
Left: A finger interacts with a touch sensor cell by ‘stealing’ charge from the projected field around two electrodes. Right: A capacitive touch sensor is constructed from rows and columns of electrodes, separated by a dielectric. The electrodes overlap at cells, where capacitance is measured.
The cells are arranged as a matrix over the display of a device, but with a much lower density than the display pixels. For instance, the Pixel 4 has a 2280 × 1080 pixel display, but a 32 × 15 cell touch sensor. When scanned at a high rate (at least 120 Hz), readings from these cells form a video of the finger’s interaction.
Slowed touch sensor recordings of a user tapping (left), pressing (middle), and scrolling (right).
Capacitive touch sensors don’t respond to changes in force per se, but are tuned to be highly sensitive to changes in distance within a couple of millimeters above the display. That is, a finger contact on the display glass should saturate the sensor near its centre, but will retain a high dynamic range around the perimeter of the finger’s contact (where the finger curls up).

When a user’s finger presses against a surface, its soft tissue deforms and spreads out. The nature of this spread depends on the size and shape of the user’s finger, and its angle to the screen. At a high level, we can observe a couple of key features in this spread (shown in the figures): it is asymmetric around the initial contact point, and the overall centre of mass shifts along the axis of the finger. This is also a dynamic change that occurs over some period of time, which differentiates it from contacts that have a long duration or a large area.
Touch sensor signals are saturated around the centre of the finger’s contact, but fall off at the edges. This allows us to sense small deformations in the finger’s contact shape caused by changes in the finger’s force.
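
To make that observation concrete, here is a minimal sketch (not production code) of one way to track a contact's centre of mass across successive sensor frames, assuming each frame arrives as a 2D array of baseline-subtracted, normalised capacitance readings; the threshold value is an arbitrary illustration:

```python
import numpy as np

def contact_centroid(frame, threshold=0.1):
    """Centre of mass of a single touch-sensor frame.

    `frame` is assumed to be a 2D array of baseline-subtracted
    capacitance readings (rows x columns of sensor cells), normalised so
    0 is no contact and 1 is saturation. Cells below `threshold` are
    treated as background.
    """
    mask = frame > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    weights = frame[ys, xs]
    return np.array([np.average(ys, weights=weights),
                     np.average(xs, weights=weights)])

def centroid_shift(frames):
    """Displacement of the contact's centre of mass over a sequence of
    frames, relative to the first frame with a detected contact. A press
    shows up as a centroid that drifts along the finger's axis while the
    contact spreads; a stationary long press does not."""
    start, shifts = None, []
    for frame in frames:
        c = contact_centroid(frame)
        if c is None:
            continue
        if start is None:
            start = c
        shifts.append(float(np.linalg.norm(c - start)))
    return shifts
```
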
However, the differences between users (and fingers) make it difficult to encode these observations with heuristic rules. We therefore designed a machine learning solution that would allow us to learn these features and their variances directly from user interaction samples.

Machine Learning for Touch Interaction
We approached the analysis of these touch signals as a gesture classification problem. That is, rather than trying to predict an abstract parameter, such as force or contact spread, we wanted to sense a press gesture — as if engaging a button or a switch. This allowed us to connect the classification to a well-defined user experience, and allowed users to perform the gesture during training at a comfortable force and posture.

Any classification model we designed had to operate within users’ high expectations for touch experiences. In particular, touch interaction is extremely latency-sensitive and demands real-time feedback. Users expect applications to be responsive to their finger movements as they make them, and application developers expect the system to deliver timely information about the gestures a user is performing. This means that classification of a press gesture needs to occur in real-time, and be able to trigger an interaction at the moment the finger’s force reaches its apex.

We therefore designed a neural network that combined convolutional (CNN) and recurrent (RNN) components. The CNN could attend to the spatial features we observed in the signal, while the RNN could attend to their temporal development. The RNN also helps provide a consistent runtime experience: each frame is processed by the network as it is received from the touch sensor, and the RNN state vectors are preserved between frames (rather than processing them in batches). The network was intentionally kept simple to minimise on-device inference costs when running concurrently with other applications (taking approximately 50 µs of processing per frame and less than 1 MB of memory using TensorFlow Lite).
An overview of the classification model’s architecture.
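
The exact layer configuration isn't published, but a minimal Keras sketch of the general shape (a small CNN applied to each frame, feeding a GRU cell whose state is carried between frames) might look like the following. The 32 x 15 frame shape matches the sensor grid mentioned earlier, while the layer sizes and the two-class output are illustrative assumptions:

```python
import tensorflow as tf

FRAME_SHAPE = (32, 15, 1)   # illustrative: one channel per sensor cell
NUM_CLASSES = 2             # illustrative: press vs. everything else

# Spatial (CNN) feature extractor applied to every incoming frame.
frame_encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=FRAME_SHAPE),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

# Temporal (RNN) component: a GRU cell whose state is carried between
# frames, so each frame is classified as soon as it arrives rather than
# being processed in batches.
gru_cell = tf.keras.layers.GRUCell(32)
classifier = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")

def init_state(batch_size=1):
    # GRU state is a single hidden vector; start from zeros.
    return [tf.zeros([batch_size, 32])]

def process_frame(frame, state):
    """Run one sensor frame through the model; returns per-class
    probabilities and the updated RNN state to carry to the next frame."""
    features = frame_encoder(frame[tf.newaxis, ...])   # (1, feature_dim)
    output, state = gru_cell(features, state)
    return classifier(output), state
```
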
The model was trained on a dataset of press gestures and other common touch interactions (tapping, scrolling, dragging, and long-pressing without force). As the model would be evaluated after each frame, we designed a loss function that temporally shaped the label probability distribution of each sample, and applied a time-increasing weight to errors. This ensured that the output probabilities were temporally smooth and converged towards the correct gesture classification.
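
The precise label shaping and weighting aren't spelled out in the post; one plausible, illustrative form is per-frame cross-entropy against temporally smoothed soft labels, with errors later in the sample weighted more heavily:

```python
import tensorflow as tf

def temporally_weighted_loss(y_true, y_pred, ramp=0.1):
    """Per-frame cross-entropy with a weight that grows over time.

    y_true: (batch, time, classes) soft labels, assumed to ramp smoothly
    from the "background" class to the gesture class over the sample.
    y_pred: (batch, time, classes) per-frame output probabilities.
    The linear ramp and 0.1 slope are illustrative choices.
    """
    frame_ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)  # (batch, time)
    num_frames = tf.shape(frame_ce)[1]
    # Later frames (closer to the force apex) count more than early,
    # ambiguous frames, pushing the output to converge on the right class.
    weights = 1.0 + ramp * tf.cast(tf.range(num_frames), tf.float32)
    return tf.reduce_mean(frame_ce * weights)
```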

User Experience Integration
Our UX research found that it was hard for users to discover force-based interactions, and that users frequently confused a force press with a long press because of the difficulty in coordinating the amount of force they were applying with the duration of their contact. Rather than creating a new interaction modality based on force, we therefore focussed on improving the user experience of long press interactions by accelerating them with force in a unified press gesture. A press gesture has the same outcome as a long press gesture, whose time threshold remains effective, but provides a stronger connection between the outcome and the user’s action when force is used.
A user long pressing (left) and firmly pressing (right) on a launcher icon.
This also means that users can take advantage of this gesture without developers needing to update their apps. Applications that use Android’s GestureDetector or View APIs will automatically get these press signals through their existing long-press handlers. Developers that implement custom long-press detection logic can receive these press signals through the MotionEvent classification API introduced in Android Q.

Through this integration of machine-learning algorithms and careful interaction design, we were able to deliver a more expressive touch experience for Pixel users. We plan to continue researching and developing these capabilities to refine the touch experience on Pixel, and explore new forms of touch interaction.

Acknowledgements
This project is a collaborative effort between the Android UX, Pixel software, and Android framework teams.

Source: Google AI Blog


New Pixel features for better sleep and personal safety

Whether you’re trying to extend your battery life or find ways to disconnect each night, Pixel’s latest features make it easier than ever to get the most out of your phone. And with the latest updates to the Personal Safety app, your Pixel is giving you more options to help keep you safe in an emergency.  


Adaptive Battery improvements

Adaptive Battery already learns your favorite apps and reduces power to the ones you rarely use. Now, Adaptive Battery on Pixel 2 and newer devices can predict when your battery will run out and further reduce background activity to keep your Pixel powered longer.


Bedtime made better

The new bedtime feature in Clock helps you maintain a consistent sleep schedule and strike a better balance with your screen time each night. Fall asleep to calming sounds and limit interruptions while you sleep — and if you stay up on your phone past bedtime, you'll get a snapshot of how much time you’re spending awake and on which apps. Each morning, you can wake up with your favorite track or with a gradually brighter screen with Sunrise Alarm.


Recorder, Docs and the new Google Assistant all working together

The Recorder app now lets you start, stop and search voice recordings using the new Google Assistant. Simply say “Hey Google, start recording my meeting,” or “Hey Google, show me recordings about dogs.” You can even save a transcript directly to Google Docs, making it easier to share with others. Learn more about using Recorder on your Pixel. 



Personal safety features

The Personal Safety app on Pixel 4 will now be available on all Pixel devices, and car crash detection is also coming to Pixel 3. (Car crash detection is not available in all languages or countries. Learn more about car crash detection’s availability in your language or country.) 


We’re introducing new safety features, too, like safety check, which schedules a check-in from the app at a later time. For example, if you’re about to go on a run or hike alone, safety check will make sure you’ve made it back safely. If you don’t respond to the scheduled check-in, the app will alert your emergency contacts. In the event that you need immediate help or are in a dangerous situation, emergency sharing notifies all of your emergency contacts and shares your real-time location through Google Maps so they can send help or find you.


And to be ultra-prepared, you can enable crisis alerts in the Personal Safety app to get notifications about natural disasters or other public emergencies. 



For more information on the new features that just dropped and to see when the update will land on your phone, head to the Pixel forum.


uDepth: Real-time 3D Depth Sensing on the Pixel 4



The ability to determine 3D information about the scene, called depth sensing, is a valuable tool for developers and users alike. Depth sensing is a very active area of computer vision research with recent innovations ranging from applications like portrait mode and AR to fundamental sensing innovations such as transparent object detection. Typical RGB-based stereo depth sensing techniques can be computationally expensive, suffer in regions with low texture, and fail completely in extreme low light conditions.

Because the Face Unlock feature on Pixel 4 must work at high speed and in darkness, it called for a different approach. To this end, the front of the Pixel 4 contains a real-time infrared (IR) active stereo depth sensor, called uDepth. A key computer vision capability on the Pixel 4, this technology helps the authentication system identify the user while also protecting against spoof attacks. It also supports a number of novel capabilities, such as after-the-fact photo retouching, depth-based segmentation of a scene, background blur, portrait effects and 3D photos.

Recently, we provided access to uDepth as an API on Camera2, using the Pixel Neural Core, two IR cameras, and an IR pattern projector to provide time-synchronized depth frames (in DEPTH16) at 30Hz. The Google Camera App uses this API to bring improved depth capabilities to selfies taken on the Pixel 4. In this post, we explain broadly how uDepth works, elaborate on the underlying algorithms, and discuss applications with example results for the Pixel 4.

Overview of Stereo Depth Sensing
All stereo camera systems reconstruct depth using parallax. To observe this effect, look at an object, close one eye, then switch which eye is closed. The apparent position of the object will shift, with closer objects appearing to move more. uDepth is part of the family of dense local stereo matching techniques, which estimate parallax computationally for each pixel. These techniques evaluate a region surrounding each pixel in the image formed by one camera, and try to find a similar region in the corresponding image from the second camera. When calibrated properly, the reconstructions generated are metric, meaning that they express real physical distances.
Pixel 4 front sensor setup, an example of an active stereo system.
To deal with textureless regions and cope with low-light conditions, we make use of an “active stereo” setup, which projects an IR pattern into the scene that is detected by stereo IR cameras. This approach makes low-texture regions easier to identify, improving results and reducing the computational requirements of the system.
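
As a toy illustration of dense local stereo matching (not uDepth's actual search strategy), the sketch below brute-forces a per-pixel disparity by comparing patches along a scanline of a rectified pair, then converts disparity to metric depth with the standard Z = f * B / d relation; the focal length and baseline arguments stand in for hypothetical calibration values:

```python
import numpy as np

def match_disparity(left, right, y, x, patch=4, max_disp=64):
    """Brute-force local stereo match for one pixel of a rectified pair.

    Compares a (2*patch+1)^2 window around (y, x) in the left image with
    horizontally shifted windows in the right image and returns the
    disparity (in pixels) with the lowest sum of absolute differences.
    """
    ref = left[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(min(max_disp, x - patch) + 1):
        cand = right[y - patch:y + patch + 1,
                     x - d - patch:x - d + patch + 1].astype(np.float32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Metric depth for a calibrated, rectified pair: Z = f * B / d.
    focal_px and baseline_m are hypothetical calibration values."""
    return np.inf if disparity_px == 0 else focal_px * baseline_m / disparity_px
```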

What Makes uDepth Distinct?
Stereo sensing systems can be extremely computationally intensive, and it’s critical that a sensor running at 30Hz is low power while remaining high quality. uDepth leverages a number of key insights to accomplish this.

One such insight is that given a pair of regions that are similar to each other, most corresponding subsets of those regions are also similar. For example, given two 8x8 patches of pixels that are similar, it is very likely that the top-left 4x4 sub-region of each member of the pair is also similar. This informs the uDepth pipeline’s initialization procedure, which builds a pyramid of depth proposals by comparison of non-overlapping tiles in each image and selecting those most similar. This process starts with 1x1 tiles, and accumulates support hierarchically until an initial low-resolution depth map is generated.
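
A rough sketch of that hierarchical idea follows, using a simple absolute-difference cost and 2x2 pooling purely for illustration: per-pixel (1x1 tile) costs are repeatedly aggregated over non-overlapping tiles, and the best-supported disparity at the coarsest level becomes the initial low-resolution proposal.

```python
import numpy as np

def tile_cost_volume(left, right, max_disp=32):
    """Per-pixel (1x1 tile) matching cost for each candidate disparity.
    Returns an array of shape (max_disp + 1, H, W)."""
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf, dtype=np.float32)
    for d in range(max_disp + 1):
        costs[d, :, d:] = np.abs(left[:, d:].astype(np.float32) -
                                 right[:, :w - d].astype(np.float32))
    return costs

def accumulate_support(costs, levels=3):
    """Pool costs over non-overlapping 2x2 tiles, repeatedly, so each
    coarser tile aggregates the evidence of the tiles beneath it."""
    for _ in range(levels):
        d, h, w = costs.shape
        h, w = h - h % 2, w - w % 2
        costs = costs[:, :h, :w].reshape(d, h // 2, 2, w // 2, 2).sum(axis=(2, 4))
    return costs

def initial_depth_proposals(left, right, max_disp=32, levels=3):
    """Low-resolution disparity map: the best-supported candidate per
    coarse tile, used only to seed later refinement."""
    costs = accumulate_support(tile_cost_volume(left, right, max_disp), levels)
    return np.argmin(costs, axis=0)
```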

After initialization, we apply a novel technique for neural depth refinement to support the regular grid pattern illuminator on the Pixel 4. Typical active stereo systems project a pseudo-random grid pattern to help disambiguate matches in the scene, but uDepth is capable of supporting repeating grid patterns as well. Repeating structure in such patterns produces regions that look similar across stereo pairs, which can lead to incorrect matches. We mitigate this issue using a lightweight (75k parameter) convolutional architecture, using IR brightness and neighbor information to adjust incorrect matches — in less than 1.5ms per frame.
Neural depth refinement architecture.
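
The figure above shows the real refinement architecture; purely to give a sense of scale for a parameter budget in the tens of thousands, here is a hypothetical fully convolutional refiner that takes IR brightness and the initial disparity as two input channels and predicts a per-tile correction:

```python
import tensorflow as tf

# Two input channels: IR brightness and the initial disparity estimate.
# Layer widths are arbitrary; this is NOT the uDepth refinement network.
inputs = tf.keras.Input(shape=(None, None, 2))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
# Single-channel output: a correction added to the initial disparity.
correction = tf.keras.layers.Conv2D(1, 3, padding="same")(x)
refiner = tf.keras.Model(inputs, correction)

refiner.summary()  # ~38k parameters for this toy network; a ~75k budget
                   # similarly bounds how wide and deep the real one can be
```
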
Following neural depth refinement, good depth estimates are iteratively propagated from neighboring tiles. This and following pipeline steps leverage another insight key to the success of uDepth — natural scenes are typically locally planar with only small nonplanar deviations. This permits us to find planar tiles that cover the scene, and only later refine individual depths for each pixel in a tile, greatly reducing computational load.

Finally, the best match from among neighboring plane hypotheses is selected, with subpixel refinement and invalidation if no good match could be found.
Simplified depth architecture. Green components run on the GPU, yellow on the CPU, and blue on the Pixel Neural Core.
When a phone experiences a severe drop, it can result in the factory calibration of the stereo cameras diverging from the actual position of the cameras. To ensure high-quality results during real-world use, the uDepth system is self-calibrating. A scoring routine evaluates every depth image for signs of miscalibration, and builds up confidence in the state of the device. If miscalibration is detected, calibration parameters are regenerated from the current scene. This follows a pipeline consisting of feature detection and correspondence, subpixel refinement (taking advantage of the dot profile), and bundle adjustment.
Left: Stereo depth with inaccurate calibration. Right: After autocalibration.
For more details, please refer to Slanted O(1) Stereo, upon which uDepth is based.

Depth for Computational Photography
The raw data from the uDepth sensor is designed to be accurate and metric, which is a fundamental requirement for Face Unlock. Computational photography applications such as portrait mode and 3D photos have very different needs. In these use cases, it is not critical to achieve video frame rates, but the depth should be smooth, edge-aligned and complete in the whole field-of-view of the color camera.
Left to right: raw depth sensing result, predicted depth, 3D photo. Notice the smooth rotation of the wall, demonstrating a continuous depth gradient rather than a single focal plane.
To achieve this we trained an end-to-end deep learning architecture that enhances the raw uDepth data, inferring a complete, dense 3D depth map. We use a combination of RGB images, people segmentation, and raw depth, with a dropout scheme forcing use of information for each of the inputs.
Architecture for computational photography depth enhancement.
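
The dropout scheme isn't detailed in the post; one plausible, illustrative form is training-time input dropout that randomly zeroes whole inputs before fusion, so the network learns to produce reasonable depth even when one signal is missing:

```python
import tensorflow as tf

def random_input_dropout(rgb, segmentation, raw_depth, drop_prob=0.3):
    """Training-time augmentation that randomly zeroes entire inputs
    before they are fused, so the enhancement network cannot lean on any
    single signal. The 0.3 rate and zeroing strategy are assumptions for
    illustration, not the published training recipe."""
    def maybe_drop(x):
        keep = tf.cast(tf.random.uniform([]) > drop_prob, x.dtype)
        return x * keep
    return tf.concat(
        [maybe_drop(rgb), maybe_drop(segmentation), maybe_drop(raw_depth)],
        axis=-1)  # fused tensor fed to the depth-enhancement network
```
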
To acquire ground truth, we leveraged a volumetric capture system that can produce near-photorealistic models of people using a geodesic sphere outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors. We added Pixel 4 phones to the setup and synchronized them with the rest of the hardware (lights and cameras). The generated training data consists of a combination of real images as well as synthetic renderings from the Pixel 4 camera viewpoint.
Data acquisition overview.
Putting It All Together
With all of these components in place, uDepth produces both a depth stream at 30Hz (exposed via Camera2), and smooth, post-processed depth maps for photography (exposed via Google Camera App when you take a depth-enabled selfie). The smooth, dense, per-pixel depth that our system produces is available on every Pixel 4 selfie with Social Media Depth features enabled, and can be used for post-capture effects such as bokeh and 3D photos for social media.
Example applications. Notice the multiple focal planes in the 3D photo on the right.
Finally, we are happy to provide a demo application for you to play with that visualizes a real-time point cloud from uDepth — download it here (this app is for demonstration and research purposes only and not intended for commercial use; Google will not provide any support or updates). This demo app visualizes 3D point clouds from your Pixel 4 device. Because the depth maps are time-synchronized and in the same coordinate system as the RGB images, a textured view of the 3D scene can be shown, as in the example visualization below:
Example single-frame, RGB point cloud from uDepth on the Pixel 4.
Acknowledgements
This work would not have been possible without the contributions of many, many people, including but not limited to Peter Barnum, Cheng Wang, Matthias Kramm, Jack Arendt, Scott Chung, Vaibhav Gupta, Clayton Kimber, Jeremy Swerdlow, Vladimir Tankovich, Christian Haene, Yinda Zhang, Sergio Orts Escolano, Sean Ryan Fanello, Anton Mikhailov, Philippe Bouchilloux, Mirko Schmidt, Ruofei Du, Karen Zhu, Charlie Wang, Jonathan Taylor, Katrina Passarella, Eric Meisner, Vitalii Dziuba, Ed Chang, Phil Davidson, Rohit Pandey, Pavel Podlipensky, David Kim, Jay Busch, Cynthia Socorro Herrera, Matt Whalen, Peter Lincoln, Geoff Harvey, Christoph Rhemann, Zhijie Deng, Daniel Finchelstein, Jing Pu, Chih-Chung Chang, Eddy Hsu, Tian-yi Lin, Sam Chang, Isaac Christensen, Donghui Han, Speth Chang, Zhijun He, Gabriel Nava, Jana Ehmann, Yichang Shih, Chia-Kai Liang, Isaac Reynolds, Dillon Sharlet, Steven Johnson, Zalman Stern, Jiawen Chen, Ricardo Martin Brualla, Supreeth Achar, Mike Mehlman, Brandon Barbello, Chris Breithaupt, Michael Rosenfield, Gopal Parupudi, Steve Goldberg, Tim Knight, Raj Singh, Shahram Izadi, as well as many other colleagues across Devices and Services, Google Research, Android and X. 

Source: Google AI Blog


Soli Radar-Based Perception and Interaction in Pixel 4



The Pixel 4 and Pixel 4 XL are optimized for ease of use, and a key feature helping to realize this goal is Motion Sense, which enables users to interact with their Pixel in numerous ways without touching the device. For example, with Motion Sense you can use specific gestures to change music tracks or instantly silence an incoming call. Motion Sense additionally detects when you're near your phone and when you reach for it, allowing your Pixel to be more helpful by anticipating your actions, such as by priming the camera to provide a seamless face unlock experience, politely lowering the volume of a ringing alarm as you reach to dismiss it, or turning off the display to save power when you’re no longer near the device.

The technology behind Motion Sense is Soli, the first integrated short-range radar sensor in a consumer smartphone, which facilitates close-proximity interaction with the phone without contact. Below, we discuss Soli’s core radar sensing principles, design of the signal processing and machine learning (ML) algorithms used to recognize human activity from radar data, and how we resolved some of the integration challenges to prepare Soli for use in consumer devices.

Designing the Soli Radar System for Motion Sense
The basic function of radar is to detect and measure properties of remote objects based on their interactions with radio waves. A classic radar system includes a transmitter that emits radio waves, which are then scattered, or redirected, by objects within their paths, with some portion of energy reflected back and intercepted by the radar receiver. Based on the received waveforms, the radar system can detect the presence of objects as well as estimate certain properties of these objects, such as distance and size.

Radar has been under active development as a detection and ranging technology for almost a century. Traditional radar approaches are designed for detecting large, rigid, distant objects, such as planes and cars; therefore, they lack the sensitivity and resolution for sensing complex motions within the requirements of a consumer handheld device. Thus, to enable Motion Sense, the Soli team developed a new, small-scale radar system, novel sensing paradigms, and algorithms from the ground up specifically for fine-grained perception of human interactions.

Classic radar designs rely on fine spatial resolution relative to target size in order to resolve different objects and distinguish their spatial structures. Such spatial resolution typically requires broad transmission bandwidth, narrow antenna beamwidth, and large antenna arrays. Soli, on the other hand, employs a fundamentally different sensing paradigm based on motion, rather than spatial structure. Because of this novel paradigm, we were able to fit Soli’s entire antenna array for Pixel 4 on a 5 mm x 6.5 mm x 0.873 mm chip package, allowing the radar to be integrated in the top of the phone. Remarkably, we developed algorithms that specifically do not require forming a well-defined image of a target’s spatial structure, in contrast to an optical imaging sensor, for example. Therefore, no distinguishable images of a person’s body or face are generated or used for Motion Sense presence or gesture detection.
Soli’s location in Pixel 4.
Soli relies on processing temporal changes in the received signal in order to detect and resolve subtle motions. The Soli radar transmits a 60 GHz frequency-modulated signal and receives a superposition of reflections off of nearby objects or people. A sub-millimeter-scale displacement in a target’s position from one transmission to the next induces a distinguishable timing shift in the received signal. Over a window of multiple transmissions, these shifts manifest as a Doppler frequency that is proportional to the object’s velocity. By resolving different Doppler frequencies, the Soli signal processing pipeline can distinguish objects moving with different motion patterns.
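
Soli's production pipeline is proprietary, but the range-velocity maps discussed below can be illustrated with textbook FMCW processing: one FFT within each chirp resolves range, and a second FFT across chirps resolves Doppler. A minimal NumPy sketch, assuming raw baseband samples arranged as chirps by samples:

```python
import numpy as np

def range_doppler_map(frames):
    """Textbook FMCW range-Doppler processing (not Soli's own pipeline).

    `frames` is a (num_chirps, samples_per_chirp) array of baseband
    samples from one receiver over one burst. An FFT along each chirp
    resolves range; an FFT across chirps resolves Doppler (velocity).
    """
    num_chirps, samples_per_chirp = frames.shape
    window = np.hanning(samples_per_chirp)                # reduce spectral leakage
    range_bins = np.fft.rfft(frames * window, axis=1)     # fast time -> range
    doppler = np.fft.fft(range_bins, axis=0)              # slow time -> velocity
    doppler = np.fft.fftshift(doppler, axes=0)            # zero velocity centred
    # Log-magnitude map: rows are velocity bins, columns are range bins,
    # so strong reflectors stand out against the noise floor.
    return 20 * np.log10(np.abs(doppler) + 1e-12)
```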

The animations below demonstrate how different actions exhibit distinctive motion features in the processed Soli signal. The vertical axis of each image represents range, or radial distance, from the sensor, increasing from top to bottom. The horizontal axis represents velocity toward or away from the sensor, with zero at the center, negative velocities corresponding to approaching targets on the left, and positive velocities corresponding to receding targets on the right. Energy received by the radar is mapped into these range-velocity dimensions and represented by the intensity of each pixel. Thus, strongly reflective targets tend to be brighter relative to the surrounding noise floor compared to weakly reflective targets. The distribution and trajectory of energy within these range-velocity mappings show clear differences for a person walking, reaching, and swiping over the device.

In the left image, we see reflections from multiple body parts appearing on the negative side of the velocity axis as the person approaches the device, then converging at zero velocity at the top of the image as the person stops close to the device. In the middle image depicting a reach, a hand starts from a stationary position 20 cm from the sensor, then accelerates with negative velocity toward the device, and finally decelerates to a stop as it reaches the device. The reflection corresponding to the hand moves from the middle to the top of the image, corresponding to the hand’s decreasing range from the sensor over the course of the gesture. Finally, the third image shows a hand swiping over the device, moving with negative velocity toward the sensor on the left half of the velocity axis, passing directly over the sensor where its radial velocity is zero, and then away from the sensor on the right half of the velocity axis, before reaching a stop on the opposite side of the device.

Left: Presence - Person walking towards the device. Middle: Reach - Person reaching towards the device. Right: Swipe - Person swiping over the device.
The 3D position of each resolvable reflection can also be estimated by processing the signal received at each of Soli’s three receivers; this positional information can be used in addition to range and velocity for target differentiation.

The signal processing pipeline we designed for Soli includes a combination of custom filters and coherent integration steps that boost signal-to-noise ratio, attenuate unwanted interference, and differentiate reflections off a person from noise and clutter. These signal processing features enable Soli to operate at low power within the constraints of a consumer smartphone.

Designing Machine Learning Algorithms for Radar
After using Soli’s signal processing pipeline to filter and boost the original radar signal, the resulting signal transformations are fed to Soli’s ML models for gesture classification. These models have been trained to accurately detect and recognize the Motion Sense gestures with low latency.

There are two major research challenges to robustly classifying in-air gestures that are common to any motion sensing technology. The first is that every user is unique and performs even simple motions, such as a swipe, in a myriad of ways. The second is that throughout the day, there may be numerous extraneous motions within the range of the sensor that may appear similar to target gestures. Furthermore, when the phone moves, the whole world looks like it’s moving from the point of view of the motion sensor in the phone.

Solving these challenges required designing custom ML algorithms optimized for low-latency detection of in-air gestures from radar signals. Soli’s ML models consist of neural networks trained using millions of gestures recorded from thousands of Google volunteers. These radar recordings were mixed with hundreds of hours of background radar recordings from other Google volunteers containing generic motions made near the device. Soli’s ML models were trained using TensorFlow and optimized to run directly on Pixel’s low-power digital signal processor (DSP). This allows us to run the models at low power, even when the main application processor is powered down.

Taking Soli from Concept to Product
Soli’s integration into the Pixel smartphone was possible because the end-to-end radar system — including hardware, software, and algorithms — was carefully designed to enable touchless interaction within the size and power constraints of consumer devices. Soli’s miniature hardware allowed the full radar system to fit into the limited space in Pixel’s upper bezel, which was a significant team accomplishment. Indeed, the first Soli prototype in 2014 was the size of a desktop computer. We combined hardware innovations with our novel temporal sensing paradigm described earlier in order to shrink the entire radar system down to a single 5.0 mm x 6.5 mm RFIC, including antennas on package. The Soli team also introduced several innovative hardware power management schemes and optimized Soli’s compute cycles, enabling Motion Sense to fit within the power budget of the smartphone.

Hardware innovations included iteratively shrinking the radar system from a desktop-sized prototype to a single 5.0 mm x 6.5 mm RFIC, including antennas on package.
For integration into Pixel, the radar system team collaborated closely with product design engineers to preserve Soli signal quality. The chip placement within the phone and the z-stack of materials above the chip were optimized to maximize signal transmission through the glass and minimize reflections and occlusions from surrounding components. The team also invented custom signal processing techniques to enable coexistence with surrounding phone components. For example, a novel filter was developed to reduce the impact of audio vibration on the radar signal, enabling gesture detection while music is playing. Such algorithmic innovations enabled Motion Sense features across a variety of common user scenarios.

Vibration due to audio on Pixel 4 appearing as an artifact in Soli’s range-doppler signal representation.
Future Directions
The successful integration of Soli into Pixel 4 and Pixel 4 XL devices demonstrates for the first time the feasibility of radar-based machine perception in an everyday mobile consumer device. Motion Sense in Pixel devices shows Soli’s potential to bring seamless context awareness and gesture recognition for explicit and implicit interaction. We are excited to continue researching and developing Soli to enable new radar-based sensing and perception capabilities.

Acknowledgments
The work described above was a collaborative effort between Google Advanced Technology and Projects (ATAP) and the Pixel and Android product teams. We particularly thank Patrick Amihood for major contributions to this blog post.

Source: Google AI Blog


Capture the night sky this Milky Way season

On a clear, dark night, you can see a faint glowing band of billions of distant stars across the sky: the Milky Way. Parts of it are visible on moonless nights throughout the year, but the brightest and most photogenic region, near the constellation Sagittarius, appears above the horizon from spring to early fall.


A few weeks ago, Sagittarius returned to the early morning sky, rising in the east just before dawn—and now is the perfect time to photograph it. Thanks to the astrophotography features in Night Sight on Pixel 4, 3a and 3, you can capture it with your phone. 


Before you head outside to catch the Milky Way, here are a few tips and tricks for taking breathtaking night time photos of your own.


Get out of town, and into the wilderness

The Milky Way isn’t very bright, so seeing and photographing it requires a dark night. Moonlight and light pollution from nearby cities tend to obscure it. 


To observe the brightest part of the Milky Way during spring, try to find a location with no large city to the east, and pick a night where the moon isn’t visible in the early morning hours. The nights between the new moon and about three days before the next full moon are ideal. For early 2020, this includes today (March 4) through March 6, March 24 through April 4 and April 22 through May 4. Look for the Milky Way in the hour before the first light of dawn. Of course you’ll want to check the weather forecast to make sure that the stars won’t be hidden by clouds.


Do your research

Tools that track the rise and set of the sun and moon can help you find the best time for your photo shoot. A light pollution map helps you find places to capture the best shot, and star charts can help you locate constellations and other interesting celestial objects (like the Andromeda Galaxy) so you know what you’re shooting.


Stay steady and settle in

Once you’ve found the perfect spot, open the Camera App on Pixel, switch to Night Sight, and either mount the phone on a tripod or support it on a rock or anything else that won’t move. Whatever you do, don’t try to steady the phone with your hands. Once stable, your phone will display “Astrophotography on” in the viewfinder to confirm it will use long exposures to capture your photo (up to four minutes total on Pixel 4, or one minute on Pixel 3 and 3a, depending on how dark the environment is).


Get comfortable with the camera features

To ensure you get great photos, explore all of the different options available in the Camera App. Say, for example, you are in the middle of taking a picture, and a car’s headlights are about to appear in the frame; you can tap the shutter button to stop the exposure and keep the lights from ruining your shot. You will get a photo even if you stop early, but letting the exposure run to completion will produce a clearer image.


The viewfinder in the Google Camera App works at full moon light levels, but in even darker environments the on-screen image may become too dim and grainy to be useful. Try this quick fix: Point the phone in what you think is the right direction, then tap the shutter button. Soon after the exposure begins, the viewfinder will show a much clearer image than before, and that image will be updated every few seconds. This allows you to check and correct which way the phone is pointing. Wait for the next update to see the effect of your corrections. Once you’re satisfied with the composition, tap the shutter button a second time to stop the exposure. Then tap the shutter button once more to start a new exposure, and let it run to completion without touching the phone.


The phone will try to focus automatically, but autofocus can fail in extremely dark scenes. For landscape shots you may just want to set focus to “far” by tapping the down arrow next to the viewfinder to access the focus options. This will make sure that anything further away than about 15 feet will be sharp.


Venus above the Pacific Ocean about one hour after sunset at Point Reyes National Seashore in California, captured on Pixel 4.


Use moonlight and twilight to your advantage 

Astrophotography mode isn’t only for taking pictures when it’s completely dark outside. It also takes impressive photos during nights with bright moonlight, or at dusk, when daylight is almost gone and the first stars have become visible.


No matter what time of day it is, spend a little extra time experimenting with the Google Camera App to find out what’s possible, and your photos will look great.



Aurora borealis near Kolari, Finland, on a night in February. Photo by Ingemar Eriksson, captured on Pixel 4.


New music controls, emoji and more features dropping for Pixel

A few months ago, Pixel owners got a few new, helpful features in our first feature drop. Beginning today, even more updates and new experiences will begin rolling out to Pixel users. 

Help when you need it

You can already use Motion Sense to skip forward or go back to a previous song. Now, if you have a Pixel 4, you can also pause and resume music with a tapping gesture above the phone. So you can easily pause music when you're having a conversation, without even picking up your phone.


When you need help the most, your Pixel will be there too. Last October we launched the Personal Safety app on Pixel 4 for US users, which uses the phone’s sensors to quickly detect if you’ve been in a severe car crash1, and checks with you to see if you need emergency services. If you need 911, you can request help via a voice command or with a single tap. Now, the feature is rolling out to Pixel users in Australia (000) and the UK (999). If you’re unresponsive, your Pixel will share relevant details, like location info, with emergency responders.



We’re also rolling out some helpful features to more Pixel devices. Now Live Caption, the technology that automatically captions media playing on your phone, will begin rolling out to Pixel 2 owners. 

More fun with photos and video 

New AR effects you can use live on your Duo video call with friends make chatting more visually stimulating. These effects change based on your facial expressions, and move with you around the screen. Duo calls now come with a whole new layer of fun. 


Selfies on Pixel 4 are getting better, too. Your front-facing camera can now create images with depth, which improves Portrait Blur and color pop, and lets you create 3D photos for Facebook.

Emoji on Pixel will now be more customizable and inclusive thanks to the Emoji 12.1 update, with 169 new emoji to represent a wider range of genders and skin tones, as well as more couple combinations to better reflect the world around us. 

New Inclusive Emoji 12.1 Update

A more powerful power button

Pixel is making it faster to pick the right card when using Google Pay. Just press and hold the power button to swipe through your debit and credit cards, event tickets, boarding passes or access anything else in Google Pay. This feature will be available to users in the US, UK, Canada, Australia, France, Germany, Spain, Italy, Ireland, Taiwan and Singapore. If you have Pixel 4, you can also quickly access emergency contacts and medical information. 


Getting on a flight is also getting easier. Simply take a screenshot of a boarding pass barcode and tap on the notification to add it to Google Pay. You will receive real-time flight updates, and on the day of your flight, you can just press the power button to pull up your boarding pass.  This feature will be rolling out gradually in all countries with Google Pay during March on Pixel 3, 3a and 4.

Customize your Pixel’s look and feel

A number of system-level advancements will give Pixel users more control over the look and feel of their devices.

You may know that Dark theme looks great and helps save battery power. Starting today, Dark theme gets even more helpful and flexible: you can now schedule it to switch between light and dark backgrounds based on local sunrise and sunset times. 


Have you forgotten to silence your phone when you get to work? Pixel gives you the ability to automatically enable certain rules based on WiFi network or physical location. You can now set up a rule to automatically silence your ringtone when you connect to your office WiFi, or go on Do Not Disturb when you walk in the front door of your house to focus on the people and things that matter most. 

Pixel 4 users are also getting some unique updates to the way they engage with the content on their phone. Improved long-press options in Pixel’s launcher will get you more and faster help from your apps. There’s also an update to Adaptive brightness, which now temporarily increases screen brightness to make reading content easier in extremely bright ambient lighting, like direct sunlight. Check out more options for customizing your screen. 

Here’s to better selfies, more emoji and a quick pause when you need it! Check out our support page for more information on the new features, and look out for more helpful features dropping for Pixel users soon. 

 1 Not available in all languages or countries. Car crash detection may not detect all accidents. High-impact activities may trigger calls to emergency services. This feature is dependent upon network connectivity and other factors and may not be reliable for emergency communications or available in all areas. For country and language availability and more information see g.co/pixel/carcrashdetection

Source: Android




Ask a Techspert: How does motion sensing work?

Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party. 

Thanks to my allergies, I’ve never had a cat. They’re cute and cuddly for about five minutes—until the sneezing and itching set in. Still, I’m familiar enough with cats (and cat GIFs) to know that they always have a paw in the air, whether it’s batting at a toy or trying to get your attention. Whatever it is they’re trying to do, it often looks like they’re waving at us. So imagine my concern when I found out that you can now change songs, snooze alarms or silence your phone ringing on your Pixel 4 with the simple act of waving. What if precocious cats everywhere started unintentionally making us sleep late by waving their paws?

Fortunately, that’s not a problem. Google’s motion sensing radar technology—a feature called Motion Sense in the Pixel 4—is designed so that only human hands, as opposed to cat paws, can change the tracks on your favorite playlist. So how does this motion sensing actually work, and how did Google engineers design it to identify specific motions? 

To answer my questions, I found our resident expert on motion sensors, Brandon Barbello. Brandon is a product manager on our hardware team and he helped me unlock the mystery behind the motion sensors on your phone, and how they only work for humans. 

When I’m waving my hand in front of my screen, how can my phone sense something is there? 

Brandon tells me that your Pixel phone has a chip at the top with a series of antennas, some of which emit a radio signal and others of which receive “bounce backs” of the same signal the other antenna emitted. “Those radio signals go out into the world, and then they hit things and bounce back. The receiver antennas read the signals as they bounce back and that’s how they’re able to sense something has happened. Your Pixel actually has four antennas: One that sends out signals, and three that receive.”

What happens after the antenna picks up the motion? 

According to Brandon, when the radio waves bounce back, the computer in your phone begins to process the information. “Essentially, the sensor picks up that you’re around, and that triggers your phone to keep an eye out for the relevant gestures,” he says.

How does the Pixel detect that a motion is a swipe and not something else? 

With the motion sensing functions on the Pixel, Brandon and his team use machine learning to determine what happened. “Those radio waves get analyzed and reduced into a series of numbers that can be fed into the machine learning models that detect if a reach or a swipe has just happened,” Brandon says. “We collected millions of motion samples to pre-train each phone to recognize intentional swipes. Specifically, we’ve trained the models to detect motions that look like they come from a human hand, and not, for instance, a coffee mug passing over the phone as you put it down on the table.”

What will motion sensors be capable of in the future? 

Brandon told me that he and his team plan to add more gestures to recognize beyond swiping, and that specific movements could be connected to more apps. “In the future, we want to create devices that can understand your body language, so they’re more intuitive to use and more helpful,” he tells me. 

At the moment, motion-sensing technology is focused on the practical, and there are still improvements to be made and new ground to cover, but he says this technology can also be delightful and fun, like on the Pixel’s gesture-controlled Pokémon Live Wallpaper. Overall, motion-sensing technology helps you use your devices in a whole new way, and that will keep changing as the tech advances. "We're just beginning to see the potential of motion sensing," Brandon says.