
10 ways Google Assistant can help you during the holidays

As fun as the holidays can be, they’re also filled with lots of to-do lists, preparation and planning. Before the hustle and bustle of the season begins, we wanted to share a few ways you can use Google Assistant to stay on top of things and do what matters most — spending quality time with family and friends.

1. Get together over a good meal made easy with hands-free help in the kitchen. Surprise your family and friends with a new dish or dessert or find inspiration by saying, “Hey Google, find me Thanksgiving recipes.”

2. …And if you happen to come across a few new favorites, tap on that recipe and ask your Assistant to save it for you by saying “Hey Google, add to my cookbook.” Then when it comes time for a holiday feast, all your recent recipes will be waiting for you on your Smart Display and will show up when you say “Hey Google, show me my cookbook.” Once you've gathered your ingredients, select the recipe you want to cook and say “Hey Google, start cooking” to get step-by-step instructions on your Smart Display.
Illustration of a Smart Display with a recipe on the screen. There is also a photo of a warm drink with whipped cream on the screen.

3. When the food is prepared and the table is set, let everyone know dinner is ready with Broadcast. Just say, “Hey Google, broadcast ‘dinner is ready.’”

4. How early is too early for festive music? The limit does not exist! And even if you don’t have something queued up, you can just say, “Hey Google, play Christmas music.”

5. Want to avoid scrolling endlessly for gifts? Android users can use Assistant to browse shopping apps like Walmart with just their voice. If you have the Walmart app installed on your Android phone, try saying, “Hey Google, search Walmart for bicycles.”

6. Avoid spending hours waiting on hold when you call to adjust travel plans or return a gift. Pixel users can take advantage of Hold For Me, where Google Assistant will wait on the line for you and let you know when a real person is ready to take your call.

7. Connect and feel close from anywhere with video calling. Make a group call with Duo supporting up to 32 people on your Nest Hub Max — or send a “happy holidays!” message using one of the fun AR effects on mobile devices. To start a Duo call, just say, “Hey Google, make a video call.”

8. Keep your family’s busy holiday schedule on track with Family Bell from Google. Say “Hey Google, set up a Family Bell” to be reminded with delightful sounds on your speakers or smart displays when it’s time to tackle important moments of your day, like holiday meals or volunteering at the local gift drive. And for routines that require a little extra work — like getting the kids to bed after a get together — create a Family Bell checklist on your Smart Display with get ready bells that remind them of key tasks to complete, like brushing their teeth and putting on pajamas.

9. Have some fun and create new memories with a hands-free family game night. Put your game face on and say, “Hey Google, let’s play a game.”

10. Spark some holiday magic with a story from Google. We’ve added a new interactive story from Grabbit, a twist on the classic fairytale “Hansel and Gretel.” Play the story from the perspective of either Hansel and Gretel or the Witch, and decide how the story unfolds. Just say “Hey Google, talk to Twisted Hansel and Gretel” and let the adventure begin! More interactive stories from Grabbit, like “Jungle Book,” “Alice in Wonderland” and “Sherlock Holmes,” will be available on Google Nest smart displays between now and the new year.

Improved On-Device ML on Pixel 6, with Neural Architecture Search

This fall Pixel 6 phones launched with Google Tensor, Google’s first mobile system-on-chip (SoC), bringing together various processing components (such as central/graphic/tensor processing units, image processors, etc.) onto a single chip, custom-built to deliver state-of-the-art innovations in machine learning (ML) to Pixel users. In fact, every aspect of Google Tensor was designed and optimized to run Google’s ML models, in alignment with our AI Principles. That starts with the custom-made TPU integrated in Google Tensor that allows us to fulfill our vision of what should be possible on a Pixel phone.

Today, we share the improvements in on-device machine learning made possible by designing the ML models for Google Tensor’s TPU. We use neural architecture search (NAS) to automate the process of designing ML models, incentivizing the search algorithms to discover models that achieve higher quality while meeting latency and power requirements. This automation also allows us to scale the development of models for various on-device tasks. We’re making these models publicly available through the TensorFlow model garden and TensorFlow Hub so that researchers and developers can bootstrap further use case development on Pixel 6. Moreover, we have applied the same techniques to build a highly energy-efficient face detection model that is foundational to many Pixel 6 camera features.

An illustration of NAS to find TPU-optimized models. Each column represents a stage in the neural network, with dots indicating different options, and each color representing a different type of building block. A path from inputs (e.g., an image) to outputs (e.g., per-pixel label predictions) through the matrix represents a candidate neural network. In each iteration of the search, a neural network is formed using the blocks chosen at every stage, and the search algorithm aims to find neural networks that jointly minimize TPU latency and/or energy and maximize accuracy.
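To make the loop concrete, here is a toy, random-search version of the setup described above, written in Python. The stage names, block options and the evaluate() stub are illustrative assumptions, and the reward follows the well-known MnasNet-style accuracy/latency tradeoff rather than the exact production objective or search algorithm.

```python
import random

# Toy search space: one block choice per stage (names are illustrative).
SEARCH_SPACE = {
    f"stage{i}": ["depthwise_ibn", "fused_ibn", "gc_ibn_g2", "gc_ibn_g4"]
    for i in range(1, 6)
}

def evaluate(candidate):
    """Stub standing in for training a candidate and profiling it on the TPU."""
    return random.uniform(0.70, 0.80), random.uniform(5.0, 20.0)  # accuracy, latency (ms)

def reward(accuracy, latency_ms, target_ms=10.0, beta=-0.07):
    # MnasNet-style soft constraint: scale accuracy by how far the
    # candidate lands from the latency target.
    return accuracy * (latency_ms / target_ms) ** beta

best_score, best_candidate = float("-inf"), None
for _ in range(100):
    candidate = {stage: random.choice(ops) for stage, ops in SEARCH_SPACE.items()}
    accuracy, latency_ms = evaluate(candidate)
    score = reward(accuracy, latency_ms)
    if score > best_score:
        best_score, best_candidate = score, candidate

print(best_score, best_candidate)
```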

Search Space Design for Vision Models
A key component of NAS is the design of the search space from which the candidate networks are sampled. We customize the search space to include neural network building blocks that run efficiently on the Google Tensor TPU.

One widely-used building block in neural networks for various on-device vision tasks is the Inverted Bottleneck (IBN). The IBN block has several variants, each with different tradeoffs, and is built using regular convolution and depthwise convolution layers. While IBNs with depthwise convolution have been conventionally used in mobile vision models due to their low computational complexity, fused-IBNs, wherein depthwise convolution is replaced by a regular convolution, have been shown to improve the accuracy and latency of image classification and object detection models on TPU.

However, fused-IBNs can have prohibitively high computational and memory requirements for neural network layer shapes that are typical in the later stages of vision models, limiting their use throughout the model and leaving the depthwise-IBN as the only alternative. To overcome this limitation, we introduce IBNs that use group convolutions to enhance the flexibility in model design. While regular convolution mixes information across all the features in the input, group convolution slices the features into smaller groups and performs regular convolution on features within each group, reducing the overall computational cost. The tradeoff of these group convolution–based IBNs (GC-IBNs) is that they may adversely impact model quality.

Inverted bottleneck (IBN) variants: (a) depthwise-IBN, a depthwise convolution layer with filter size KxK sandwiched between two convolution layers with filter size 1x1; (b) fused-IBN, in which the 1x1 convolution and KxK depthwise convolution are fused into a single convolution layer with filter size KxK; and (c) GC-IBN, which replaces the KxK regular convolution in fused-IBN with a group convolution. The number of groups (group count) is a tunable parameter during NAS.
Inclusion of GC-IBN as an option provides additional flexibility beyond other IBNs. The computational cost and latency of different IBN variants depend on the feature dimensions being processed (shown above for two example feature dimensions). We use NAS to determine the optimal choice of IBN variants.
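As a rough sketch of the three block types (with assumed expansion ratios, channel counts and activations; the production blocks and their placement come out of NAS), the variants can be expressed in Keras as follows. Note that the groups argument of Conv2D requires both the input and output channel counts to be divisible by the group count.

```python
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_ibn(x, expand_ratio=4, kernel_size=3):
    """(a) 1x1 expand -> KxK depthwise -> 1x1 project."""
    in_ch = x.shape[-1]
    h = layers.Conv2D(in_ch * expand_ratio, 1, activation="relu")(x)
    h = layers.DepthwiseConv2D(kernel_size, padding="same", activation="relu")(h)
    return layers.Conv2D(in_ch, 1)(h)

def fused_ibn(x, expand_ratio=4, kernel_size=3):
    """(b) the 1x1 expand and KxK depthwise fused into one KxK convolution."""
    in_ch = x.shape[-1]
    h = layers.Conv2D(in_ch * expand_ratio, kernel_size, padding="same",
                      activation="relu")(x)
    return layers.Conv2D(in_ch, 1)(h)

def gc_ibn(x, expand_ratio=4, kernel_size=3, groups=4):
    """(c) the KxK convolution becomes a group convolution; the group
    count is a tunable parameter during NAS."""
    in_ch = x.shape[-1]
    h = layers.Conv2D(in_ch * expand_ratio, kernel_size, padding="same",
                      groups=groups, activation="relu")(x)
    return layers.Conv2D(in_ch, 1)(h)

inputs = tf.keras.Input(shape=(56, 56, 32))
model = tf.keras.Model(inputs, gc_ibn(inputs))  # 32 in / 128 expanded channels, both divisible by 4
```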

Faster, More Accurate Image Classification
Which IBN variant to use at which stage of a deep neural network depends on the latency on the target hardware and the performance of the resulting neural network on the given task. We construct a search space that includes all of these different IBN variants and use NAS to discover neural networks for the image classification task that optimize the classification accuracy at a desired latency on TPU. The resulting MobileNetEdgeTPUV2 model family improves the accuracy at a given latency (or latency at a desired accuracy) compared to existing on-device models when run on the TPU. MobileNetEdgeTPUV2 also outperforms its predecessor, MobileNetEdgeTPU, the family of image classification models designed for the previous generation of the TPU.

Network architecture families visualized as connected dots at different latency targets. Compared with other mobile models, such as FBNet, MobileNetV3, and EfficientNets, MobileNetEdgeTPUV2 models achieve higher ImageNet top-1 accuracy at lower latency when running on Google Tensor’s TPU.

MobileNetEdgeTPUV2 models are built using blocks that also improve the latency/accuracy tradeoff on other compute elements in the Google Tensor SoC, such as the CPU. Unlike accelerators such as the TPU, CPUs show a stronger correlation between the number of multiply-and-accumulate operations in the neural network and latency. GC-IBNs tend to have fewer multiply-and-accumulate operations than fused-IBNs, which leads MobileNetEdgeTPUV2 to outperform other models even on Pixel 6 CPU.
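A back-of-the-envelope calculation shows why group convolution cuts multiply-and-accumulate (MAC) counts; the feature-map and channel sizes below are made up for illustration, not taken from the models.

```python
def conv_macs(h, w, c_out, c_in, k, groups=1):
    # MACs for a KxK convolution: every output pixel and output channel
    # sees c_in / groups input channels through a KxK window.
    return h * w * c_out * (c_in // groups) * k * k

h = w = 14           # feature-map size typical of a late stage (assumed)
c_in = c_out = 192   # made-up channel count

print(conv_macs(h, w, c_out, c_in, 3, groups=1))  # 65,028,096 (fused-IBN style)
print(conv_macs(h, w, c_out, c_in, 3, groups=4))  # 16,257,024 (GC-IBN, 4x fewer)
```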

MobileNetEdgeTPUV2 models achieve higher ImageNet top-1 accuracy at lower latency on the Pixel 6 CPU, outperforming other CPU-optimized model architectures, such as MobileNetV3.

Improving On-Device Semantic Segmentation
Many vision models consist of two components: a base feature extractor that understands general features of the image, and a head that handles the domain-specific task, such as semantic segmentation (the task of assigning labels, such as sky, car, etc., to each pixel in an image) or object detection (the task of detecting instances of objects, such as cats, doors, cars, etc., in an image). Image classification models are often used as feature extractors for these vision tasks. As shown below, the MobileNetEdgeTPUV2 classification model coupled with the DeepLabv3+ segmentation head improves the quality of on-device segmentation.

To further improve the segmentation model quality, we use the bidirectional feature pyramid network (BiFPN) as the segmentation head, which performs weighted fusion of different features extracted by the feature extractor. Using NAS we find the optimal configuration of blocks in both the feature extractor and the BiFPN head. The resulting models, named Autoseg-EdgeTPU, produce even higher-quality segmentation results, while also running faster.

The final layers of the segmentation model contribute significantly to the overall latency, mainly due to the operations involved in generating a high resolution segmentation map. To optimize the latency on TPU, we introduce an approximate method for generating the high resolution segmentation map that reduces the memory requirement and provides a nearly 1.5x speedup, without significantly impacting the segmentation quality.
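The fused TPU implementation is internal, but the memory argument can be sketched in a few lines: taking the argmax over classes at low resolution and then upsampling a single-channel label map touches far less data than bilinearly upsampling every class logit first. In this sketch the shapes are illustrative, and nearest-neighbor upsampling of the label map stands in for the actual fused bilinear-upsampling-plus-argmax kernel.

```python
import tensorflow as tf

logits = tf.random.normal([1, 32, 32, 31])  # [batch, h, w, num_classes]

# Exact but memory-hungry: upsample every class channel, then take argmax.
up = tf.image.resize(logits, [512, 512], method="bilinear")  # 512*512*31 floats
labels_exact = tf.argmax(up, axis=-1)

# Approximate: argmax at low resolution, then upsample one label map.
small = tf.argmax(logits, axis=-1, output_type=tf.int32)     # [1, 32, 32]
labels_approx = tf.image.resize(small[..., tf.newaxis], [512, 512],
                                method="nearest")[..., 0]
```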

Left: Comparing the performance, measured as mean intersection-over-union (mIOU), of different segmentation models on the ADE20K semantic segmentation dataset (top 31 classes). Right: Approximate feature upsampling (e.g., increasing resolution from 32x32 → 512x512). The argmax operation used to compute per-pixel labels is fused with the bilinear upsampling. Performing argmax on smaller-resolution features reduces memory requirements and improves latency on TPU without a significant impact on quality.

Higher-Quality, Low-Energy Object Detection
Classic object detection architectures allocate ~70% of the compute budget to the feature extractor and only ~30% to the detection head. For this task we incorporate the GC-IBN blocks into a search space we call the “Spaghetti Search Space,”¹ which provides the flexibility to move more of the compute budget to the head. This search space also uses the non-trivial connection patterns seen in recent NAS works, such as MnasFPN, to merge different but related stages of the network to strengthen understanding.

We compare the models produced by NAS to MobileDet-EdgeTPU, a class of mobile detection models customized for the previous generation of TPU. MobileDets have been demonstrated to achieve state-of-the-art detection quality on a variety of mobile accelerators: DSPs, GPUs, and the previous TPU. Compared with MobileDets, the new family of SpaghettiNet-EdgeTPU detection models achieves +2.2% mAP (absolute) on COCO at the same latency and consumes less than 70% of the energy used by MobileDet-EdgeTPU to achieve similar accuracy.

Comparing the performance of different object detection models on the COCO dataset with the mAP metric (higher is better). SpaghettiNet-EdgeTPU achieves higher detection quality at lower latency and energy consumption compared to previous mobile models, such as MobileDets and MobileNetV2 with Feature Pyramid Network (FPN).

Inclusive, Energy-Efficient Face Detection
Face detection is a foundational technology in cameras that enables a suite of additional features, such as fixing the focus, exposure and white balance, and even removing blur from the face with the new Face Unblur feature. Such features must be designed responsibly, and face detection in Pixel 6 was developed with our AI Principles top of mind.

Left: The original photo without improvements. Right: An unblurred face in a dynamic environment. This is the result of Face Unblur combined with a more accurate face detector running at a higher frames per second.

Since mobile cameras can be power-intensive, it was important for the face detection model to fit within a power budget. To optimize for energy efficiency, we used the Spaghetti Search Space with an algorithm to search for architectures that maximize accuracy at a given energy target. Compared with a heavily optimized baseline model, SpaghettiNet achieves the same accuracy at ~70% of the energy. The resulting face detection model, called FaceSSD, is more power-efficient and accurate. This improved model, combined with our auto-white balance and auto-exposure tuning improvements, is part of Real Tone on Pixel 6. These improvements help better reflect the beauty of all skin tones. Developers can utilize this model in their own apps through the Android Camera2 API.

Toward Datacenter-Quality Language Models on a Mobile Device
Deploying low-latency, high-quality language models on mobile devices benefits ML tasks like language understanding, speech recognition, and machine translation. MobileBERT, a derivative of BERT, is a natural language processing (NLP) model tuned for mobile CPUs.

However, due to the various architectural optimizations made to run these models efficiently on mobile CPUs, their quality is not as high as that of the large BERT models. Since MobileBERT on TPU runs significantly faster than on CPU, it presents an opportunity to improve the model architecture further and reduce the quality gap between MobileBERT and BERT. We extended the MobileBERT architecture and leveraged NAS to discover models that map well to the TPU. These new variants of MobileBERT, named MobileBERT-EdgeTPU, achieve up to 2x higher hardware utilization, allowing us to deploy larger and more accurate models on TPU at latencies comparable to the baseline MobileBERT.

MobileBERT-EdgeTPU models, when deployed on Google Tensor’s TPU, produce on-device quality comparable to the large BERT models typically deployed in data centers.

Performance on the question answering task (SQuAD v1.1). While the TPU in Pixel 6 provides a ~10x acceleration over CPU, further model customization for the TPU achieves on-device quality comparable to the large BERT models typically deployed in data centers.

Conclusion
In this post, we demonstrated how designing ML models for the target hardware expands the on-device ML capabilities of Pixel 6 and brings high-quality, ML-powered experiences to Pixel users. With NAS, we scaled the design of ML models to a variety of on-device tasks and built models that provide state-of-the-art quality on-device within the latency and power constraints of a mobile device. Researchers and ML developers can try out these models in their own use cases by accessing them through the TensorFlow model garden and TF Hub.
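For example, a released model can be pulled into a TensorFlow pipeline in a few lines. The hub handle below is a placeholder rather than a real path; browse tfhub.dev and the TensorFlow model garden for the published MobileNetEdgeTPUV2, Autoseg-EdgeTPU, SpaghettiNet-EdgeTPU and MobileBERT-EdgeTPU entries.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Placeholder handle: substitute a published path from tfhub.dev.
encoder = hub.KerasLayer("https://tfhub.dev/google/<model-path>/1")

image = tf.random.uniform([1, 224, 224, 3])  # dummy input at an assumed size
features = encoder(image)
```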

Acknowledgements
This work is made possible through a collaboration spanning several teams across Google. We’d like to acknowledge contributions from Rachit Agrawal, Berkin Akin, Andrey Ayupov, Aseem Bathla, Gabriel Bender, Po-Hsein Chu, Yicheng Fan, Max Gubin, Jaeyoun Kim, Quoc Le, Dongdong Li, Jing Li, Yun Long, Hanxiao Lu, Ravi Narayanaswami, Benjamin Panning, Anton Spiridonov, Anakin Tung, Zhuo Wang, Dong Hyuk Woo, Hao Xu, Jiayu Ye, Hongkun Yu, Ping Zhou, and Yanqi Zhuo. Finally, we’d like to thank Tom Small for creating illustrations for this blog post.



¹ The resulting architectures tend to look like spaghetti because of the connection patterns formed between blocks.

Source: Google AI Blog


Pixel art: How designers created the new Pixel 6 colors

During a recent visit to Google’s Color, Material and Finish (better known as CMF) studio, I watched while Jess Ng and Jenny Davis opened drawer after drawer and placed object after object on two white tables. A gold hoop earring, a pale pink shell — all pieces of inspiration that Google designers use to come up with new colors for devices, including the just-launched Pixel 6 and Pixel 6 Pro.

“We find inspiration everywhere,” Jenny says. “It’s not abnormal to have a designer come to the studio with a toothbrush or some random object they found on their walk or wherever.”

The CMF team designs how a Google device will physically look and feel. “Color, material and finish are a big part of what defines a product,” Jess, a CMF hardware designer, says. “It touches on the more emotional part of how we decide what to buy.” And Jenny, CMF Manager for devices and services, agrees. “We always joke around that in CMF, the F stands for ‘feelings,’ so we joke that we design feelings.”

The new Pixel 6 comes in Sorta Seafoam and Kinda Coral, while the Pixel 6 Pro comes in Sorta Sunny and Cloudy White, and both are available in Stormy Black. Behind those five shades are years of work, plenty of trial and error…and lots and lots of fine-tuning. “It’s actually a very complex process,” Jenny says.

Made more complex by COVID-19. Both Jenny and Jess describe the color selection process as highly collaborative and hands-on, which was difficult to accomplish while working from home. Designers aren’t just working with their own teams, but with those on the manufacturing and hardware side as well. “We don’t design color after the hardware design is done — we actually do it together,” Jenny says. The Pixel 6 and Pixel 6 Pro’s new premium look and feel influenced the direction of the new colors, and the CMF team needed to see colors and touch items in order to select and eliminate the shades.

They don’t only go hands-on with the devices, they do the same with sources of inspiration. “I remember one time I really wanted to share this color because I thought it would be really appropriate for one of our products, so I ended up sending my boss one of my sweaters through a courier delivery!” Jenny says. “We found creative workarounds.”

The team that designed the new Pixel 6 and Pixel 6 Pro case colors did as well. “The CMF team would make models and then take photos of the models and I would try to go in and look at them in person and physically match the case combinations against the different phone colors,” says Nasreen Shad, a Pixel Accessories product manager. “Then we’d render or photograph them and send them around to the team to review and see if what was and wasn’t working.” In addition to the challenge of working remotely, Nasreen’s team was also working on something entirely new: colorful, translucent cases.

Nasreen says they didn’t want to cover up the phones, but complement them instead, so they went with a translucent tinted plastic. Each device has a case that corresponds to its color family, but you can mix and match them for interesting new shades.

That process involved lots of experimenting. For example, what eventually became the Golden Glow case started out closer to a bronze color, which didn’t pair as well with the Stormy Black phone. “We had to tune it to a peachy shade, so that it looked good with its ‘intended pairing,’ Sorta Sunny, but with everything else, too. That meant ordering more resins and color chips in different tones, but it ended with some really beautiful effects.”

Beautiful effects, and tons of options. “I posted a picture of all of the possible combinations you can make with the phones and the cases and people kept asking me, ‘how many phones did Google just release!?’” Nasreen laughs. “And I had to be like, ‘No, no, no, these are just the cases!’”

A photograph showing the various Pixel 6 and Pixel 6 Pro phones in different colors in different colored cases, illustrating how many options there are.

Google designers often only know the devices and colors by temporary, internal code names. It's up to their colleagues to come up with the names you see on the Google Store site now. But one person who absolutely knows their official names is Lily Hackett, a Product Marketing Manager who works on a team that names device colors. “The way that we go about color naming is unique,” she says. “We like to play on the color. When you think about it, it’s actually very difficult to describe color, and the colors we often use are subtle — so we like to be specific with our approach to the name.”

Because color can be so subjective (one person’s white and gold dress is another’s black and blue dress), Lily’s team often checks in with CMF designers to make sure the words and names they’re gravitating toward actually describe the colors accurately. “It’s so nice to go to color experts and say, ‘Is this right? Is this a word you would use to describe this color?’”

Lily says their early brainstorming sessions can result in lists of 75 or more options. “It’s truly a testament to our copywriting team. When we were brainstorming for Stormy Black, they had everything under the sun — they had everything under the moon! It was incredible to see how many words they came up with.”

These days, everyone is looking ahead at new colors and new names, but the team is excited for the rest of the world to finally see their work. “I couldn’t wait for them to come out,” Lily says. “My favorite color was even the first to sell out on the Google Store! I was like, ‘Yes, everyone else loves it, too!’”

8 more things to love about the new Pixel phones

Last week we unveiled the new Pixel 6 and Pixel 6 Pro — and we unveiled a lot. Aside from the two new phones themselves, there was also Google Tensor, our custom system on a chip (SoC) that takes advantage of our machine learning research. Then there’s Magic Eraser, which will take unwanted people and objects out of your photos — plus Pixel Pass, a new way to buy, and a ton of new features packed into Android 12.


Amid all the new, you may have missed a thing or two. But don’t worry, we went ahead and collected everything you might have missed, and some extras, too.

1. One of the key differences between Pixel 6 and previous editions is the radical redesign of the hardware, encased in aluminum and glass.

2. Real Tone is a significant advancement, making the Pixel 6 camera more equitable, and that’s not all: It also improves Google Photos' auto enhance feature on Android and iOS with better face detection, auto white balance and auto exposure, so that it works well across skin tones.

3. Speech recognition has been updated to take advantage of Google Tensor so you can do more with voice. We’ve added automatic punctuation while dictating and support for voice commands like “send” and “clear” to send a message or edit it. With new emoji support, I can just say “pasta emoji” while dictating. (Which, I admit, is going to get a lot of use.)

4. We’ve partnered with Snap to bring exclusive Snapchat features to the Pixel. For example, you can set it up so when you tap the back of your Pixel 6 or Pixel 6 Pro twice, it will launch the Snapchat selfie camera.

5. When you're flipping through your photos on a Pixel 6 or Pixel 6 Pro, Google Photos can proactively suggest using Magic Eraser to remove photobombers in the background.

Animated GIF showing Magic Eraser being used to take people out of the background of a photo.

6. The camera bar is a major new hardware design feature in the Pixel 6, and part of the reason it’s there is to fit a much bigger sensor, which captures more light so photos look sharper — in fact, the new sensor lets in 150% more light than the Pixel 5’s. The Pixel 6 Pro’s telephoto camera also uses a prism inside the camera to bend the light so the lens can fit inside the camera bar.

7. The Pixel 6 comes in Kinda Coral, Sorta Seafoam and Stormy Black, and the Pixel 6 Pro comes in Cloudy White, Sorta Sunny and the same Stormy Black. These shades are stunning on their own, but you can customize them even more with the new translucent cases: Combine the Sorta Seafoam Pixel 6 with the Light Rain case for an icy new look.

8. New in Android 12 and exclusive to Pixel 6 and Pixel 6 Pro, Gboard now features Grammar Correction. Not only will it make communication easier, but it will also work entirely on-device to preserve privacy. You can learn more over on the Google AI blog.

Pixel 6: Setting a new standard for mobile security

With Pixel 6 and Pixel 6 Pro, we’re launching our most secure Pixel phone yet, with 5 years of security updates and the most layers of hardware security. These new Pixel smartphones take a layered security approach, with innovations spanning from the Google Tensor system on a chip (SoC) hardware to new Pixel-first features in the Android operating system, making them the first Pixel phones with Google security from the silicon all the way to the data center. Multiple dedicated security teams have also worked to ensure that Pixel’s security is provable through transparency and external validation.

Secure to the Core

Google has put user data protection and transparency at the forefront of hardware security with Google Tensor. Google Tensor’s main processors are Arm-based and utilize TrustZone™ technology. TrustZone is a key part of our security architecture for general secure processing, but the security improvements included in Google Tensor go beyond TrustZone.

Figure 1. Pixel Secure Environments

The Google Tensor security core is a custom designed security subsystem dedicated to the preservation of user privacy. It's distinct from the application processor, not only logically, but physically, and consists of a dedicated CPU, ROM, one-time-programmable (OTP) memory, crypto engine, internal SRAM, and protected DRAM. For Pixel 6 and 6 Pro, the security core’s primary use cases include protecting user data keys at runtime, hardening secure boot, and interfacing with Titan M2™.

Your secure hardware is only as good as your secure OS, and we are using Trusty, our open source trusted execution environment. Trusty OS is the secure OS used both in TrustZone and the Google Tensor security core.

With Pixel 6 and Pixel 6 Pro your security is enhanced by the new Titan M2™, our discrete security chip, fully designed and developed by Google. In this next generation chip, we moved to an in-house designed RISC-V processor, with extra speed and memory, and made it even more resilient to advanced attacks. Titan M2™ has been tested against the most rigorous standard for vulnerability assessment, AVA_VAN.5, by an independent, accredited evaluation lab. Titan M2™ supports Android Strongbox, which securely generates and stores keys used to protect your PINs and passwords, and works hand-in-hand with the Google Tensor security core to protect user data keys while in use in the SoC.

Moving a step higher in the system, Pixel 6 and Pixel 6 Pro ship with Android 12 and a slew of Pixel-first and Pixel-exclusive features.

Enhanced Controls

We aim to give users better ways to control their data and manage their devices with every release of Android. Starting with Android 12 on Pixel, you can use the new Security hub to manage all your security settings in one place. It helps protect your phone, apps, Google Account, and passwords by giving you a central view of your device’s current configuration. Security hub also provides recommendations to improve your security, helping you decide what settings best meet your needs.

For privacy, we are launching Privacy Dashboard, which will give you a simple and clear timeline view of the apps that have accessed your location, microphone and camera in the last 24 hours. If you notice apps that are accessing more data than you expected, the dashboard provides a path to controls to change those permissions on the fly.

To provide additional transparency, new indicators in Pixel’s status bar will show you when your camera and mic are being accessed by apps. If you want to disable that access, new privacy toggles give you the ability to turn off camera or microphone access across apps on your phone with a single tap, at any time.

The Pixel 6 and Pixel 6 Pro also include a toggle that lets you remove your device’s ability to connect to less-secure 2G networks. While necessary in certain situations, accessing 2G networks can open up additional attack vectors; this toggle helps users mitigate those risks when 2G connectivity isn’t needed.

Built-in security

By making all of our products secure by default, Google keeps more people safe online than anyone else in the world. With the Pixel 6 and Pixel 6 Pro, we’re also ratcheting up the dial on default, built-in protections.

Our new optical under-display fingerprint sensor ensures that your biometric information is secure and never leaves your device. As part of our ongoing security development lifecycle, Pixel 6 and 6 Pro’s fingerprint unlock has been externally validated by security experts as a strong and secure biometric unlock mechanism meeting the Class 3 strength requirements defined in the Android 12 Compatibility Definition Document (CDD).

Phishing continues to be a huge attack vector, affecting everyone across different devices.

The Pixel 6 and Pixel 6 Pro introduce new anti-phishing protections. Built-in protections automatically scan for potential threats from phone calls, text messages, emails, and links sent through apps, notifying you if there’s a potential problem.

Users are also now better protected against bad apps by enhancements to our on-device detection capabilities within Google Play Protect. Since its launch in 2017, Google Play Protect has provided the ability to detect malicious applications even when the device is offline. The Pixel 6 and Pixel 6 Pro use new machine learning models that improve the detection of malware in Google Play Protect. The detection runs on your Pixel, and uses a privacy preserving technology called federated analytics to discover commonly-run bad apps. This will help to further protect over 3 billion users by improving Google Play Protect, which already analyzes over 100 billion apps every day to detect threats.

Many of Pixel’s privacy-preserving features run inside Private Compute Core, an open source sandbox isolated from the rest of the operating system and apps. Our open source Private Compute Services manages network communication for these features, and uses federated learning, federated analytics, and private information retrieval to improve features while preserving privacy. Some features already running on Private Compute Core include Live Caption, Now Playing, and Smart Reply suggestions.

Google Binary Transparency (GBT) is the newest addition to our open and verifiable security infrastructure, providing a new layer of software integrity for your device. Building on the principles pioneered by Certificate Transparency, GBT helps ensure your Pixel is only running verified OS software. It works by using append-only logs to store signed hashes of the system images. The logs are public and can be used to verify that what’s published is the same as what’s on the device – giving users and researchers the ability to independently verify OS integrity for the first time.
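Conceptually, a verifier recomputes the hash of the system image and checks that it appears among the logged entries. The sketch below shows only that hash comparison; the log fetch is a stub, and a real verifier would also check signatures and the log’s append-only proofs.

```python
import hashlib

def image_digest(path: str) -> str:
    """SHA-256 of a system image, streamed so memory stays bounded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_published_digests() -> set:
    """Stub for fetching and parsing entries from the public log."""
    return {"<digest-from-log>"}

if image_digest("system.img") in load_published_digests():
    print("image matches a logged release")
else:
    print("image not found in the transparency log")
```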

Beyond the Phone

Defense-in-depth isn’t just a matter of hardware and software layers. Security is a rigorous process. Pixel 6 and Pixel 6 Pro benefit from in-depth design and architecture reviews, memory-safe rewrites to security critical code, static analysis, formal verification of source code, fuzzing of critical components, and red-teaming, including with external security labs to pen-test our devices. Pixel is also part of the Android Vulnerability Rewards Program, which paid out $1.75 million last year, creating a valuable feedback loop between us and the security research community and, most importantly, helping us keep our users safe.

Capping off this combined hardware and software security system, is the Titan Backup Architecture, which gives your Pixel a secure foot in the cloud. Launched in 2018, the combination of Android’s Backup Service and Google Cloud’s Titan Technology means that backed-up application data can only be decrypted by a randomly generated key that isn't known to anyone besides the client, including Google. This end-to-end service was independently audited by a third party security lab to ensure no one can access a user's backed-up application data without specifically knowing their passcode.

To top it all off, this end-to-end security from the hardware across the software to the data center comes with no fewer than 5 years of guaranteed Android security updates on Pixel 6 and Pixel 6 Pro devices from the date they launch in the US. This is an important commitment for the industry, and we hope other smartphone manufacturers follow this trend.

Together, our secure chipset, software and processes make Pixel 6 and Pixel 6 Pro the most secure Pixel phone yet.

Pixel 6’s camera combines hardware, software and ML

Last week, we announced Pixel 6 and Pixel 6 Pro, and we spent some time introducing the new Pixel Camera, which gets a big boost from Google Tensor, Google’s first System on a Chip (SoC) designed specifically for Pixel. But there’s so much more to talk about — so we wanted to take some time to show you how the new camera uses the latest technology from the Pixel hardware and research teams as well as our Pixel software team.

From HDR+ to Night Sight, Pixel has a history of building state-of-the-art cameras using computational photography, and Pixel 6 and Pixel 6 Pro are no exception. Google Tensor allows us to combine new camera hardware with thoughtful software, as well as advancements in machine learning (ML).


Pixel 6 and Pixel 6 Pro camera bar.

Videos that look as good as photos

For the first time ever, Pixel Camera has Live HDR+ enabled in all video modes – even 4K60 – and in popular social and chat apps, too. With Pixel 6 and Pixel 6 Pro, we drastically accelerated Live HDR+ by designing it directly into Google Tensor’s Image Signal Processor (ISP). Google Tensor enabled many other improvements to video, too, including real-time tone mapping of people and upgrades to stabilization.

Frames of 4K60 Live HDR+ video at a beach during sunset and a field in the afternoon.

Google Tensor enables Live HDR+ in video.

Night Sight is better than ever

Pixel 6 and Pixel 6 Pro have a larger main rear camera that can capture 2.5x as much light as Pixel 5, so your Night Sight photos will be sharper and more detailed than ever. The larger camera works with a new laser detect auto focus system and Google Tensor’s ISP, which use new motion detection algorithms to capture shorter exposures that aren’t as blurry.

Photo of a hiker sitting in a dark ice cave and photo of a woman leaning on a pink and white car at night.

The new main sensor captures 2.5x as much light as ever before.

A more equitable camera

With Real Tone, Pixel 6 and Pixel 6 Pro are designed for people of every skin tone, so that your portraits look authentic and true to life. Google Tensor enables an advanced ML-based face detection model that more accurately auto-exposes photos of people of color. We also updated our auto-white balance algorithm to detect and correct inaccurate skin tones, and, if you enable Frequent Faces, Pixel Camera will also learn how to better auto-white balance the people you photograph most frequently.

Photo of a person looking toward the camera in a green shirt. Photo of a person looking toward the camera against an orange background.

Real Tone helps everyone feel seen, no matter their skin tone.

Fewer blurry photos of people

With Face Unblur, more of your photos of people will come out crisp and sharp. If Google Tensor detects someone is moving quickly, Pixel 6 and Pixel 6 Pro simultaneously take a darker but sharper photo on the Ultra Wide camera and a brighter but blurrier photo on the main camera. Google Tensor then uses machine learning to automatically combine the two photos, giving you a well-exposed photo with a sharp face.

Photo of a young girl playing under a colorful awning.

The upgraded Ultra Wide camera is used to take sharp photos of moving people, even when you don’t zoom out.

Easy-to-use creative effects

Sometimes, you want a bit of blur – with Motion Mode (now in Beta on Pixel 6 and Pixel 6 Pro), you can easily capture high-quality Action Pans and Long Exposures. Instead of relying on a steady hand or a tripod, you can just press the Pixel Camera shutter button and rely on Google Tensor to handle motion vector calculations, frame interpolation, subject segmentation, hand-shake rejection, and blur rendering.

Photo of a city street with streaks of light from taillights and a NYC cab zipping through Manhattan.

Action Pan and Long Exposure features in Motion Mode create artful blur of movement.

Zooming in…

Pixel 6 Pro’s new telephoto camera uses an updated version of Super Res Zoom with HDR+ Bracketing so zoomed photos look sharp, not grainy. The sensor used in the 4x optical camera is larger than Pixel 5’s main sensor and it gathers more light, so fitting it in Pixel 6 Pro is no easy feat. It’s hidden inside the camera bar and uses a prism that bends the light so the lens can be oriented sideways.

Schematic of Pixel 6 Pro telephoto camera.
Photo of person relaxing on a bench, taken from top of building.

Pixel 6 Pro has the best zoom we’ve ever put in a Pixel phone.

…and zooming out

Speaking of zoom, sometimes selfies feel too zoomed in, making it tricky to get a picture with all your friends or family. We built a new Ultra Wide selfie camera for Pixel 6 Pro that lets you ditch the selfie stick. When taking a selfie, you can zoom out by pressing the .7x button and capture photos of yourself or a group without extending your arm or fumbling around. And if you want to vlog on YouTube, Pixel 6 Pro’s selfie camera now supports 4K video recording and Speech enhancement mode, too! Pixel 6 uses Tensor’s TPU to simultaneously process audio and visual cues to isolate speech, reducing background noise by up to 80% in noisy environments.

A group of friends posing for a selfie.

Pixel 6 Pro lets you zoom out to .7x on selfies so you can capture the entire crew.

Easier edits with Magic Eraser

Magic Eraser in Google Photos can help remove photobombers from your photos with the tap of a button. Using novel algorithms for confidence, segmentation, and inpainting, Magic Eraser can easily remove suggested distractions from your photos or let you select things you want to remove. Better yet, these machine learning models run on-device using Google Tensor, so you can even use Magic Eraser when you’re away from connectivity or when your photos haven’t backed up yet.

Photo of a person sitting in the grass with two photobombers in the background and the same photo with the photobombers removed using Magic Eraser.

Magic Eraser removes distracting people in the background.

This is Pixel Camera's biggest hardware update ever, from the new Google Tensor chip to the debut of the camera bar. It's the best Pixel Camera we've built, and we can't wait to see what you do with it. Share your photos and videos on #TeamPixel through Snapchat, Instagram and Twitter.

More personal, more powerful: Meet Pixel 6 and Pixel 6 Pro

The wait is over: Pixel 6 and Pixel 6 Pro, the completely redesigned Google phones, are here. Powered by Google Tensor, Google’s first-ever processor, and shipping with Android 12, both phones are fast, smart, secure and designed to adapt to you.

Pixel 6 is an outstanding all-around phone and it starts at only $599. If you want all the advanced capabilities and upgraded finishes, Pixel 6 Pro is the right phone for you, starting at $899.

Powering the new Pixel lineup is Google Tensor, a mobile system on a chip designed specifically around Google’s industry-leading AI. Google Tensor enables entirely new capabilities for your smartphone, and makes Pixel 6 and Pixel 6 Pro more helpful and more personal.

Distinct design

Pixel has a bold new design this year with a cohesive look across the software on the inside and the hardware on the outside. The first thing you’ll notice is the Camera Bar, giving the phone a clean, symmetrical design that puts the camera front-and-center.

Image of Pixel 6 and Pixel 6 Pro phones lying on a gray surface. The phones are different colors and showing the camera bar. They are all lying screen-side down.

Pixel 6 has a distinctive graphic and vibrant look. The matte black metal band complements the expressive, versatile color options. Pixel 6 Pro was inspired by the finishes you see in luxury jewelry and watches. It’s made with a polished metal unibody and transitions into gorgeous curved glass in colors that complement the metallic frames.

Speaking of color, Android 12 brings a full redesign to the OS, with Material You.

Android 12 on Pixel 6

Android 12 builds on the best features of Android so your phone can really be your phone: It can adapt to you, it’s secure by default and private by design. And Android 12 looks especially stunning on Pixel 6.

Animated GIF showing the At A Glance feature on a new Pixel 6 phone.

When you choose your wallpaper, your entire UI will update to reflect that choice. Everything will feel more responsive and smoother. At a Glance, which shows up on the home and lock screen, has a fresh new look and some new capabilities. Here, you’ll find what you need, right when you need it — like your boarding pass the day of your flight or stats from your current workout.

And Pixel 6 is again the highest rated phone for security. It includes the next generation Titan M2™, which works with the Tensor security core to protect your sensitive user data, PINs and passwords. We’ve also extended our support window to at least five years of security updates, so your phone has the most up-to-date protection.

New Pixel, new camera

Pixel 6 and Pixel 6 Pro have the most advanced cameras we’ve ever built. The entire camera experience is improved from the hardware to Pixel’s revolutionary computational photography.

Both Pixel 6 and Pixel 6 Pro have a new 1/1.3 inch sensor on the back. This primary sensor now captures up to 150% more light (compared to Pixel 5’s primary camera), meaning you’re going to get photos and videos with even greater detail and richer color. Both phones also have completely new ultrawide lenses with larger sensors, so photos look great when you want to fit more in your shot.

Pixel 6 Pro also has an amazing telephoto lens with 4x optical zoom and up to 20x zoom with an improved version of Pixel’s Super Res Zoom. There’s also an upgraded ultrawide front camera that records 4K video. You can make use of that wider front camera in Snapchat’s new ultrawide selfie feature. Plus, for instant Snapchat access, the new Quick Tap to Snap feature is coming exclusively to Pixel 6 and Pixel 6 Pro later this year.

Magic Eraser makes distractions in your photos disappear, just like that. With a few taps in Google Photos, remove strangers and unwanted objects.

Animated GIF of a photo of a couple talking. The Magic Eraser feature takes out various people in the background.

Motion Mode features options like Action Pan and Long Exposure, which bring movement to your shots. You can use Action Pan to take photos of your kids riding their scooter or landing crazy skateboarding tricks against a stylish blurred background. Or create beautiful long exposure shots where your subject is moving, like waterfalls or vibrant city scenes.

Another significant advancement in photography across Pixel and Google Photos is Real Tone. Going back decades, cameras have been designed to photograph light skin — a bias that’s crept into many of our modern digital imaging products and algorithms. Our teams have been working directly with photographers, cinematographers and colorists who are celebrated for their beautiful and accurate imagery of communities of color. We asked them to test our cameras and editing tools and provide honest feedback, which helped make our camera and auto enhancement features more equitable.

Smarts and speech

Pixel 6 and Pixel 6 Pro also have improved speech recognition and language understanding models, so they can make everyday tasks easier. For instance, you can now use your voice to quickly type, edit, and send messages with Assistant voice typing in Messages, Gmail and more. Let Google Assistant help with adding punctuation, making corrections, inserting emojis and sending your messages.

You might find yourself occasionally trying to decide if you have time to call a business now, or if you should call later to avoid waiting on hold. Now, Wait Times and Direct My Call, available in the U.S. and in English, make that decision easier: Before you even place your call to a toll-free business number, you’ll see the current and expected hour-by-hour Wait Times for the rest of the week.

And when you call the business, Direct My Call helps you get to the right place. Powered by Duplex technology, Google Assistant transcribes the automated message and menu options for you in real-time and displays them on your screen for you to see and tap. For more information about these advancements in calling assistance, please see our blog post.

Animated GIF showing how Wait Times and Direct My Call work.

Finally, Live Translate enables you to message with people in different languages, including English, French, German, Italian and Japanese. It works by detecting whether a message in your chat apps, like WhatsApp or Snapchat, is different from your language, and if so, automatically offers you a translation. All of this detection and processing happens entirely on-device within Private Compute Core, so no data ever leaves the device, and it works even without network connectivity. With support for Interpreter mode, you’ll also be able to take turns translating what is said in up to 48 languages. Activate Assistant and say “Be my interpreter.”

Animated GIF showing a Pixel 6 phone using interpreter mode.

One more thing: When you get an incoming call, just say “accept” or “decline” without having to use “Hey Google” every time by enabling Quick phrases. You can also “stop” and “snooze” alarms and timers.

Get your hands on the new Pixel

Pre-order Pixel 6, starting at $599, or Pixel 6 Pro, starting at $899, today. The phones will be available on store shelves with all major U.S. carriers starting on October 28. We’re also launching a new collection of specially designed cases for Pixel 6, so you can protect your phone in style.

We’re also introducing Pixel Pass, an easy subscription that delivers the best of Google. Starting at $45 per month for U.S. customers, Pixel Pass gives you a brand new Pixel 6 along with Google One, YouTube Premium and YouTube Music Premium, Google Play Pass and Preferred Care. Pixel Pass with Pixel 6 Pro starts at only $55 per month. After two years, you’ll have the option to upgrade to a new Pixel.

However you buy it, and whichever Pixel 6 you pick, we know you are going to love your new phone.

Say hello to better phone calls

Our smartphones can do amazing things: They can capture great photos, and they act as our alarm clock, our camera, our stereo, our library, our game console and more, all in one. But making phone calls, the original “feature” of our devices, has mostly remained the same for decades. When we call businesses to get something done, we’re often met with long, automated systems and endless elevator music. And as we go about our days, we’re often distracted by calls from unknown numbers, spammers and scammers. That’s why we are always seeking improvements with phone calls, and so today we’re excited to announce our latest advancements in calling assistance to make them better.

A better way to call businesses

Starting today on Pixel 6 and Pixel 6 Pro devices in the U.S., our latest Phone app features, Wait Times and Direct My Call, make calling businesses easier. Before you even place your call to a toll-free business number, you’ll see the current and projected Wait Times for the rest of the week. That can help you decide whether you have time to call now, or plan when to call later to avoid long waits. Wait Times are inferred from call length data that is not linked to user identifiers.


Once you ring the business, Direct My Call helps you get to the right place with less hassle. Google Assistant transcribes the automated message and menu options for you in real time and displays them on your screen for you to see and tap, so you don’t need to remember all the options. Direct My Call is powered by Google’s Duplex technology, which uses advanced speech recognition and language understanding models to determine when the business wants you to do something – like select a number (“Press 1 for hours and locations”), say a word (“Say ‘representative’ to speak with one of our agents”) or input your account number.

When calling a business, Direct My Call displays the menu options on your screen for you to tap.

Direct My Call builds on previous features we've released that make calling businesses easier. Last year, we launched Hold For Me to help reduce the number of minutes you spend on hold. It already saves Pixel users in the United States over 1.5 million minutes each month, and it’s expanding to Pixel users in Australia, Canada and Japan in the coming months. Assistant is able to recognize when hold music is being played and understands the difference between a recorded message (like “Hello, thank you for waiting”) and a representative on the line thanks to Duplex technology, so that you can go back to your day and get notified when someone is ready to talk.

Press “Hold for me” and let Google Assistant wait on hold, then notify you when someone is ready to talk.

Know who’s calling you

Receiving calls from unknown numbers is a drag, and a majority of Americans choose not to answer them. They also report missing important calls they assume are spam. That’s why starting today, we’re improving Google’s extensive caller ID coverage of businesses with help from our users. You can now share information about unknown businesses that you call or answer (such as the type of business) and over time that information will be displayed on incoming calls to help others know more about who’s calling them. This information is not joined with any user identifiers. We expect this to double the number of businesses that have caller ID information – so you can answer more calls with confidence.

Caller ID identifies a type of business, in addition to the phone number.

If you do get a call from an unknown number, not to worry – Call Screen helps you find out who they are and why they’re calling before you pick up. Call Screen helps users in the U.S., Canada and Japan screen 37 million calls each month, and today we’re expanding manual Call Screen to Pixel users in the U.K., France, Germany, Australia, Ireland, Italy and Spain. Our latest on-device speech models make the transcriptions more accurate than ever on Pixel 6 and Pixel 6 Pro thanks to Pixel’s new Google Tensor.

Google Assistant answers calls and transcribes the conversation.

Keeping your data safe

All audio transcriptions are processed on your device, which makes the experiences fast and also protects your privacy. No audio from the call will be shared with Google unless you explicitly decide to share it to help improve features. After the experience is over, like when you return to a call after Google Assistant was on hold for you or after Google Assistant screened a call, audio stops being processed altogether.

It’s time to rethink phone calls, and our latest calling assistance features are designed to save you time and make it easier than ever to connect with the right contact at the right time.

Image equity: Making image tools more fair for everyone

Pictures are a big part of how we see each other and the world around us, and historically racial bias in camera technology has overlooked and excluded people of color. That same bias can carry through in our modern imaging tools if they aren’t tested with a diverse group of people and inputs, delivering unfair experiences for people of color, like over-brightening or unnaturally desaturating skin. We acknowledge that Google has struggled in this area in the past, and are committed to continuing to improve our products accordingly. As part of Google’s Product Inclusion and Equity efforts, our teams are on a mission to build camera and imaging products that work equitably for all people, so that everyone feels seen, no matter their skin tone.

Pixel 6: A more equitable camera

Building better tools for a community works best when they’re built with the community. For the new Pixel 6 Camera, we partnered with a diverse range of renowned image makers who are celebrated for their beautiful and accurate depictions of communities of color—including Kira Kelly, Deun Ivory, Adrienne Raquel, Kristian Mercado, Zuly Garcia, Shayan Asgharnia, Natacha Ikoli and more—to help our teams understand where we needed to do better. With their help, we've significantly increased the number of portraits of people of color in the image datasets that train our camera models. Their feedback helped us make the key improvements across our face detection, camera and editing products that we call Real Tone.

Let’s take a deeper look at how we approached these improvements:

  • In computational photography, making a great portrait depends on the camera’s ability to detect a face. We radically diversified the images that train our face detector to “see” more diverse faces in a wider array of lighting conditions.
  • Auto-white balance models help determine color in a picture. Our partners helped us make better decisions about how to render the nuances of skin for people of color.
  • Auto-exposure models help determine the brightness of an image. Feedback from our experts helped us ensure that our camera shows you as you are — not unnaturally darker or brighter.
  • Our teams noticed that stray light had a tendency to disproportionately wash out darker skin tones, so we developed and implemented an algorithm to reduce its effect in our images.
  • Blurriness in portraits is a consistent concern for people with darker skin tones, so our teams used the Tensor chip’s processing power to make our portraits sharper through motion metering, even in low light conditions.

It was important for us to be sure that our adjustments resonated with our collaborators as well, and we’re proud that they rated Pixel 6’s rendering of skin tone, brightness, depth and detail as best for people of color in a device-agnostic survey comparing top smartphone cameras.

Google Photos: More nuanced auto enhancements

Our partners’ expertise also helped our teams improve Google Photos’ popular auto enhance feature, so you can achieve a beautiful, representative photo regardless of when you took the photo, or which device you used. The updated auto enhance is designed to improve your picture’s color and lighting with just a tap, and works well across skin tones. It will roll out in Google Photos across Android and iOS devices in the coming weeks.

A mission, not a moment

We’re committed to building a more equitable experience across all of our camera and image products. To improve the visibility of meeting participants, we recently launched automatic lighting adjustments in Google Meet, and tested it across a range of skin tones to ensure it works well for everyone. And our Research teams are identifying more inclusive ways to handle skin tone in AI systems, both in Google products and across the industry. We’ll continue to partner with experts, listen to feedback and invest in tools and experiences that work for everyone. Because everyone deserves to be seen as they are.

Learn more about our efforts on Real Tone at http://g.co/pixel/realtone.

Photobombs begone with Magic Eraser in Google Photos

Sometimes things get in the way of the perfect photo — like an accidental photobomb or power lines you didn’t notice. They can distract from the photo, pulling attention from what you were really trying to capture. Removing distractions from photos isn’t an impossible task, but it typically requires sophisticated editing tools, know-how and time.

That’s why we’re launching Magic Eraser on Pixel 6 to help you remove those distractions in just a few taps right in Google Photos. And you’re not limited to newly captured photos — you can clean up all your photos, even those taken years ago or on non-Pixel phones.

Magic Eraser can detect distractions in your photos, like people in the background, power lines and power poles, and suggest what you might want to remove. Then, you can choose whether to erase them all at once or tap to remove them one by one.

Gif showing Magic Eraser being used in Google Photos on Pixel 6 on a photo of a child on the beach with people in the background. Magic Eraser suggests to "remove people in the background," then removes them, resulting in an image with just the child on the beach.

You can also circle or brush over what you want to remove. Using machine learning, Magic Eraser can figure out what you’re trying to remove based on what you circle, so you don’t have to spend time worrying about precise brushing. Then, once you decide what you want to erase, Magic Eraser uses machine learning again to predict what the pixels would look like if the distraction weren't there.
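Magic Eraser's on-device segmentation and inpainting models are proprietary, but the classical primitive they improve upon can be sketched with OpenCV's inpainting, which fills a user-marked mask from surrounding pixels (the file names below are placeholders):

```python
import cv2

photo = cv2.imread("photo.jpg")                      # placeholder input photo
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = region to erase

# Fill the masked region from surrounding pixels (Telea's method).
result = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("photo_clean.jpg", result)
```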

Gif showing Magic Eraser being used in Google Photos on Pixel 6 to manually remove distractions from the background of a photo of a child at a pumpkin patch. A person and various items are circled and then removed, resulting in an image with just the child.

Remove distractions from new photos taken on Pixel 6 or older photos taken on any camera, like this one from 20 years ago.

Magic Eraser builds on our suite of helpful editing features — including smart suggestions for portraits, photos of the sky and more — so you can get stunning photos easily and quickly. Developed through a close collaboration between the Google Photos and Google Research teams, these features are powered by machine learning and advances in computational photography.

Magic Eraser will be available in Google Photos on the Pixel 6 when it launches on October 28. So focus on capturing what matters — and if you find a distraction after the fact, Magic Eraser is there to help.