Adding three new colors to the Nest Thermostat family

Your home is your space. It’s also a place where you can express your style with color and personal touches. And we want our products to reflect your aesthetic while giving you the help you need. So we recently introduced three new colors to the Nest Learning Thermostat lineup (bringing the total to seven) to give you more options to fit your style.  


These new finishes—mirror black, brass and polished steel—are part of the new Artists Collection, inspired by the work of industrial artists who create beautiful pieces using various metals. Just like the original Nest Learning Thermostat, which comes in copper, black, stainless steel or white, these new thermostats are designed to look beautiful in your home while keeping you comfortable and helping you save energy.


  • Polished steel is a high-end, highly polished design for those who like to keep things timeless and classy.

  • Mirror black is striking and bold, with the deep lacquered black look of a grand piano.

  • Brass is warm and subtle: it can act as a pop of color for your home or blend in with other metal accents you may have.

They can program themselves to create a personalized schedule and turn down automatically to save energy when you’re away. You can control your thermostat from a phone, tablet, Google Home Hub, or even a Wear OS or Apple Watch with the Nest or Google Home app. And you can use your smart speaker or display to change the temperature with your voice—just say, “Hey Google, set the temperature to 68.”

These new Nest Learning Thermostats are available in the US (and the polished steel finish is also available in Canada) for $249.

Robbie Ivey’s story: how technology removes barriers

At Google we believe in the power of technology to make a difference in people’s lives. And for 19-year-old Robbie Ivey from Michigan, that certainly rings true.


Robbie has Duchenne muscular dystrophy, which has left him able to control only his eyes, head and right thumb joint. Among the many challenges Robbie and his family face, nighttime is one of the hardest. For years, Robbie’s mom Carrie has set her alarm every few hours to get up and change his position in bed so he doesn’t get bed sores or infections. Earlier this year, a sleep-deprived Carrie put out a message to the Muscular Dystrophy Association asking for help finding a better way. She got a response from Bill Weir, a retired tech worker, who thought he could set up Robbie’s bed to be controlled by voice activation. While working on the bed, Bill had an epiphany: if he could control the bed this way, why not everything else in Robbie’s bedroom?


As part of our efforts to spotlight accessible technologies throughout National Disability Awareness Month, we hear directly from Robbie about how technology has helped him gain more independence in his life as he starts off on his first year at Oakland University in Rochester.

A new course to teach people about fairness in machine learning

In my undergraduate studies, I majored in philosophy with a focus on ethics, spending countless hours grappling with the notion of fairness: both how to define it and how to effect it in society. Little did I know then how critical these studies would be to my current work on the machine learning education team where I support efforts related to the responsible development and use of AI.


As ML practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations (such as how different demographics of people will be affected by a model’s predictions) in the forefront of their minds. Additionally, they should proactively develop strategies to identify and ameliorate the effects of algorithmic bias.


To help practitioners achieve these goals, Google’s engineering education and ML fairness teams developed a 60-minute self-study training module on fairness, which is now available publicly as part of our popular Machine Learning Crash Course (MLCC).

ML bias

The MLCC Fairness module explores how human biases affect data sets. For example, people asked to describe a photo of bananas may not remark on their color (“yellow bananas”) unless they perceive it as atypical.

Students who complete this training will learn:

  • Different types of human biases that can manifest in machine learning models via data
  • How to identify potential areas of human bias in data before training a model
  • Methods for evaluating a model’s predictions not just for overall performance, but also for bias
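As a toy illustration of that last bullet (this sketch is not material from the module itself, and the data is made up), slicing a model's accuracy by group can reveal a disparity that the single overall number hides:

```python
# Hypothetical example: evaluate a model's accuracy per demographic group,
# not just overall. Group names and labels here are illustrative only.
from collections import defaultdict

def accuracy_by_group(labels, predictions, groups):
    """Return overall accuracy plus a per-group accuracy breakdown."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, pred, g in zip(labels, predictions, groups):
        total[g] += 1
        if y == pred:
            correct[g] += 1
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 0, 0, 1]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

overall, per_group = accuracy_by_group(labels, predictions, groups)
# A large gap between groups is a signal to investigate bias in the data
# or model, even when the overall accuracy looks acceptable.
print(overall, per_group)  # 0.625 {'a': 0.75, 'b': 0.5}
```

Here group "b" fares noticeably worse than group "a" despite a passable-looking aggregate score, which is exactly the kind of disparity the module teaches practitioners to look for.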

In conjunction with the release of this new Fairness module, we’ve added more than a dozen new fairness entries to our Machine Learning Glossary (tagged with a scale icon in the right margin). These entries provide clear, concise definitions of the key fairness concepts discussed in our curriculum, designed to serve as a go-to reference for both beginners and experienced practitioners. We also hope these glossary entries will help further socialize fairness concerns within the ML community.


We’re excited to share this module with you, and hope that it provides additional tools and frameworks that aid in building systems that are fair and inclusive for all. You can learn more about our work in fairness and on other responsible AI practices on our website.

Strike a pose with Pixel 3

With Pixel, we want to give you a camera that you can always trust and rely on. That means a camera which is fast, can take photos in any light and has built-in intelligence to capture those moments that only happen once. The camera should also give you a way to get creative with your photos and videos and be able to easily edit and share.

To celebrate Pixel 3 hitting the shelves in the US today, here are 10 things you can do with the Pixel camera.

1. Just point and shoot!

The Pixel camera has HDR+ on by default which uses computational photography to help you take better pictures in scenes where there is a range of brightness levels. When you press the shutter button, HDR+ actually captures a rapid burst of pictures, then quickly combines them into one. This improves results in both low-light and high dynamic range situations.
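As a rough illustration of why merging a burst helps (a toy sketch only; the real HDR+ pipeline also aligns the frames and weights them to protect highlights), averaging several noisy captures of the same scene cancels out random sensor noise:

```python
# Toy illustration of burst merging: per-pixel averaging of several noisy
# grayscale "captures" of the same scene. All pixel values are made up.

def merge_burst(frames):
    """Average a burst of equally sized grayscale frames pixel by pixel."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# Three noisy 4-pixel captures of a scene whose true values are 100, 100, 200, 200.
burst = [
    [ 96, 104, 196, 204],
    [104,  97, 205, 197],
    [100,  99, 199, 199],
]
merged = merge_burst(burst)
print(merged)  # [100.0, 100.0, 200.0, 200.0]
```

Each merged pixel lands closer to the true scene value than most of the individual frames, which is the core reason a burst beats a single exposure in low light.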

2. Top Shot

Get the best shot on the first try. When you take a motion photo, Top Shot captures alternate high-quality shots, then recommends the best one—even if it’s not exactly when you hit the shutter. Behind the scenes, Top Shot looks for shots where everyone is smiling, with eyes open and facing the camera. Just tap the thumbnail after you take a picture and you’ll get a suggestion to choose a better one when it’s available. You can also look for top shots on existing photos whenever you want by swiping up on the photo in Google Photos. Top Shot works best on people and is getting better all the time.

Top Shot on Pixel 3 

3. Night Sight

In low light scenes when you'd typically use flash—but don't want to because it makes a big scene, blinds your friends, and leaves harsh, uneven lighting—Night Sight can help you take colorful, detailed and low-noise pictures in super low light. Night Sight is coming soon to Pixel. 

4. Super Res Zoom

Pixel 3 lets you zoom in and still get sharp, detailed images. Fun fact: this works by taking advantage of the natural shaking of your hand when you take a photo. For every zoomed shot, we combine a burst of slightly different images, resulting in better resolution and lower noise. So when you pinch-zoom before pressing the shutter, you’ll get noticeably more detail in your picture than if you crop afterwards.

5. Group Selfie Cam

If you’re having trouble fitting everyone in the shot, or you want the beautiful scenery as well as your beautiful face, try our new wide-angle lens that lets you get much more in your selfie. You can fit up to 184% more in the shot* (my personal record is 11 people). Wide-angle lenses fit more people in the shot, but they also stretch and distort faces at the edges. The Pixel camera uses AI to correct this, so every face looks natural and you can use the full field of view of the selfie cam.

6. Photobooth

You spend ages getting the selfie at precisely the right angle, but then you try and reach the shutter button and lose the frame. Photobooth mode lets you take photos without pressing the shutter button: simply smile, poke your tongue out, or pucker those lips.

7. Playground

Bring more of your imagination to a scene with Playmoji— augmented reality characters that react to each other and to you—and add animated stickers and fun captions to your photos and videos. Playground also works on the front camera, so you can up your selfie game by standing next to characters you love, like Iron Man from the Marvel Cinematic Universe.

Playground on Pixel 3 helps you create and play with the world around you

8. Google Lens Suggestions

Just point the Pixel 3 camera at contact info, URLs, and barcodes and it’ll automatically suggest things to do like calling the number, or sending an email. This all happens without you having to type anything and Lens will show the suggestions even when you’re offline. It’s particularly helpful with business cards, movie posters, and takeout menus.

9. Portrait Mode

Our improved Portrait Mode on Pixel is designed to give you even sharper and more beautiful images this year. Plus we’ve added some fun editing options in Google Photos—like being able to change the blurriness of the background, or change the part of the picture in focus after you’ve taken it. Google Photos can also make the subject of your photo pop by leaving them in color, while changing the background to black and white.

Portrait Mode and color pop with Pixel 3 and Google Photos

10. Smooth video

We’ve added new selfie video stabilization so now you can get super smooth video from the front or back cameras. And if you’re recording someone or something that is moving, just tap on them and the video will lock on the subject as they, or you, move—so you don’t lose focus.

Finally, if you’re a pro photographer, we’ve added a bunch of new features to help you manage your photography from the ability to export RAW, to external mic support, to synthetic fill flash which mimics professional lighting equipment to bring a beautiful glow to your pictures.

Once you’ve taken all those amazing photos and videos, Pixel comes with unlimited storage so you never get that “storage full” pop-up at a crucial moment.**

Share your pics using #teampixel so we can see what you create with Pixel 3.



*Compared to iPhone Xs

**Free, unlimited online original-quality storage for photos/videos uploaded from Pixel 3 to Google Photos through 1/31/2022, and those photos/videos will remain free at original quality. g.co/help/photostorage

Open platforms like Android unlock potential

As a scientist, educator and businesswoman, my goal is to engage as many young minds as possible to get them excited about science and technology. That’s why the explosion in affordable technology over the last few years has been so exciting for STEM evangelists like me. Technology is no longer available only to the affluent and the privileged; instead, computers, tablets and smartphones are in the hands of individuals across all income levels. Reaching such a diverse audience is critical to our society’s ability to design the next generation of digital technologies and train the workforce of the future.

As a professor and the founder and Chief Technology Officer at Zyrobotics, a company that develops interactive STEM games and learning tools for children, I want our company’s educational programs to be available to the greatest number of people in order to have the greatest level of impact. In order to be successful, companies like mine need to reach kids where they spend their time—on their tablets, phones and other electronic learning devices. That means we want our apps to be compatible with as many devices as possible, and it’s why we’ve chosen to use Android’s open platform for our development. I’ve been able to reach far more people by building upon open platforms like Android than I ever could by teaching in a classroom.

As an app developer, I’ve benefited from Android’s ease of use, open coding platform, and popularity within diverse segments of the population. We've been able to expand our reach to all audiences, particularly those in disadvantaged communities. Many lower-income people (and many in developing countries) rely on more affordable or older Android devices, and because Android lets us update apps on older-model phones, we can ensure we’re providing the best experience to these users. Open platforms are also the main reason why most of our apps, including those that teach young children to code, are free.

Zyrobotics would be far less successful without the Android and Apple app stores and the number of users we are able to reach through those platforms. Both Google’s and Apple’s app stores have been especially useful in helping us maximize our apps’ exposure to the children and parents with whom we want to connect, and have helped us introduce important STEM concepts to children as early as five and six years old through 30 STEM-focused apps and games, such as our award-winning Turtle “Learn to Code” app.

The United States continues to lag behind other industrialized nations when it comes to preparing our children for STEM careers, and that technology workforce gap is partly a result of a lack of early engagement in STEM. Reaching children when their interests are just beginning to take shape is vital to building a more vibrant, diverse and successful STEM workforce for the future. Android helps us do that. I support smart regulation of technology companies that helps ensure today’s technology is made even more widely available, accessible and unbiased.

The benefits of technology to educate and empower the next generation are immeasurable. Open platforms create opportunities—for companies like mine, and the people we serve. Let's keep it that way.

Ayanna Howard, Ph.D., is Chief Technology Officer (CTO) at Zyrobotics, an educational technology company, and the Linda J. and Mark C. Smith Professor at the Georgia Institute of Technology. Her artificial intelligence (AI), robotics and assistive technology research has resulted in more than 250 peer-reviewed publications and a number of commercialized products.

Finding my way back to Antarctica with the help of Google Earth

Editor’s note: This guest post comes from a rock climber and adventurer who used Google Earth to aid his quest to explore Antarctica's remote Queen Maud Land with other athletes from The North Face team.


Nearly twenty-two years ago, my late friend Alex Lowe, Jon Krakauer and I huddled over a stack of tattered Norwegian maps from the “International Geophysical Year, 1957–58.” These were the first maps of Antarctica's remote Queen Maud Land, a stark glacial landscape dotted with impossibly jagged granite spires protruding from thousands of feet of ice. As we scanned the only detailed account of this faraway land, the complex and cryptic landscape made it obvious why these were some of the last unclimbed peaks on earth.


Back in ‘98, our paper maps were a static window into this dynamic land. We peeked in with trepidation, knowing that once we arrived on the ice cap, our lives would depend on rough estimations and ballpark figures, which still left a lot to chance. How many days would it take to reach the towers from our base camp? What if a storm pinned us down? What if we were unable to cross a dangerously crevassed part of the glacier?


Two decades later, the same thirst for pushing limits in the face of the unknown called me back to Queen Maud Land. This time the adventure began with my family in the comfort of our living room in Bozeman, Montana, with the paper maps replaced by smartphones and laptops. With Google Earth, my family was able to explore Queen Maud Land with me before my boots ever touched the ground. Together, we flew over snow-covered glaciers and found our way up the massive granite walls I hoped to scale with my teammates on an expedition put together by The North Face. We understood the complexity and enormity of the expedition together.

I always tell my family that the most important part of the mission is coming home—a goal that requires obsessive preparation, planning and training. Google Earth allowed us to drop pins on potential landing zones suitable for the fixed wing aircraft we were going to travel in. With the ability to visually assess the landscape in 3D, we could better see hazards and challenges before embarking on the expedition. Climate change has dramatically altered the landscape of the Antarctica I explored in the nineties and looking at up-to-date satellite imagery helped me come up with a new approach to navigating the terrain.

When we finally touched down on the ice, my fellow climber Cedar Wright aptly mentioned that “it was pretty surreal to recognize a place you had never physically been by your time spent exploring it remotely using Google Earth.” And he was right. After we got our bearings, we were able to confidently and strategically explore dozens of never-before-climbed peaks in this lunar landscape. The challenges of climbing in the frozen landscape were ever present, but the gift of being able to successfully put up so many stunning new climbs with a team of this caliber was an unforgettable privilege.

Conrad Anker working his way up Ulvetanna, “The Wolf’s Tooth,” in the Drygalski Mountain Range, in Antarctica. Photo by Savannah Cummins.


On expeditions like these we are reminded of why we explore. They’re physical and mental challenges that show we are capable of succeeding in places we never thought possible. The spirit of exploration is alive and well across our society, and technology like Google Earth opens up even more possibilities to explore. So, what will your next adventure be?


Learn more about the expedition and check out all of the photos & videos from The North Face expedition to Antarctica.


Titan M makes Pixel 3 our most secure phone yet


Security has always been a top priority for Pixel, spanning both the hardware and software of our devices. This includes monthly security updates and yearly OS updates, so Pixel always has the most secure version of Android, as well as Google Play Protect to help safeguard your phone from malware. Last year on Pixel 2, we also included a dedicated tamper-resistant hardware security module to protect your lock screen and strengthen disk encryption.

This year, with Pixel 3, we’re advancing our investment in secure hardware with Titan M, an enterprise-grade security chip custom built for Pixel 3 to secure your most sensitive on-device data and operating system. With Titan M, we took the best features from the Titan chip used in Google Cloud data centers and tailored it for mobile.



Here are a few ways Titan M protects your phone.

Security in the Bootloader

First, to protect Android from outside tampering, we’ve integrated Titan M into Verified Boot, our secure boot process.

Titan M helps the bootloader—the program that validates and loads Android when the phone turns on—make sure that you’re running the right version of Android. Specifically, Titan M stores the last known safe Android version and prevents “bad actors” from moving your device back to an older, potentially vulnerable, version of Android behind your back. Titan M also prevents attackers running in Android from unlocking the bootloader.
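As a rough sketch of this anti-rollback idea (illustrative only; Titan M's actual firmware logic and storage format are not public at this level of detail), the check amounts to a version ratchet held in tamper-resistant storage:

```python
# Simplified sketch of rollback protection: boot is refused if the OS image
# claims a version older than the last known-safe version held in secure
# hardware. The class name and logic here are illustrative, not Titan M's.

class RollbackProtector:
    def __init__(self, stored_min_version):
        # In real hardware this value lives in tamper-resistant storage.
        self._min_version = stored_min_version

    def check_boot(self, image_version):
        """Allow boot only for versions at or above the stored minimum."""
        if image_version < self._min_version:
            return False  # refuse an older, potentially vulnerable OS
        # Ratchet forward so future rollbacks below this version also fail.
        self._min_version = max(self._min_version, image_version)
        return True

titan = RollbackProtector(stored_min_version=9)
print(titan.check_boot(9))   # True: current version boots
print(titan.check_boot(10))  # True: upgrade boots and ratchets the minimum
print(titan.check_boot(9))   # False: rollback attempt is refused
```

The key design point is the one-way ratchet: once the device has seen a newer safe version, there is no software path back below it.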

Lock Screen Protection & Disk Encryption On-Device

Pixel 3 also uses Titan M to verify your lock screen passcode. It limits the number of unlock attempts, making it much harder for bad actors to guess passcode combinations by brute force. Only upon successful verification of your passcode will Titan M allow decryption.
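The effect of limiting unlock attempts can be sketched like this (a toy model; Titan M's actual throttling policy and key-release mechanism differ, and every name and value below is illustrative):

```python
# Toy sketch of throttled passcode verification: after a small number of
# failures, further attempts are refused, which makes brute-force guessing
# impractical even for short passcodes.
import hmac

MAX_ATTEMPTS = 5  # illustrative limit, not Titan M's actual policy

class ThrottledVerifier:
    def __init__(self, correct_passcode):
        self._secret = correct_passcode.encode()
        self._failures = 0

    def verify(self, attempt):
        if self._failures >= MAX_ATTEMPTS:
            raise RuntimeError("too many attempts; try again later")
        # Constant-time comparison avoids leaking information via timing.
        if hmac.compare_digest(attempt.encode(), self._secret):
            self._failures = 0
            return True   # only now would decryption keys be released
        self._failures += 1
        return False

v = ThrottledVerifier("1234")
print(v.verify("0000"))  # False: wrong guess, counted against the limit
print(v.verify("1234"))  # True: correct guess, counter resets
```

Because the counter lives alongside the secret in dedicated hardware rather than in Android itself, malware on the main OS cannot simply reset it and keep guessing.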

In addition, the secure flash and fully independent computation of Titan M makes it harder for an attacker to tamper with this process to gain the secrets to decrypt your data.

Secure Transactions in Third-Party Apps

Third, Titan M is used not only to protect Android and its functionality, but also to protect third-party apps and secure sensitive transactions. With Android 9, apps can now take advantage of StrongBox KeyStore APIs to generate and store their private keys in Titan M. The Google Pay team is actively testing out these new APIs to secure transactions.

For apps that rely on user interaction to confirm a transaction, Titan M also enables Android 9 Protected Confirmation, an API for protecting the most security-critical operations. As more processes come online and go mobile—like e-voting, and P2P money transfers—these APIs can help to ensure that the user (not malware) has confirmed the transaction. Pixel 3 is the first device to ship with this protection.

Insider Attack Resistance

Last, but not least, to prevent tampering, Titan M is built with insider attack resistance. The firmware on Titan M will never be updated unless you have entered your passcode, meaning bad actors cannot bypass your lock screen to update the firmware to a malicious version.

With the Pixel 3, we’ve increased our investment in security and put industry-leading hardware features into the device, so you can rest assured that your security and privacy are well protected. In the coming months, the security community will be able to audit Titan through its open-source firmware. In the meantime, you can test out Titan M and all of the smarts Pixel 3 brings, when it goes on sale on Thursday, October 18 in the U.S.

Schools in London give new life to old computers

Replacing aging computers with new devices can be a strain on school budgets, which means that schools often find themselves with out-of-date hardware sitting in cupboards, collecting dust. However, there’s a way to give old devices new life—by replacing their current operating system with one that’s easy to use, manage and is ready for the cloud.


We’re partnering with London Grid for Learning, a nonprofit organization focused on improving schools’ access to technology, and Neverware (creator of the CloudReady operating system) to help schools across London extend the life of their old devices. LGfL has committed to purchasing CloudReady licenses for over 85 percent of London’s schools so they can transform their slow, older hardware into fast, nimble devices that run just like Chromebooks. Because CloudReady is based on Google’s Chromium OS, it complements a cloud-first digital approach, such as using G Suite for Education.

At Connaught School for Girls in East London, pupils and teachers were struggling to use old and slow machines, especially once the school started integrating more digital tools, including Google Classroom. Tight budgets hindered replacement of the devices. The school saw Neverware as a budget-friendly way to revive its old laptops for the Google Classroom adoption, without purchasing a fleet of new devices or paying for laptop disposal.

The results were transformative as the students started using the devices more. “In the last academic year, the devices were booked four times. Now the laptops are booked 21 out of 25 periods per week, creating better access to IT for our students,” Silk says. “The beauty of Neverware is that it just works and your older devices are no longer a liability; they can be an asset again.”


Given current budgetary pressures and compliance demands, it’s more important than ever to find practical solutions that increase secure, affordable access to technology in schools. By partnering with London Grid for Learning and Neverware, Google for Education is improving access to education technology in London schools, whilst also contributing to the sustainability of older technology. If you are an LGfL school, visit go.neverware.com/LGfL to learn how you can use CloudReady by Neverware to refresh your underperforming or underutilised devices. All other schools in the UK can check out CloudReady directly at their website.

Pixel 3 and on-device AI: Putting superpowers in your pocket

Last week we announced Pixel 3 and Pixel 3 XL, our latest smartphones that combine the best of Google’s AI, software, and hardware to deliver radically helpful experiences. AI is a key ingredient in Pixel that unlocks new, useful capabilities, dramatically changing how we interact with our phones and the world around us.

But what exactly is AI?

Artificial intelligence (AI) is a fancy term for all the technology that lets our devices learn by example and act a bit smarter, from understanding written or spoken language to recognizing people and objects in images. AI is built by “training” machine learning models—a computer learns patterns from lots of example data, and uses these patterns to generate predictions. We’ve built one of the most secure and robust cloud infrastructures for processing this data to make our products smarter. Today, AI helps with everything from filtering spam emails in Gmail to getting answers on Google Search.
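As a toy illustration of that learn-from-examples loop (the data and model here are made up for illustration and are nothing Google-specific), even fitting a straight line to a handful of points follows the same train-then-predict pattern:

```python
# Toy "training": fit a line y = a*x + b to example (input, output) pairs
# with ordinary least squares, then use the learned pattern to predict for
# an input the model has never seen. Real ML models are vastly larger, but
# the learn-then-predict loop is the same.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Training examples: the underlying pattern is y = 2x + 1.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a * 10 + b)  # prediction for the unseen input 10 -> 21.0
```

The "pattern" the model learns is just the pair of numbers (a, b); larger models learn millions of such parameters, but they are put to work the same way.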

Machine learned models in the cloud are a secure way to make Google products smarter over time.

Bringing the best AI experiences to Pixel 3 involved some re-thinking from the ground up. Our phones are powerful computers with multiple sensors which enable new helpful and secure experiences when data is processed on your device. These AI-powered features can work offline and don’t require a network connection. And they can keep data on device, private to you. With Pixel 3, we complement our traditional approach to AI, where machine learning and data processing is done in the cloud, with reliable, accessible AI on device, when you’re on the go.

The most powerful machine learning models can now run directly on your Pixel to power fast experiences which work even when you’re offline.

Benefits of on-device AI

We’ve been working to miniaturize AI models to bring the power of machine learning and computing in the cloud directly to your Pixel. With on-device AI, new kinds of experiences become possible—that are lightning fast, are more battery efficient, and keep data on your device. We piloted this technology last year with Now Playing, bringing automatic music recognition to Pixel 2. This year, your Phone app and camera both use on-device AI to give you new superpowers, allowing you to interact more seamlessly with the world around you.

On-device AI works without having to go back to a server and consumes less of your battery life.

Take Call Screen, a new feature in the Phone app, initially launching in English in the U.S., where the Google Assistant helps you screen calls, including from unknown or unrecognized numbers. Anytime you receive an incoming call, just tap the “Screen Call” button and on-device speech recognition transcribes the caller’s responses (who’s calling and why) so you can decide whether to pick up, hang up, or mark the call as spam and block it. Because everything happens on your device, neither the audio nor the transcript from a screened call is sent to anyone other than you.

Call Screen uses on-device speech recognition to transcribe the caller’s responses in real time, without sending audio or transcripts off your phone.

This year’s Pixel camera helps you capture great moments and do more with what you see by building on-device AI right into your viewfinder. New low-power vision models can recognize facial expressions, objects, and text without having to send images off your device. Photobooth Mode is powered by an image scoring model that analyzes facial expressions and photo quality in real time. This will automatically capture smiles and funny faces so you can take selfies without having to reach for the shutter button. Top Shot uses the same kind of image analysis to suggest great, candid moments from a motion photo—recommending alternative shots in HDR+. 

Playground creates an intelligent AR experience by using AI models to recommend Playmoji, stickers, and captions so that you can express yourself based on the scene you’re in. And without having to take a photo at all, image recognition lets you act on info from the world around you—surfacing Google Lens suggestions to call phone numbers or show website addresses—right from your camera.

Pixel 3 is just the beginning. We want to empower people with new AI-driven abilities. With our advances in on-device AI, we can develop new, helpful experiences that run right on your phone and are fast, efficient, and private to you.

Complying with the EC’s Android decision

In July, in our response to the European Commission’s competition decision against Android, we said that rapid innovation, wide choice and falling prices are classic hallmarks of robust competition, and that Android has enabled all of them. We believe that Android has created more choice, not less. That’s why last week we filed our appeal of the Commission’s decision at the General Court of the European Union.

At the same time, we’ve been working on how to comply with the decision. We have now informed the European Commission of the changes we will make while the appeal is pending.

First, we’re updating the compatibility agreements with mobile device makers that set out how Android is used to develop smartphones and tablets. Going forward, Android partners wishing to distribute Google apps may also build non-compatible, or forked, smartphones and tablets for the European Economic Area (EEA).

Second, device manufacturers will be able to license the Google mobile application suite separately from the Google Search App or the Chrome browser. Since the pre-installation of Google Search and Chrome together with our other apps helped us fund the development and free distribution of Android, we will introduce a new paid licensing agreement for smartphones and tablets shipped into the EEA. Android will remain free and open source.

Third, we will offer separate licenses to the Google Search app and to Chrome.

We’ll also offer new commercial agreements to partners for the non-exclusive pre-installation and placement of Google Search and Chrome. As before, competing apps may be pre-installed alongside ours.

These new licensing options will come into effect on October 29, 2018, for all new smartphones and tablets launched in the EEA. We’ll be working closely with our Android partners in the coming weeks and months to transition to the new agreements. And of course, we remain deeply committed to continued innovation for the Android ecosystem.