Supporting the news industry and next-generation journalists on YouTube

Canadian journalists Anushree Dave and Muhammad Lila admitted into the Creator Program for Independent Journalists

Brandon Gonez joins the Sustainability Lab for Digital-First Newsrooms

Over the past few years, we’ve seen more and more people turn to YouTube every day to get their news. We want to connect our community to authoritative, trustworthy content, and believe we have a responsibility to support innovation and a sustainable ecosystem. That’s why we work alongside news organizations from around the world, both through products that help news partners reach audiences and monetize their video content, and through grants and training programs as part of the Google News Initiative.

We’re committed to supporting the future of journalism, and that means continuing to create opportunities for the industry to harness the latest technology and techniques for growth on YouTube. In April, we opened applications for two new programs focused on supporting the next generation of reporters and newsrooms. We’re excited to announce today the selection of nearly 50 independent journalists from around the world and over 40 digital-first newsrooms across the programs. 

Our Creator Program for Independent Journalists aims to give the growing number of reporters publishing independently the tools they need to succeed on YouTube. This year, two Canadian journalists were admitted into the inaugural program. Anushree Dave is a science reporter who aspires to create a PopSci for YouTube, focusing on the intersection of science, technology, and society. Muhammad Lila is a former warzone correspondent who now specializes in finding stories of hope, courage and resilience in places you least expect.

We are also thrilled to announce that The Brandon Gonez Show has joined the Sustainability Lab for Digital-First Newsrooms, which provides support for digital native newsrooms to start and expand their video operations. The Brandon Gonez Show is a weekly online platform that provides local and national news, and shares important untold stories with audiences of all backgrounds.

“Being an independent journalist allows me to set the editorial direction of our platform and focus on people who are left out of the conversation,” Brandon Gonez explained. “Starting The Brandon Gonez Show has allowed my team and me to fill that gap and ensure that more voices have a microphone to amplify their stories. Joining the YouTube program can really help to create a strong foundation and lead to massive growth. That growth can easily translate to more impact and greater results for the stories and people we cover.”

Altogether, the participants represent 25 countries, speak nearly 20 different languages, and report on a wide range of topics, from local news and national politics to undertold stories about marginalized communities. You can read more about the selected participants for the Creator Program here and for the Sustainability Lab here.

Over the course of the next year, we’ll offer journalists in the Creator Program training in industry best practices, including comprehensive sessions on video production and editing, audience development, entrepreneurship, and achieving financial sustainability on the platform. 

Participants will receive grants to help fuel their new video operations. They’ll also be connected with experts at YouTube to answer questions, and join groups of their peers to share insights and experiences. The digital newsrooms selected for our Sustainability Labs will receive grants, one-on-one support from YouTube, and have rich opportunities to learn from each other as they develop video news capabilities and business plans. 

We also hope to learn from our first set of participants how we can further improve and iterate on these programs for future classes. Our goal is to work together with the industry and help journalists and newsrooms thrive on YouTube. 

It remains an ongoing priority to build a more sustainable video news ecosystem as we continue to raise up authoritative content on our platform. There’s still a lot of work to do, but we’re eager to increase access to credible, trustworthy information from a diversity of sources for everyone who comes to YouTube to learn more about what’s happening in the world.

Persistence paid off for intern James Frater

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

Today we spoke with James Frater, a business intern working virtually in London. Learn how James’s passion for equitable solutions and love of learning brought him to Google.

What do you do at Google?

I am a Business Development Representative Intern for Google Cloud working in the Europe, Middle East and Africa (EMEA) region. In the role, I help leaders within organizations work through their specific pain points and match them with the Google solutions that best meet their needs.

I am fortunate to be in one of the most supportive and encouraging teams I have ever had the pleasure of working in. It means that every day when I wake up, I look forward to coming to work, because I know that irrespective of the challenges that lie ahead, I have a team that will support me.

What made you decide to apply to Google?

My decision to apply to Google was simple. I wanted to be somewhere that allowed me to build sustainable and scalable tech solutions that measurably improved the lives of the people that needed the most help. In particular, a long term goal of mine is to make sure that everyone in the Caribbean has access to good healthcare, education and technology that makes their lives easier. Google is a positive and transformative vehicle that serves the needs of billions of people. I wanted to be a part of that.

I had applied to Google before; this was the third year in a row, in fact! I was really determined to get in because I knew what a great opportunity this was, and I really believed I had what it took to be a Googler. I was fortunate enough to attend a Google Black talent event in 2020, where I got some really great advice about applications. For example, in the interview it’s less about arriving at the right answer and more about the thought process. Being able to ask clarifying questions, especially when you’re not sure, will impress your interviewer. It was definitely third time lucky for me!

How would you describe your path to Google?

My path to my current role was… unconventional to say the least. I am a medical student who has completed a management degree and also dabbles in efforts to reduce inequitable access to opportunities. I have completed internships in insurance, professional services, education and technology.

A picture of James Frater smiling

James Frater

What’s something you’re working on outside your internship?

I am very passionate about the structural challenges that a lot of underrepresented groups face, so I work to make access to institutions (primarily educational) more equitable. I co-founded The Ladder Project CIC which is a social enterprise that helps to holistically develop young people through a series of online and in-person workshops. Our mission is to ensure that all students leaving school are equipped with the skills required to succeed in the world of work and in higher education. Having projects and interests outside of my internship is something that has been encouraged, so it really gives me the confidence to bring my whole self to work.

What’s one thing you wish you could go back and tell yourself before applying?

"Relax!" is probably the main thing but some more practical things are:

  1. Quantify everything you say on your CV/resume. Put numbers and percentages, and talk about the impact your work had and its significance in context.

  2. In interviews, it is okay — and encouraged — to talk through your thinking, especially when you are not sure.

  3. Enjoy the process.

Any tips for aspiring Googlers?

Start creating solutions that help people. You don't have to wait until you get into a role to start doing things you are passionate about. I started doing talks and workshops for young people. From that, I co-founded The Ladder Project to help even more young people. It will also make your application stand out if you are able to demonstrate that level of initiative.

Improved Detection of Elusive Polyps via Machine Learning

With the increasing ability to consistently and accurately process large amounts of data, particularly visual data, computer-aided diagnostic systems are more frequently being used to assist physicians in their work. This, in turn, can lead to meaningful improvements in health care. An example of where this could be especially useful is in the diagnosis and treatment of colorectal cancer (CRC), which is particularly deadly and results in over 900K deaths per year, globally. CRC originates in small pre-cancerous lesions in the colon, called polyps, the identification and removal of which is very successful in preventing CRC-related deaths.

The standard procedure used by gastroenterologists (GIs) to detect and remove polyps is the colonoscopy, and about 19 million such procedures are performed annually in the US alone. During a colonoscopy, the gastroenterologist uses a camera-containing probe to check the intestine for pre-cancerous polyps and early signs of cancer, and removes tissue that looks worrisome. However, complicating factors, such as incomplete detection (in which the polyp appears within the field of view, but is missed by the GI, perhaps due to its size or shape) and incomplete exploration (in which the polyp does not appear in the camera’s field of view), can lead to a high fraction of missed polyps. In fact, studies suggest that 22%–28% of polyps are missed during colonoscopies, of which 20%–24% have the potential to become cancerous (adenomas).

Today, we are sharing progress made in using machine learning (ML) to help GIs fight colorectal cancer by making colonoscopies more effective. In “Detection of Elusive Polyps via a Large Scale AI System”, we present an ML model designed to combat the problem of incomplete detection by helping the GI detect polyps that are within the field of view. This work adds to our previously published work that maximizes the coverage of the colon during the colonoscopy by flagging for GI follow-up areas that may have been missed. Using clinical studies, we show that these systems significantly improve polyp detection rates.

Incomplete Exploration
To help the GI detect polyps that are outside the field of view, we previously developed an ML system that reduces the rate of incomplete exploration by estimating the fractions of covered and non-covered regions of a colon during a colonoscopy. This earlier work uses computer vision and geometry in a technique we call colonoscopy coverage deficiency via depth, to compute segment-by-segment coverage for the colon. It does so in two phases: first computing depth maps for each frame of the colonoscopy video, and then using these depth maps to compute the coverage in real time.
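The two-phase idea can be sketched in a few lines of code. The sketch below is purely illustrative: `estimate_depth` is a stand-in for the learned monocular depth model, and the angular-binning scheme and threshold are assumptions for demonstration, not the method from the paper.

```python
import math

def estimate_depth(frame):
    """Stand-in for a learned monocular depth model (intensity -> depth)."""
    return [[1.0 / (v + 1.0) for v in row] for row in frame]

def segment_coverage(frames, n_bins=12):
    """Fraction of angular bins around the view axis seen in any frame."""
    seen = [False] * n_bins
    for frame in frames:
        depth = estimate_depth(frame)
        h, w = len(depth), len(depth[0])
        mean_d = sum(map(sum, depth)) / (h * w)
        for y in range(h):
            for x in range(w):
                if depth[y][x] > mean_d:  # treat as a confidently observed pixel
                    ang = math.atan2(y - h / 2, x - w / 2)
                    b = int((ang + math.pi) / (2 * math.pi) * n_bins) % n_bins
                    seen[b] = True
    return sum(seen) / n_bins

# A few synthetic "frames": coverage is the fraction of directions observed.
frames = [[[(x * 7 + y * 13 + t) % 256 for x in range(16)] for y in range(16)]
          for t in range(3)]
print(segment_coverage(frames))
```

The real system computes dense depth maps with a neural network and aggregates them geometrically in real time; the point here is only the shape of the pipeline — per-frame depth first, then a running coverage estimate per colon segment.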

The ML system computes a depth image (middle) from a single RGB image (left). Then, based on the computation of depth images for a video sequence, it calculates local coverage (right), and detects where the coverage has been deficient and a second look is required (blue color indicates observed segments where red indicates uncovered ones). You can learn more about this work in our previous blog post.

This segment-by-segment work yields the ability to estimate what fraction of the current segment has been covered. The helpfulness of such functionality is clear: during the procedure itself, a physician may be alerted to segments with deficient coverage, and can immediately return to review these areas, potentially reducing the rates of missed polyps due to incomplete exploration.

Incomplete Detection
In our most recent paper, we look into the problem of incomplete detection. We describe an ML model that aids a GI in detecting polyps that are within the field of view, so as to reduce the rate of incomplete detection. We developed a system that is based on convolutional neural networks (CNN) with an architecture that combines temporal logic with a single frame detector, resulting in more accurate detection.

This new system has two principal advantages. The first is that it improves detection performance by reducing the number of false negative detections of elusive polyps, those polyps that are particularly difficult for GIs to detect. The second is the system’s very low false positive rate, which makes it more likely to be adopted in the clinic.
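One simple way to picture "temporal logic on top of a single-frame detector" is to require a detection to persist across consecutive frames before raising an alert. The sketch below is a hypothetical illustration of that idea — the window size, threshold, and voting rule are invented for the example and are not the paper's architecture, which learns temporal behavior end-to-end.

```python
from collections import deque

def temporal_filter(frame_scores, threshold=0.5, window=5, min_hits=3):
    """Alert only when >= min_hits of the last `window` per-frame detection
    scores exceed the threshold, suppressing one-frame spurious spikes."""
    recent = deque(maxlen=window)
    alerts = []
    for score in frame_scores:
        recent.append(score >= threshold)
        alerts.append(sum(recent) >= min_hits)
    return alerts

# A lone spurious spike never alerts; a sustained detection does.
print(temporal_filter([0.1, 0.9, 0.1, 0.1, 0.1, 0.1]))
print(temporal_filter([0.2, 0.8, 0.9, 0.7, 0.9, 0.9]))
```

This kind of temporal consistency check is what drives the false positive rate down: a single-frame detector fires on specular highlights or blur, but those rarely persist across frames the way a real polyp does.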

Examples of the variety of polyps detected by the ML system.

We trained the system on 3600 procedures (86M video frames) and tested it on 1400 procedures (33M frames). All the videos and metadata were de-identified. The system detected 97% of the polyps (i.e., it yielded 97% sensitivity) at 4.6 false alarms per procedure, which is a substantial improvement over previously published results. Of the false alarms, follow-up review showed that some were, in fact, valid polyp detections, indicating that the system was able to detect polyps that were missed by the performing endoscopist and by those who annotated the data. The performance of the system on these elusive polyps suggests its generalizability in that the system has learned to detect examples that were initially missed by all who viewed the procedure.
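For readers unfamiliar with the metrics, the two headline numbers are simple ratios. The counts below are hypothetical, chosen only to reproduce the reported rates for illustration; the paper's actual polyp and false-alarm counts are not given here.

```python
def sensitivity(true_positives, total_polyps):
    """Fraction of real polyps the system detected."""
    return true_positives / total_polyps

def false_alarms_per_procedure(false_alarms, procedures):
    """Average number of false alarms raised per procedure."""
    return false_alarms / procedures

tp, polyps = 970, 1000        # hypothetical: 970 of 1000 polyps found -> 0.97
fa, procedures = 6440, 1400   # hypothetical: 6440 alarms / 1400 procedures -> 4.6
print(sensitivity(tp, polyps))
print(false_alarms_per_procedure(fa, procedures))
```

Note that the two metrics trade off against each other: lowering the detection threshold raises sensitivity but also raises the false-alarm rate, which is why both numbers are reported together.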

We evaluated the system’s performance on polyps that are in the field of view for less than five seconds, which makes them more difficult for the GI to detect, and for which models typically have much lower sensitivity. In this case the system attained about three times the sensitivity achieved in the original procedure. When the polyps were present in the field of view for less than two seconds, the difference was even more stark: the system exhibited a 4x improvement in sensitivity.

It is also interesting to note that the system is fairly insensitive to the choice of neural network architecture. We used two architectures: RetinaNet and LSTM-SSD. RetinaNet is a leading technique for object detection on static images (used for video by applying it to frames in a consecutive fashion). It is one of the top performers on a variety of benchmarks, given a fixed computational budget, and is known for balancing speed of computation with accuracy. LSTM-SSD is a true video object detection architecture, which can explicitly account for the temporal character of the video (e.g., temporal consistency of detections, ability to deal with blur and fast motion, etc.). It is known for being robust and very computationally lightweight and can therefore run on less expensive processors. Comparable results were also obtained on the much heavier Faster R-CNN architecture. The fact that results are similar across different architectures implies that one can choose the network meeting the available hardware specifications.

Prospective Clinical Research Study
As part of the research reported in our detection paper, we ran a clinical validation on 100 procedures in collaboration with Shaare Zedek Medical Center in Jerusalem, where our system was used in real time to help GIs. The system helped detect an average of one polyp per procedure that would otherwise have been missed by the GI performing the procedure, while not missing any of the polyps detected by the GIs, and with 3.8 false alarms per procedure. The feedback from the GIs was consistently positive.

We are encouraged by the potential helpfulness of this system for improving polyp detection, and we look forward to working together with the doctors in the procedure room to further validate this research.

Acknowledgements
The research was conducted by teams from Google Health and Google Research, Israel with support from Verily Life Sciences, and in collaboration with Shaare Zedek Medical Center. Verily is advancing this research via a newly established center in Israel, led by Ehud Rivlin. This research was conducted by Danny Veikherman, Tomer Golany, Dan M. Livovsky, Amit Aides, Valentin Dashinsky, Nadav Rabani, David Ben Shimol, Yochai Blau, Liran Katzir, Ilan Shimshoni, Yun Liu, Ori Segol, Eran Goldin, Greg Corrado, Jesse Lachter, Yossi Matias, Ehud Rivlin, and Daniel Freedman. Our appreciation also goes to several institutions and GIs who provided advice along the way and tested our system prototype. We would like to thank all of our team members and collaborators who worked on this project with us, including: Chen Barshai, Nia Stoykova, and many others.

Source: Google AI Blog


New from Google Nest: The latest Cams and Doorbell are coming

Google Nest’s mission is to build products that make a more helpful home. All of this starts with helping you understand what’s happening within the walls of your home and outside of it. 

One of Nest’s first goals was to simplify home security, and it helped millions of people across the globe do this. So when we started dreaming up our next generation of cameras and doorbells, we wanted to reflect where the connected home — and your expectations — were heading. That included smarter alerts, wire-free options for installation flexibility, greater value and beautiful designs, plus enhanced privacy and security. We wanted our newest line to give you the most comprehensive set of intelligent alerts right out of the box, and to work easily with your other Nest products, like displays.

Today we’re introducing our next-generation Nest Cams and Doorbell: Google Nest Cam (battery) is our first outdoor/indoor battery-powered camera ($329); Google Nest Doorbell (battery) is our first battery-powered doorbell ($329). Learn more about 11 things to love about the new Nest Cam and Doorbell.
Meet the new Google Nest Cam and Google Nest Doorbell

Then there’s Google Nest Cam with floodlight, our first connected floodlight camera ($549) and finally the second-generation Google Nest Cam (wired), a wired indoor camera and our most affordable Nest Cam ever ($169). 

We’ve heard how much people appreciate it when their Nest products all work well together. These new devices are no different. With the new Nest Cams and a display, you can keep an eye on the backyard from your kitchen and get alerts when the doorbell rings. Our new cameras are also fully integrated with the Google Home app. The Google Home app works with any compatible Android or iOS device, giving you access to all your compatible home devices in one place, anywhere and anytime. 

The new battery-powered Nest Cam and Nest Doorbell will go on sale on August 25, and are available for preorder today from the Google Store, JB Hi-Fi, Harvey Norman, Officeworks and The Good Guys. Those who preorder from selected retailers can also receive a second-generation Nest Hub as a gift.

Nest Cam with floodlight and the new wired indoor Nest Cam are coming soon. 

To learn more, visit the Google Store

New from Google Nest: The latest Cams and Doorbells are coming

Hero Image: Nest Cam and Nest Doorbell

Google Nest’s mission is to build products that make a more helpful home. All of this starts with helping you understand what’s happening within the walls of your home and outside of it. 


One of Nest’s first goals was to simplify home security, and it helped millions of people across the globe do this. So when we started dreaming up what our next generation of cameras and doorbells would be like, we wanted to reflect where the connected home — and your expectations — were heading. That included smarter alerts, wire-free options for installation flexibility, greater value and beautiful designs, plus enhanced privacy and security. We wanted our newest line to give you the most comprehensive set of intelligent alerts right out of the box, and to work easily with your other Nest products, like displays.


Today we’re introducing our next-generation Nest Cams and Doorbell: Google Nest Cam (battery) is our first outdoor/indoor battery-powered camera (NZ$359); Google Nest Doorbell (battery) is our first battery-powered doorbell (NZ$359). Learn more about 11 things to love about the new Nest Cam and Doorbell.



Then there’s Google Nest Cam with floodlight, our first connected floodlight camera (NZ$599) and finally the second-generation Google Nest Cam (wired), a wired indoor camera and our most affordable Nest Cam ever (NZ$189).


We’ve heard how much people appreciate it when their Nest products all work well together. These new devices are no different. With the new Nest Cams and a display, you can keep an eye on the backyard from your kitchen and get alerts when the doorbell rings. Our new cameras are also fully integrated with the Google Home app. The Google Home app works with any compatible Android or iOS device, giving you access to all your compatible home devices in one place, anywhere and anytime.


The new battery-powered Nest Cam and Nest Doorbell will go on sale on August 25, and are available for preorder today from Google Store, Noel Leeming, JB Hi-Fi, Harvey Norman, PB Tech and 2Degrees. Those who preorder from selected retailers can also receive a second-generation Nest Hub as a gift.


Nest Cam with floodlight and the new wired indoor Nest Cam are coming soon.


To learn more, visit the Google Store.



New from Google Nest: The latest Cams and Doorbells are here

Google Nest’s mission is to create a home that takes care of the people inside it and the world around it. All of this starts with helping you understand what’s happening within the walls of your home and outside of it. 

One of Nest’s first goals was to simplify home security, and we did this with our first line of cameras. So when we started dreaming up what our next generation of cameras and doorbells would be like, we wanted to reflect where the connected home — and your expectations — were heading. That included smarter alerts, wire-free options for installation flexibility, greater value and beautiful designs, plus enhanced privacy and security. We wanted our new line to give you the most comprehensive set of intelligent alerts right out of the box, and to work easily with your other Nest products, like displays.

Today we’re introducing our next-generation Nest Cams and Doorbell: Google Nest Cam (battery) is our first outdoor/indoor battery-powered camera ($179.99); Google Nest Doorbell (battery) is our first battery-powered doorbell ($179.99). Then there’s Google Nest Cam with floodlight, our first connected floodlight camera ($279.99) and finally the second-generation Google Nest Cam (wired), a wired indoor camera and our most affordable Nest Cam ever ($99.99).

The new battery-powered Nest Cam and Nest Doorbell are available for preorder today and will go on sale on Aug. 24. Nest Cam with floodlight and the new wired indoor Nest Cam are coming soon.

Security with smarts

Because we’re all overloaded with notifications every day, our next-generation cameras and doorbell are made to send you the most helpful alerts. They detect important events that happen in and around the home, including alerts for people, animals and vehicles — and in Nest Doorbell’s case, also packages. Our new cameras and doorbell can do this because they process what they see on-device, which means more relevant notifications and added privacy and security. On-device processing means that all of this works right out of the box, no subscription required. 

Nest Cam (battery) installation


More versatility for your home

Nest’s new camera and doorbell line is truly made for every home. Battery technology allows you to install Nest Cam and Nest Doorbell nearly anywhere in your home — not just where there’s a power outlet or existing doorbell wires. The wire-free design makes installation even easier, too. And for those who prefer the option to wire their devices, we’ve added the ability to wire the battery-powered Nest Cam and Nest Doorbell. The Google Store is stocked with accessories for Nest Cam and Nest Doorbell so it’s easy to install them where you want.

The new cameras and doorbells work better together with other Nest products, like displays

Nest devices that work together

Customers have told us how much they appreciate that their Nest products all work well together. These new devices are no different. With the new Nest Cams and a display, you can keep an eye on the backyard from your kitchen and get alerts when the doorbell rings. Our new cameras are also fully integrated with the Google Home app, giving you access to all your compatible home devices in one place. With a Nest Aware subscription, you can unlock even more: Extend your event video history from three hours to 30 or 60 days, gain advanced features like familiar face detection (not available in Illinois) and get continuous 24/7 video history on wired Nest Cams with Nest Aware Plus. 

Nest floodlight installed on a house

Reliability when you need it

In case of a power or Wi-Fi outage, Nest Doorbell, Nest Cam (battery) and Nest Cam with floodlight all have local storage fallback, meaning they’ll record up to one hour of events on-device (about a week’s worth of events). Nest Cam (wired) also records on-device if your Wi-Fi is down. When service returns, the devices will upload your events to the cloud, so you can review what happened.

Made with care 

We believe that technology for the home should be welcoming, and complement your decor rather than distract from it. When designing our new products, we drew inspiration from lighting and architecture to create devices that look great together and in lots of different settings. In the U.S., the new indoor wired Nest Cam and battery-powered Nest Doorbell come in several colors inspired by nature, and all of the new devices are designed sustainably with recycled materials.

Learn more about the new battery-powered Nest Doorbell and Nest Cam, available for preorder today and on sale Aug. 24.

11 things to love about the new Nest Cam and Doorbell

Google Nest Cam (battery) and Google Nest Doorbell (battery) are the latest additions to the Nest family — and they’re Nest’s first battery-powered security devices, built for every home. Here are 11 things to know:

  • Smarter alerts, right out of the box: Your new Nest Cam and Doorbell can do more from day one because we moved object detection on-device, allowing us to include features that are usually behind a subscription (like Activity Zones and smart alerts, including package, animal, vehicle and person detection) for no additional cost, plus three hours of event video history. Thanks to on-device processing, they can also record up to a week’s worth of events if power or Wi-Fi is out.
  • Made with machine learning: Building a camera that uses ML to recognize objects requires showing the ML model millions of images first. Our new Nest Cameras and Doorbells have been trained on 40 million images to accommodate lots of different environments and lighting conditions. Thanks to a cutting-edge TPU chip, our new cameras run an ML model up to 7.5 times per second, so reliability and accuracy are even better.
  • Works in any home: Nest Cam and Doorbell’s wire-free designs, built-in rechargeable batteries and optional power connectors allow you to install them where you want — not only where there’s a power outlet or pre-existing wiring. 
  • Set up your way: Make sure to check out Google Store’s accessories. In addition to weatherproof cables, a tabletop stand with a power cord allows you to place your Nest Cam on an indoor surface, like a mantle. There’s also an anti-theft mount that tethers your Nest Cam to the magnetic mount for extra security. For Nest Doorbell, there’s a horizontal wedge and an AC adapter.
  • Works better, together: Nest Cam and Nest Doorbell seamlessly work with your Nest displays. Just say “Hey Google, show me the backyard” to see your Nest Cam feed. And you can set up your speakers and displays to chime when someone rings your Nest Doorbell, while using your display to see who’s at the door and take action from the screen. 
  • All on the Google Home app: It’s easy to see all of your events quickly, and your 24/7 live feed at any time in the Google Home app. If you have more than one Nest camera, you can view all of them in one place, alongside your other connected home devices. You can even filter by event type — for example, you can pull up every package delivery. 
  • See clearly in a variety of conditions: Both Nest Cam and Nest Doorbell have night vision, 6x zoom, and HDR so images are crisp in the dark or bright light. And we gave Nest Doorbell a taller field of view so you can see visitors from head to toe and packages as close as eight inches away from your door.
  • Extra secure with a Google account: Your devices are only as secure as your account. That’s why the new Nest Cam and Nest Doorbell require a Google account, which comes with added protections like suspicious activity detection, 2-step verification and password checkup. Read more about our commitment to privacy and security in Nest’s dedicated Safety Center.
  • Add a Nest Aware subscription: With a Nest Aware subscription ($6 monthly), you’ll get familiar face detection and the ability to call 911 from the Google Home app (U.S. only) as well as 30 days of event video history. With a Nest Aware Plus subscription ($12 monthly), you’ll get all of this with 60 days of event video history and the option for 10 days of continuous video recording when your Nest Cam is plugged into a power outlet.
  • Made with care: Nest Cam and Doorbell are made with recycled materials and rigorously tested through drops and extreme weather, like heavy rain and hurricane-strength winds.
  • Built for your life: Nest technology is designed to fit into your home, not distract from it. Nest Cam is sleek and white and fits in anywhere — indoors or outdoors. And Nest Doorbell’s design was inspired by clean, minimalist architecture. In the U.S., it comes in four different colors so your front door can make a great first impression.

The new battery-powered Nest Cam and Nest Doorbell are available for pre-order today for $179.99 — you can visit the Google Store to find out more, including whether Nest Cam and Nest Doorbell will be available in your country.