Tag Archives: AI

Strike a pose with Pixel 3

With Pixel, we want to give you a camera you can always trust and rely on. That means a camera that is fast, takes great photos in any light, and has built-in intelligence to capture the moments that only happen once. It should also give you ways to get creative with your photos and videos, and make them easy to edit and share.

To celebrate Pixel 3 hitting the shelves in the US today, here are 10 things you can do with the Pixel camera.

1. Just point and shoot!

The Pixel camera has HDR+ on by default, which uses computational photography to help you take better pictures in scenes with a wide range of brightness levels. When you press the shutter button, HDR+ actually captures a rapid burst of pictures, then quickly combines them into one. This improves results in both low-light and high-dynamic-range situations.
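HDR+ itself is a sophisticated align-and-merge pipeline, but the core idea—combine many noisy frames of the same scene into one cleaner image—is easy to sketch. Here is a toy NumPy illustration (not Google's implementation) that assumes the burst is already aligned and merges it with a per-pixel median:

```python
import numpy as np

def merge_burst(frames):
    """frames: list of aligned HxWx3 float images in [0, 1]; returns the merged image."""
    stack = np.stack(frames, axis=0)
    # A per-pixel median is robust to ghosting from small residual misalignments.
    return np.median(stack, axis=0)

# Simulate a noisy burst of the same scene and merge it.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(4, 4, 3))
burst = [np.clip(scene + rng.normal(0.0, 0.1, scene.shape), 0.0, 1.0) for _ in range(8)]
merged = merge_burst(burst)
print(np.abs(burst[0] - scene).mean())  # error of a single noisy frame
print(np.abs(merged - scene).mean())    # merged result is noticeably closer to the scene
```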

2. Top Shot

Get the best shot on the first try. When you take a motion photo, Top Shot captures alternate high-quality shots, then recommends the best one—even if it’s not exactly when you hit the shutter. Behind the scenes, Top Shot looks for shots where everyone is smiling, with eyes open and facing the camera. Tap the thumbnail after you take a picture and you’ll get a suggestion to choose a better one when it’s available. You can also look for Top Shot alternatives later by swiping up on a photo in Google Photos. Top Shot works best on people and is getting better all the time.


Top Shot on Pixel 3 

3. Night Sight

In low light scenes when you'd typically use flash—but don't want to because it makes a big scene, blinds your friends, and leaves harsh, uneven lighting—Night Sight can help you take colorful, detailed and low-noise pictures in super low light. Night Sight is coming soon to Pixel. 

4. Super Res Zoom

Pixel 3 lets you zoom in and still get sharp, detailed images. Fun fact: this works by taking advantage of the natural shaking of your hand when you take a photo. For every zoomed shot, we combine a burst of slightly different images, resulting in better resolution and lower noise. So when you pinch-zoom before pressing the shutter, you’ll get noticeably more detail in your picture than if you crop afterwards.
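To make the idea concrete, here is a toy shift-and-add sketch (again, not the Pixel algorithm): each slightly shifted frame samples the scene at different sub-pixel positions, so placing the frames onto a finer grid recovers detail that cropping a single frame can't. The shifts are assumed known here; in a real pipeline they would be estimated by aligning the burst.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """frames: list of HxW arrays; shifts: per-frame (dy, dx) offsets in low-res pixels."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res pixel to its (shifted) position on the high-res grid.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[ys[:, None], xs[None, :]] += frame
        hits[ys[:, None], xs[None, :]] += 1
    hits[hits == 0] = 1  # unobserved high-res pixels stay at zero
    return acc / hits

frames = [np.random.rand(8, 8) for _ in range(4)]
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
high_res = shift_and_add(frames, shifts)  # 16x16 output grid
```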

5. Group Selfie Cam

If you’re having trouble fitting everyone in the shot, or you want the beautiful scenery as well as your beautiful face, try our new wide-angle lens, which lets you get much more in your selfie. You can get up to 184% more in the shot*—my personal record is 11 people. Wide-angle lenses fit more people in the frame, but they also stretch and distort faces near the edges. The Pixel camera uses AI to correct this, so every face looks natural and you can use the full field of view of the selfie cam.

6. Photobooth

You spend ages getting the selfie at precisely the right angle, then reach for the shutter button and lose the frame. Photobooth mode lets you take photos without pressing the shutter button: simply smile, poke your tongue out, or pucker those lips.

7. Playground

Bring more of your imagination to a scene with Playmoji— augmented reality characters that react to each other and to you—and add animated stickers and fun captions to your photos and videos. Playground also works on the front camera, so you can up your selfie game by standing next to characters you love, like Iron Man from the Marvel Cinematic Universe.


Playground on Pixel 3 helps you create and play with the world around you

8. Google Lens Suggestions

Just point the Pixel 3 camera at contact info, URLs, and barcodes and it’ll automatically suggest things to do like calling the number, or sending an email. This all happens without you having to type anything and Lens will show the suggestions even when you’re offline. It’s particularly helpful with business cards, movie posters, and takeout menus.

9. Portrait Mode

Our improved Portrait Mode on Pixel is designed to give you even sharper and more beautiful images this year. Plus we’ve added some fun editing options in Google Photos—like being able to change the blurriness of the background, or change the part of the picture in focus after you’ve taken it. Google Photos can also make the subject of your photo pop by leaving them in color, while changing the background to black and white.


Portrait Mode and color pop with Pixel 3 and Google Photos

10. Smooth video

We’ve added new selfie video stabilization so now you can get super smooth video from the front or back cameras. And if you’re recording someone or something that is moving, just tap on them and the video will lock on the subject as they, or you, move—so you don’t lose focus.

Finally, if you’re a pro photographer, we’ve added a bunch of new features to help you manage your photography, from the ability to export RAW, to external mic support, to synthetic fill flash, which mimics professional lighting equipment to bring a beautiful glow to your pictures.

Once you’ve taken all those amazing photos and videos, Pixel comes with unlimited storage so you never get that “storage full” pop up at a crucial moment.** 

Share your pics using #teampixel so we can see what you create with Pixel 3.



*Compared to iPhone Xs

**Free, unlimited online original-quality storage for photos/videos uploaded from Pixel 3 to Google Photos through 1/31/2022, and those photos/videos will remain free at original quality. g.co/help/photostorage

A new course to teach people about fairness in machine learning

In my undergraduate studies, I majored in philosophy with a focus on ethics, spending countless hours grappling with the notion of fairness: both how to define it and how to effect it in society. Little did I know then how critical these studies would be to my current work on the machine learning education team where I support efforts related to the responsible development and use of AI.


As ML practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations (such as how different demographics of people will be affected by a model’s predictions) in the forefront of their minds. Additionally, they should proactively develop strategies to identify and ameliorate the effects of algorithmic bias.


To help practitioners achieve these goals, Google’s engineering education and ML fairness teams developed a 60-minute self-study training module on fairness, which is now available publicly as part of our popular Machine Learning Crash Course (MLCC).

ML bias

The MLCC Fairness module explores how human biases affect data sets. For example, people asked to describe a photo of bananas may not remark on their color (“yellow bananas”) unless they perceive it as atypical.

Students who complete this training will learn:

  • Different types of human biases that can manifest in machine learning models via data
  • How to identify potential areas of human bias in data before training a model
  • Methods for evaluating a model’s predictions not just for overall performance, but also for bias (see the sketch below)
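As a minimal, hypothetical illustration of that last point—evaluating predictions per demographic slice rather than only in aggregate—here is a short pandas sketch. It is not part of the MLCC module itself, and the column names are invented:

```python
import pandas as pd

def sliced_metrics(df, group_col="group", label_col="label", pred_col="pred"):
    """Per-group accuracy and positive-prediction rate for a binary classifier."""
    out = df.assign(correct=(df[label_col] == df[pred_col]).astype(float))
    return out.groupby(group_col).agg(
        n=(pred_col, "size"),
        accuracy=("correct", "mean"),
        positive_rate=(pred_col, "mean"),  # large gaps across groups can signal bias
    )

# Toy example with two demographic groups.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0, 0],
    "pred":  [1, 0, 0, 1, 1, 0],
})
print(sliced_metrics(df))
```

Comparing slices like this is only a starting point, but it makes visible the gaps that a single aggregate accuracy number can hide.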

In conjunction with the release of this new Fairness module, we’ve added more than a dozen new fairness entries to our Machine Learning Glossary (tagged with a scale icon in the right margin). These entries provide clear, concise definitions of the key fairness concepts discussed in our curriculum, designed to serve as a go-to reference for both beginners and experienced practitioners. We also hope these glossary entries will help further socialize fairness concerns within the ML community.


We’re excited to share this module with you, and hope that it provides additional tools and frameworks that aid in building systems that are fair and inclusive for all. You can learn more about our work in fairness and on other responsible AI practices on our website.

Pixel 3 and on-device AI: Putting superpowers in your pocket

Last week we announced Pixel 3 and Pixel 3 XL, our latest smartphones that combine the best of Google’s AI, software, and hardware to deliver radically helpful experiences. AI is a key ingredient in Pixel that unlocks new, useful capabilities, dramatically changing how we interact with our phones and the world around us.

But what exactly is AI?

Artificial intelligence (AI) is a fancy term for all the technology that lets our devices learn by example and act a bit smarter, from understanding written or spoken language to recognizing people and objects in images. AI is built by “training” machine learning models—a computer learns patterns from lots of example data, and uses these patterns to generate predictions. We’ve built one of the most secure and robust cloud infrastructures for processing this data to make our products smarter. Today, AI helps with everything from filtering spam emails in Gmail to getting answers on Google Search.


Machine learned models in the cloud are a secure way to make Google products smarter over time.

Bringing the best AI experiences to Pixel 3 involved some re-thinking from the ground up. Our phones are powerful computers with multiple sensors which enable new helpful and secure experiences when data is processed on your device. These AI-powered features can work offline and don’t require a network connection. And they can keep data on device, private to you. With Pixel 3, we complement our traditional approach to AI, where machine learning and data processing is done in the cloud, with reliable, accessible AI on device, when you’re on the go.


The most powerful machine learning models can now run directly on your Pixel to power fast experiences which work even when you’re offline.

Benefits of on-device AI

We’ve been working to miniaturize AI models to bring the power of machine learning and computing in the cloud directly to your Pixel. With on-device AI, new kinds of experiences become possible—that are lightning fast, are more battery efficient, and keep data on your device. We piloted this technology last year with Now Playing, bringing automatic music recognition to Pixel 2. This year, your Phone app and camera both use on-device AI to give you new superpowers, allowing you to interact more seamlessly with the world around you.
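The exact tooling behind Pixel's on-device models isn't described here, but the general pattern of shrinking a trained model for a phone is well established. As a rough, hypothetical sketch, here's how a stand-in Keras model could be converted to TensorFlow Lite with post-training quantization so it runs faster and uses less memory and battery:

```python
import tensorflow as tf

# Stand-in model for illustration; Pixel's real on-device models are far more specialized.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite and apply default post-training quantization,
# shrinking the model so it can run efficiently on a phone, offline.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```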


On-device AI works without having to go back to a server and consumes less of your battery life.

Take Call Screen, a new feature in the Phone app, initially launching in English in the U.S., where the Google Assistant helps you screen calls, including from unknown or unrecognized numbers. Anytime you receive an incoming call, just tap the “Screen Call” button and on-device speech recognition transcribes the caller’s responses (who is calling? why are they calling?) so you can decide whether to pick up, hang up, or mark the call as spam and block it. Because everything happens on your device, neither the audio nor the transcript from a screened call is sent to anyone other than you.


Call Screen uses on-device speech recognition to transcribe the caller’s responses in real time, without sending audio or transcripts off your phone.

This year’s Pixel camera helps you capture great moments and do more with what you see by building on-device AI right into your viewfinder. New low-power vision models can recognize facial expressions, objects, and text without having to send images off your device. Photobooth Mode is powered by an image scoring model that analyzes facial expressions and photo quality in real time. This will automatically capture smiles and funny faces so you can take selfies without having to reach for the shutter button. Top Shot uses the same kind of image analysis to suggest great, candid moments from a motion photo—recommending alternative shots in HDR+. 

Playground creates an intelligent AR experience by using AI models to recommend Playmoji, stickers, and captions so that you can express yourself based on the scene you’re in. And without having to take a photo at all, image recognition lets you act on info from the world around you—surfacing Google Lens suggestions to call phone numbers or show website addresses—right from your camera.

Pixel 3 is just the beginning. We want to empower people with new AI-driven abilities. With our advances in on-device AI, we can develop new, helpful experiences that run right on your phone and are fast, efficient, and private to you.

The Applied Computing Series gets college students into computer science

What do fighting wildfires, searching for dogs in photos and using portrait mode on your phone have in common? Data science and machine learning. Experts across a range of businesses and industries are using data to give machines the ability to “learn” and complete tasks.


But while the field of data science is rapidly growing, workforce projections show that there isn’t enough new talent to meet the increasing demand for these roles, especially in machine learning. Given the nationwide scarcity of computer science faculty, we’ve been thinking about how to give students a hands-on computer science education even without CS PhD educators.


At a handful of colleges across the country, we’re piloting the Applied Computing Series (ACS): two college-level introductory computer science and data science courses and a machine learning intensive. The Series will help students understand how to use the best available tools to manipulate and understand data and then solve critical business problems.


Students at Bay Path University learning Python programming as part of our first ACS cohort of universities.


The machine learning intensive is meant for students who have already taken introductory computer science classes and who want to pursue more advanced coursework. The intensive will ultimately prepare them for opportunities as data engineers, technical program managers, or data analysts in industries ranging from healthcare to insurance to entertainment and media. Through these partnerships, we provide industry-relevant content and projects, while the colleges and universities provide experienced faculty to lead in-class project work and coach students.


The Applied Computing courses are currently available to students at eight colleges and universities: Adrian College, Agnes Scott College, Bay Path University, Heidelberg University, Holy Names University, Lasell College, SUNY Buffalo State, and Sweet Briar College. If you’re a university and want to apply to be a site for the Applied Computing courses in the fall of 2019, find out more on our website.


The machine learning intensive will start in February 2019 at Mills College and again during the summer session at Agnes Scott College, Bay Path University, Heidelberg University and Scripps College and is open for applications from all U.S. students. If you’re a student who has already completed college-level computer and/or data science coursework and want to apply for the machine learning intensive, learn more at our website.

ShadowPlay: Using our hands to have some fun with AI

Editor’s note: TensorFlow, our open source machine learning platform, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and at Google, we're always looking to do the same. Here's one of those stories.

Chinese shadow puppetry—which uses silhouette figures and music to tell a story—is an ancient Chinese art form that’s been used by generations to charm communities and pass along cultural history. At Google, we’re always experimenting with how we can connect culture with AI and make it fun, which got us thinking: can AI help put on a shadow puppet show?

So we created ShadowPlay, an interactive installation that celebrates the shadow puppetry art form. The installation, built using TensorFlow and TPUs, uses AI to recognize a person’s hand gestures and then magically transforms the shadow figure into digital animations representing the 12 animals of the Chinese zodiac, all in an interactive show.


Attendees use their hands to make shadow figures, which transform into animated characters.

We debuted ShadowPlay at the World AI Conference and Google Developers Day in Shanghai in September. To build the experience, we developed a custom machine learning model, trained on a dataset of many examples of people’s hand shadows, that learned to recognize a shadow and match it to the corresponding animal. “In order to bring this project to life, we asked Googlers to help us train the model by making a lot of fun hand gestures. Once we saw the reaction of users seeing their hand shadows morph into characters, it was impossible not to smile!” says Miguel de Andres-Clavera, Project Lead at Google. To make sure the experience could guess which animal people were making with high accuracy, we trained the model using TPUs, our custom machine learning hardware accelerators.
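The production model and training setup aren't public, so purely as a hypothetical sketch of the kind of classifier described—a small convolutional network trained on labeled photos of hand shadows, one class per zodiac animal—here is what that could look like in TensorFlow. The dataset path and architecture are invented:

```python
import tensorflow as tf

NUM_CLASSES = 12  # the 12 animals of the Chinese zodiac

# Hypothetical directory of labeled hand-shadow photos, one subfolder per animal.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "hand_shadows/train", image_size=(96, 96), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```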

We had so much fun building ShadowPlay (almost as much fun as practicing our shadow puppets … ), that we’ll be bringing it to more events around the world soon!

Highlights from the Google AI Residency Program



In 2016, we welcomed the inaugural class of the Google Brain Residency, a select group of 27 individuals participating in a 12-month program focused on jump-starting a career in machine learning and deep learning research. Since then, the program has experienced rapid growth, leading to its evolution into the Google AI Residency, which serves to provide residents the opportunity to embed themselves within the broader group of Google AI teams working on machine learning research and its applications.
Some of our 2017 Google AI residents at the 2017 Neural Information Processing Systems Conference, hosted in Long Beach, California.
The accomplishments of the second class of residents are as remarkable as those of the first, with residents publishing multiple works at top-tier machine learning, robotics and healthcare conferences and journals. Publication topics include:
  • A study on the effect of adversarial examples on human visual perception.
  • An algorithm that enables robots to learn more safely by avoiding states from which they cannot reset.
  • Initialization methods that enable training of neural networks with unprecedented depths of 10K+ layers.
  • A method to make training more scalable by using larger mini-batches, which when applied to ResNet-50 on ImageNet reduced training time without compromising test accuracy.
  • And many more...
This experiment demonstrated (for the first time) the susceptibility of time-limited human vision to adversarial examples. For more details, see “Adversarial Examples that Fool both Computer Vision and Time-Limited Humans” (accepted at NIPS 2018).
An algorithm for safe reinforcement learning prevents robots from taking actions they cannot undo. For more details, see “Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning” (accepted at ICLR 2018).
Extremely deep CNNs can be trained without the use of any special tricks simply by using a specially designed (Delta-Orthogonal) initialization. Test (solid) and training (dashed) curves on MNIST (top) and CIFAR10 (bottom). For more details, see “Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks” (accepted at ICML 2018).
Applying a sequence of simple scaling rules, we increase the SGD batch size and reduce the number of parameter updates required to train our model by an order of magnitude, without sacrificing test set accuracy. This enables us to dramatically reduce model training time. For more details, see “Don’t Decay the Learning Rate, Increase the Batch Size” (accepted at ICLR 2018).
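That last result can be stated as a rule of thumb: wherever a conventional schedule would divide the learning rate by some factor, increase the batch size by that factor instead, which keeps the scale of the SGD noise roughly constant while cutting the number of parameter updates. The following small Python sketch of such a schedule is ours, not the residents' code, and the epoch boundaries and factor are made up:

```python
def schedule(epoch, base_lr=0.1, base_batch=256, boundaries=(30, 60, 80), factor=5):
    """Return (learning_rate, batch_size) for a given epoch.

    A conventional schedule would divide the learning rate by `factor` at each
    boundary; here the learning rate stays fixed and the batch size is multiplied
    by `factor` instead, reducing the number of parameter updates per epoch.
    """
    k = sum(epoch >= b for b in boundaries)  # how many boundaries have been passed
    return base_lr, base_batch * (factor ** k)

for epoch in (0, 30, 60, 80):
    print(epoch, schedule(epoch))
```
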
With the 2017 class of Google AI residents graduated and off to pursue the next exciting phase in their careers, their desks were quickly filled in June by the 2018 class. Furthermore, this new class is the first to be embedded in various teams across Google’s global offices, pursuing research in areas such as perception, algorithms and optimization, language, healthcare and much more. We look forward to seeing what they can accomplish and contribute to the broader research community!

If you are interested in joining the fourth class, applications for the 2019 Google AI Residency program are now open! Visit g.co/airesidency/apply for more information on how to apply. Also, check out g.co/airesidency to see more resident profiles, past Resident publications, blog posts and stories. We can’t wait to see where the next year will take us, and hope you’ll collaborate with our research teams across the world!

Source: Google AI Blog


Google hardware. Designed to work better together.

This year marks Google’s 20th anniversary—for two decades we’ve been working toward our mission to organize the world’s information and make it universally accessible and useful for everybody. Delivering information has always been in our DNA. It’s why we exist. From searching the world, to translating it, to getting a great photo of it, when we see an opportunity to help people, we’ll go the extra mile. We love working on really hard problems that make life easier for people, in big and small ways.

There’s a clear line from the technology we were working on 20 years ago to the technology we’re developing today—and the big breakthroughs come at the intersection of AI, software and hardware, working together. This approach is what makes the Google hardware experience so unique, and it unlocks all kinds of helpful benefits. When we think about artificial intelligence in the context of consumer hardware, it isn’t artificial at all—it’s helping you get real things done, every day. A shorter route to work. A gorgeous vacation photo. A faster email response. 

So today, we’re introducing our third-generation family of consumer hardware products, all made by Google:

  • For life on the go, we’re introducing the Pixel 3 and Pixel 3 XL—designed from the inside out to be the smartest, most helpful device in your life. It’s a phone that can answer itself, a camera that won’t miss a shot, and a helpful Assistant even while it’s charging.

  • For life at work and at play, we’re bringing the power and productivity of a desktop to a gorgeous tablet called Pixel Slate. This Chrome OS device is both a powerful workstation at the office, and a home theater you can hold in your hands.

  • And for life at home we designed Google Home Hub, which lets you hear and see the info you need, and manage your connected home from a single screen. With its radically helpful smart display, Google Home Hub lays the foundation for a truly thoughtful home.

Please visit our updated online store to see the full details, pricing and availability.

The new Google devices fit perfectly with the rest of our family of products, including Nest, which joined the Google hardware family at the beginning of this year. Together with Nest, we’re pursuing our shared vision of a thoughtful home that isn’t just smart, it’s also helpful and simple enough for everyone to set up and use. It's technology designed for the way you live.

Ivy Ross + Hardware Design

Our goal with these new products, as always, is to create something that serves a purpose in people’s lives—products that are so useful they make people wonder how they ever lived without them. The simple yet beautiful design of these new devices continues to bring the smarts of the technology to the forefront, while providing people with a bold piece of hardware.

Our guiding principle

Google's guiding principle is the same as it’s been for 20 years—to respect our users and put them first. We feel a deep responsibility to provide you with a helpful, personal Google experience, and that guides the work we do in three very specific ways:

  • First, we want to provide you with an experience that is unique to you. Just like Google is organizing the world’s information, the combination of AI, software and hardware can organize your information—and help out with the things you want to get done. The Google Assistant is the best expression of this, and it’s always available when, where, and however you need it.

  • Second, we’re committed to the security of our users. We need to offer simple, powerful ways to safeguard your devices. We’ve integrated Titan™ Security, the system we built for Google, into our new mobile devices. Titan™ Security protects your most sensitive on-device data by securing your lock screen and strengthening disk encryption.

  • Third, we want to make sure you’re in control of your digital wellbeing. From our research, 72 percent of our users are concerned about the amount of time people spend using tech. We take this very seriously and have developed new tools that make people’s lives easier and cut back on distractions.

A few new things made by Google

With these Made by Google devices, our goal is to provide radically helpful solutions. While it’s early in the journey, we’re taking an end-to-end approach to consumer technology that merges our most innovative AI with intuitive software and powerful hardware. Ultimately, we want to help you do more with your days while doing less with your tech—so you can focus on what matters most.

Improving Search for the next 20 years

Growing up in India, I had access to one good library in my town, run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.
The British Council Library in my hometown.


When Google started 20 years ago, our mission was to organize the world’s information and make it universally accessible and useful. That seemed like an incredibly ambitious mission at the time—even considering that in 1998 the web consisted of just 25 million pages (roughly the equivalent of books in a small library).
Fast forward to today: our index now contains hundreds of billions of pages—more information than all the libraries in the world could hold. We’ve grown to serve people all over the world, offering Search in more than 150 languages and over 190 countries.
Through all of this, we’ve remained grounded in our mission. In fact, providing greater access to information is as core to our work today as it was when we first started. And while almost everything has changed about technology and the information available to us, the core principles of Search have stayed the same.
  • First and foremost, we focus on the user. Whether you’re looking for recipes, studying for an exam, or finding information on where to vote, we’re focused on serving your information needs.
  • We strive to give you the most relevant, highest quality information as quickly as possible. This was true when Google started with the PageRank algorithm—the foundational technology behind Search. And it’s just as true today.
  • We see billions of queries every day, and 15 percent of queries are ones we’ve never seen before. Given this scale, the only way to provide Search effectively is through an algorithmic approach. This helps us not just solve all the queries we’ve seen yesterday, but also all the ones we can’t anticipate for tomorrow.
  • Finally, we rigorously test every change we make. A key part of this testing is the rater guidelines, which define our goals in Search and are publicly available for anyone to see. Every change to Search is evaluated by experimentation and by raters using these guidelines. Last year alone, we ran more than 200,000 experiments that resulted in 2,400+ changes to Search. Search will serve you better today than it did yesterday, and even better tomorrow.
As Google marks our 20th anniversary, I wanted to share a first look at the next chapter of Search, and how we’re working to make information more accessible and useful for people everywhere. This next chapter is driven by three fundamental shifts in how we think about Search:
Underpinning each of these are our advancements in AI, improving our ability to understand language in ways that weren’t possible when Google first started. This is incredibly exciting, because over 20 years ago when I studied neural nets at school, they didn’t actually work very well...at all!
But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words into fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching. This lets us address queries like “why does my TV look strange?” and surface the most relevant results, even if the exact words aren’t contained in the page. (By the way, it turns out the reason is called the soap opera effect.)
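Mechanically, this kind of matching boils down to comparing dense vectors rather than exact words: embed the query, embed each document, and rank by similarity. The sketch below is not Google's ranking system; it only shows that structure, with random stand-in vectors where a real system would use a trained neural encoder so that related concepts land near each other:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def neural_match(query_vec, doc_vecs):
    """Return document indices ranked by embedding similarity to the query, best first."""
    scores = [cosine(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Stand-in embeddings; in practice these come from trained encoders, so a query like
# "why does my TV look strange" can sit close to a page about the soap opera effect.
rng = np.random.default_rng(0)
query_vec = rng.standard_normal(64)
doc_vecs = [rng.standard_normal(64) for _ in range(3)]
print(neural_match(query_vec, doc_vecs))
```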
Finding the right information about my TV is helpful in the moment. But AI can have much more profound effects. Whether it’s predicting areas that might be affected in a flood, or helping you identify the best job opportunities for you, AI can dramatically improve our ability to make information more accessible and useful.
I’ve worked on Search at Google since the early days of its existence. One of the things that keeps me so inspired about Search all these years is our mission and how timeless it is. Providing greater access to information is fundamental to what we do, and there are always more ways we can help people access the information they need. That’s what pushes us forward to continue to make Search better for our users. And that’s why our work here is never done.

Posted by Ben Gomes, VP, Search, News and Assistant

Keeping people safe with AI-enabled flood forecasting

For 20 years, Google Search has provided people with the information they need, and in times of crisis, access to timely, actionable information is often crucial. Last year we launched SOS Alerts on Search and Maps to make emergency information more accessible. Since then, we’ve activated SOS Alerts in more than 200 crisis situations, in addition to tens of thousands of Google Public Alerts, which have been viewed more than 1.5 billion times.

Floods are devastating natural disasters worldwide—it’s estimated that every year, 250 million people around the world are affected by floods, which also cause billions of dollars in damage. Flood forecasting can help individuals and authorities better prepare to keep people safe, but accurate forecasting isn’t currently available in many areas. And the warning systems that do exist can be imprecise and non-actionable, resulting in far too many people being underprepared and underinformed before a flood happens.

To help improve awareness of impending floods, we're using AI and significant computational power to create better forecasting models that predict when and where floods will occur, and incorporating that information into Google Public Alerts. A variety of elements—from historical events, to river level readings, to the terrain and elevation of a specific area—feed into our models. From there, we generate maps and run up to hundreds of thousands of simulations in each location. With this information, we’ve created river flood forecasting models that can more accurately predict not only when and where a flood might occur, but the severity of the event as well.

These images depict a flood simulation of a river in Hyderabad, India. The left side uses publicly available data, while the right side uses Google data and technology; our models offer higher resolution, better accuracy, and more up-to-date information.


We started these flood forecasting efforts in India, where 20 percent of global flood-related fatalities occur. We’re partnering with India’s Central Water Commission to get the data we need to roll out early flood warnings, starting with the Patna region. The first alert went out earlier this month after heavy rains in the region.

Flood alert shown to users in the Patna region.


We’re also looking to expand coverage to more countries, to help more people around the world get access to these early warnings, and help keep them informed and safe.

Posted by Yossi Matias, VP, Engineering
