Chrome and Chrome OS release updates

We previously paused upcoming releases for Chrome and Chrome OS. Today we’re sharing an update as we’re now resuming releases with an adjusted schedule:
  • M83 will be released three weeks earlier than previously planned and will include all M82 work as we cancelled the M82 release (all channels).
  • Our Canary, Dev and Beta channels have resumed or will resume this week, with M83 moving to Dev and M81 continuing in Beta.
  • Our Stable channel will resume releases next week with security and critical fixes in M80, followed by the release of M81 the week of April 7 and M83 around mid-May.
  • We will share a future update on the timing of the M84 branch and releases.
We continue to closely monitor Chrome and Chrome OS to ensure they remain stable, secure, and reliable. We’ll keep everyone informed of any changes to our schedule on this blog and will share additional details in the Chromium Developers group as needed. You can also check our schedule page at any time for the specific dates for each milestone.

Thanks, everyone, for your help and patience during this time.


Identifying vulnerabilities and protecting you from phishing

Google’s Threat Analysis Group (TAG) works to counter targeted and government-backed hacking against Google and the people who use our products. Following our November update, today we’re sharing the latest insights into fighting phishing and, for security teams, more details about our work identifying attacks that exploit zero-day vulnerabilities.

Protecting you from phishing

We have a long-standing policy to send you a warning if we detect that your account is a target of government-backed phishing or malware attempts. In 2019, we sent almost 40,000 warnings, a nearly 25 percent drop from 2018. One reason for this decline is that our new protections are working—attackers' efforts have been slowed down and they’re more deliberate in their attempts, meaning attempts are happening less frequently as attackers adapt.

Distribution of the targets of government-backed phishing in 2019.

We’ve detected a few emerging trends in recent months.

Impersonating news outlets and journalists is on the rise

Upon reviewing phishing attempts since the beginning of this year, we’ve seen a rising number of attackers, including those from Iran and North Korea, impersonating news outlets or journalists. For example, attackers impersonate a journalist to seed false stories with other reporters to spread disinformation. In other cases, attackers will send several benign emails to build a rapport with a journalist or foreign policy expert before sending a malicious attachment in a follow up email. Government-backed attackers regularly target foreign policy experts for their research, access to the organizations they work with, and connection to fellow researchers or policymakers for subsequent attacks. 

Heavily targeted sectors are (mostly) not surprising

Government-backed attackers continue to consistently target geopolitical rivals, government officials, journalists, dissidents and activists. The chart below details the Russian threat actor group SANDWORM’s targeting efforts (by sector) over the last three years.

Distribution of targets by sector by the Russian threat actor known as SANDWORM

Government-backed attackers repeatedly go after their targets

In 2019, one in five accounts that received a warning was targeted multiple times by attackers. If at first the attacker does not succeed, they’ll try again using a different lure or a different account, or by trying to compromise an associate of their target.

We’ve yet to see people successfully phished if they participate in Google’s Advanced Protection Program (APP), even if they are repeatedly targeted. APP provides the strongest protections available against phishing and account hijacking and is specifically designed for the highest-risk accounts. 

Finding attacks that leverage zero-day vulnerabilities

Zero-day vulnerabilities are unknown software flaws. Until they’re identified and fixed, they can be exploited by attackers. TAG actively hunts for these types of attacks because they are particularly dangerous and have a high rate of success, although they account for a small fraction of attacks overall. When we find an attack that takes advantage of a zero-day vulnerability, we report the vulnerability to the vendor and give them seven days to patch or produce an advisory, or we release an advisory ourselves.

We work across all platforms, and in 2019 TAG discovered zero-day vulnerabilities affecting Android, Chrome, iOS, Internet Explorer and Windows. Most recently, TAG was acknowledged in January 2020 for our contribution in identifying CVE-2020-0674, a remote code execution vulnerability in Internet Explorer. 

Last year, TAG discovered that a single threat actor was capitalizing on five zero-day vulnerabilities. Finding this many zero-day exploits from the same actor in a relatively short time frame is rare. The exploits were delivered via compromised legitimate websites (e.g. watering hole attacks), links to malicious websites, and email attachments in limited spear phishing campaigns. The majority of targets we observed were from North Korea or individuals who worked on North Korea-related issues.

For security teams interested in learning more, the following technical details describe the exploits we found in 2019 and can be used by teams interested in conducting further research on these attacks:

  • CVE-2018-8653, CVE-2019-1367 and CVE-2020-0674 are vulnerabilities inside jscript.dll, so all of the exploits enabled IE8 rendering and used JScript.Compact as the JS engine.

  • In most Internet Explorer exploits, attackers abused the Enumerator object in order to gain remote code execution. 

  • To escape the Internet Explorer EPM sandbox, exploits used a technique consisting of replaying the same vulnerability inside svchost by abusing the Web Proxy Auto-Discovery (WPAD) service. Attackers abused this technique with CVE-2020-0674, and also on Firefox to escape the sandbox after exploiting CVE-2019-17026.

  • CVE-2019-0676 is a variant of CVE-2017-0022, CVE-2016-3298, CVE-2016-0162 and CVE-2016-3351, where the vulnerability resided in the handling of the “res://” URI scheme. Exploiting CVE-2019-0676 enabled attackers to reveal the presence or absence of files on the victim’s computer; this information was later used to decide whether or not a second-stage exploit should be delivered.

  • The attack vector for CVE-2019-1367 was rather atypical: the exploit was delivered from an Office document that abused the online video embedding feature to load an external URL, which carried out the exploitation.

Our Threat Analysis Group will continue to identify bad actors and share relevant information with others in the industry. Our goal is to bring awareness to these issues to protect you and fight bad actors to prevent future attacks. In a future update, we’ll provide details on attackers using lures related to COVID-19 and the behavior we’re observing (all within the normal range of attacker activity). 


Discover podcasts you’ll love with Google Podcasts, now on iOS

It took me a decade to find the podcasts I love most. When I lived in Chicago, I started downloading podcasts for traffic-filled drives to soccer practice. One that stood out during the ninety-minute commute was Planet Money, which became a ritual for me as I studied economics. Since then I've gradually gathered a list of favorites for all activities from long runs to cooking dinner—Acquired, PTI, and More Perfect are a few. Building my podcast library took years and a number of friends introducing me to their favorite shows and episodes, such as a particularly memorable Radiolab about CRISPR.

But you should be able to find new favorites in minutes, not years. We’ve redesigned the Google Podcasts app to make it easier to discover podcasts you’ll love, build your list of go-to podcasts, and customize your listening. To support listeners on more platforms, we’re also bringing Google Podcasts to iOS for the first time and adding support for subscriptions on Google Podcasts for Web. Regardless of the platform you’re using, your listening progress will sync across devices, and you’ll be able to pick up right where you left off.

The new app is organized around three tabs: Home, Explore and Activity. The Home tab features a feed of new episodes and gives you quick access to your subscribed shows. When you select an episode you want to listen to, you’ll now see topics or people covered in that podcast, and you can easily jump to Google Search to learn more.


In the Explore tab, “For you” displays new show and episode recommendations related to your interests, and you can browse popular podcasts in categories such as comedy, sports, and news. You’ll be able to control personalized recommendations from the Google Podcasts settings, which are accessible right from the Explore tab.



As you listen and subscribe to more podcasts, the Activity tab will display your listen history, queued up episodes, and downloads. For each show in your subscriptions, you can now enable automatic downloading and/or push notifications for when new episodes come out.

The new Google Podcasts is available on iOS today and rolling out to Android this week. Try it out and discover your next favorite show.

Learn from our mobility experts at Android OnAir

To support Android Enterprise customers with their mobility initiatives, we’ve created a series of webinars at Android OnAir that offer best practices in deploying and managing devices. Each webinar tackles an essential subject that is top of mind for IT decision makers and admins. Participants can join a live Q&A during the broadcast to get answers directly from Google. If you can’t make the live broadcast, webinars are all available on-demand.

Our current catalogue of on-demand webinars covers important topics like deployment strategies and Android security updates. Check out the upcoming schedule and register today to reserve your spot.

Google security services on Android 

April 15: Android devices are backed by industry-leading security to help keep devices safe. Learn how Google Play Protect, Safe Browsing, SafetyNet and other Google Security Services help safeguard company data and employee privacy, and discover strategies to incorporate them into your mobility initiative.

Using mobile to improve business continuity 

May 13: Android can transform how your teams connect with each other and work more efficiently, no matter where they are. Learn how you can take mobile devices beyond traditional use cases and give employees more convenience with access to internal services like private apps, corporate sites and key services to extend business continuity to any device.

How Google mandates Android security standards

June 17: Consistent security and management standards give companies the confidence to use a mix of devices from different OEMs to support various business use cases. Find out more about how Google works closely with device manufacturers and developers to implement security systems that are deployed on enterprise devices.

Preventing enterprise data loss on Android

July 15: Data loss can be catastrophic for any business. Learn how Android Enterprise management features give IT admins the tools to mandate secure app and data usage practices that help prevent leaks and guard against attacks from bad actors. Discover Android management strategies to give employees the level of access you want while helping protect critical company data.

Equip your frontline workers for success with Android

August 12: Frontline workers like sales associates, warehouse managers, delivery drivers and others perform critical tasks that drive customer success. However, mobile investment in these employees remains low. Businesses can use mobile devices to empower these teams with data-driven decisions and real-time access to company resources. Learn how businesses can use Android device diversity to provide the right device for each digital use case.

Explaining Android Enterprise Recommended and security requirements

September 16: Android Enterprise Recommended simplifies mobility choices for businesses with a Google-approved shortlist of devices, services, and partners that meet our strict enterprise requirements. Find out how this initiative can help your team select devices with consistent security and software requirements and find validated Enterprise Mobility Management and Managed Service Provider partners.


A Neural Weather Model for Eight-Hour Precipitation Forecasting



Predicting weather from minutes to weeks ahead with high accuracy is a fundamental scientific challenge that can have a wide ranging impact on many aspects of society. Current forecasts employed by many meteorological agencies are based on physical models of the atmosphere that, despite improving substantially over the preceding decades, are inherently constrained by their computational requirements and are sensitive to approximations of the physical laws that govern them. An alternative approach to weather prediction that is able to overcome some of these constraints uses deep neural networks (DNNs): instead of encoding explicit physical laws, DNNs discover patterns in the data and learn complex transformations from inputs to the desired outputs using parallel computation on powerful specialized hardware such as GPUs and TPUs.

Building on our previous research into precipitation nowcasting, we present “MetNet: A Neural Weather Model for Precipitation Forecasting,” a DNN capable of predicting future precipitation at 1 km resolution over 2 minute intervals at timescales up to 8 hours into the future. MetNet outperforms the current state-of-the-art physics-based model in use by NOAA for prediction times up to 7-8 hours ahead and makes a prediction over the entire US in a matter of seconds as opposed to an hour. The inputs to the network are sourced automatically from radar stations and satellite networks without the need for human annotation. The model output is a probability distribution that we use to infer the most likely precipitation rates with associated uncertainties at each geographical region. The figure below provides an example of the network’s predictions over the continental United States.
MetNet model predictions compared to the ground truth as measured by the NOAA multi-radar/multi-sensor system (MRMS). The MetNet model (top) displays the probabilities for 1 mm/hr precipitation predicted from 2 minutes to 480 minutes ahead, whereas the MRMS data (bottom) shows the areas receiving at least 1 mm/hr of precipitation over that same time period.
Neural Weather Model
MetNet does not rely on explicit physical laws describing the dynamics of the atmosphere, but instead learns by backpropagation to forecast the weather directly from observed data. The network uses precipitation estimates derived from ground-based radar stations comprising the multi-radar/multi-sensor system (MRMS) and measurements from NOAA’s Geostationary Operational Environmental Satellite system, which provides a top-down view of clouds in the atmosphere. Both data sources cover the continental US and provide image-like inputs that can be efficiently processed by the network.

The model is executed for every 64 km x 64 km square covering the entire US at 1 km resolution. However, the actual physical coverage of the input data corresponding to each of these output regions is much larger, since it must take into account the possible motion of the clouds and precipitation fields over the time period for which the prediction is made. For example, assuming that clouds move up to 60 km/h, in order to make informed predictions that capture the temporal dynamics of the atmosphere up to 8 hours ahead, the model needs 60 x 8 = 480 km of spatial context in all directions. So, to achieve this level of context, information from a 1024 km x 1024 km area is required for predictions being made on the center 64 km x 64 km patch.
Size of the input patch containing satellite and radar images (large, 1024 x 1024 km square) and of the output predicted radar image (small, 64 x 64 km square).
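
As a quick check on this sizing argument, here is a minimal sketch in plain Python. The 60 km/h cloud speed, 8-hour horizon, and 64 km output patch are the figures quoted above; the function names are our own.

```python
# Minimal sketch of the spatial-context arithmetic described above.
# The inputs (60 km/h, 8 h, 64 km) come from the text; everything else is illustrative.

def required_context_km(cloud_speed_kmh: float, horizon_hours: float) -> float:
    """Distance weather features can travel toward the target patch over the horizon."""
    return cloud_speed_kmh * horizon_hours          # 60 * 8 = 480 km in each direction

def input_patch_km(output_patch_km: float, context_km: float) -> float:
    """Side length of the input patch: the output patch plus context on both sides."""
    return output_patch_km + 2 * context_km         # 64 + 2 * 480 = 1024 km

if __name__ == "__main__":
    context = required_context_km(60, 8)            # 480 km
    print(input_patch_km(64, context))              # 1024.0, matching the 1024 km x 1024 km input
```
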
Because processing a 1024 km x 1024 km area at full resolution requires a significant amount of memory, we use a spatial downsampler that decreases memory consumption by reducing the spatial dimension of the input patch, while also finding and keeping the relevant weather patterns in the input. A temporal encoder (implemented with a convolutional LSTM that is especially well suited for sequences of images) is then applied along the time dimension of the downsampled input data, encoding seven snapshots from the previous 90 minutes of input data, in 15-minute segments. The output of the temporal encoder is then passed to a spatial aggregator, which uses axial self-attention to efficiently capture long range spatial dependencies in the data, with a variable amount of context based on the input target time, to make predictions over the 64 km x 64 km output.

The output of this architecture is a discrete probability distribution estimating the probability of a given rate of precipitation for each square kilometer in the continental United States.
The architecture of the neural weather model, MetNet. The input satellite and radar images first pass through a spatial downsampler to reduce memory consumption. They are then processed by a convolutional LSTM at 15 minute intervals over the 90 minutes of input data. Then axial attention layers are used to make the network see the entirety of the input images.
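
To make the data flow concrete, here is a heavily simplified sketch of a MetNet-style pipeline in PyTorch. This is not the published implementation: the channel counts, downsampling factor, number of precipitation-rate bins, and the omission of lead-time conditioning are all assumptions made for illustration.

```python
# Illustrative MetNet-style pipeline: spatial downsampler -> convolutional LSTM
# temporal encoder -> axial self-attention spatial aggregator -> per-pixel rate bins.
# All sizes are assumptions; this is a sketch, not the actual MetNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvLSTMCell(nn.Module):
    """Standard convolutional LSTM cell, used here as the temporal encoder."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class AxialSelfAttention(nn.Module):
    """Self-attention applied along image rows, then columns (axial attention)."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def _attend(self, attn, x):                      # x: (batch*, length, channels)
        out, _ = attn(x, x, x)
        return x + out                               # residual connection

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)       # attend across width
        x = self._attend(self.row_attn, rows).reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)       # attend across height
        x = self._attend(self.col_attn, cols).reshape(b, w, h, c)
        return x.permute(0, 3, 2, 1)                            # back to (B, C, H, W)


class MetNetLikeModel(nn.Module):
    def __init__(self, in_ch=8, hid_ch=64, num_bins=128):
        super().__init__()
        # Spatial downsampler: shrinks each input frame to save memory.
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, hid_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hid_ch, hid_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.temporal = ConvLSTMCell(hid_ch, hid_ch)            # temporal encoder
        self.aggregator = AxialSelfAttention(hid_ch)            # spatial aggregator
        self.head = nn.Conv2d(hid_ch, num_bins, 1)              # per-pixel rate bins

    def forward(self, frames):                       # frames: (B, T, C, H, W), e.g. 7 snapshots
        b, t, _, _, _ = frames.shape
        downsampled = [self.down(frames[:, i]) for i in range(t)]
        h = torch.zeros_like(downsampled[0])
        c = torch.zeros_like(downsampled[0])
        for x in downsampled:                        # encode the 90-minute input sequence
            h, c = self.temporal(x, (h, c))
        feats = self.aggregator(h)                   # long-range spatial context
        logits = self.head(feats)                    # (B, num_bins, H', W')
        return F.log_softmax(logits, dim=1)          # categorical distribution per pixel
```

Feeding this model a tensor of shape (batch, 7, channels, H, W), i.e. seven 15-minute snapshots of radar and satellite imagery, yields per-pixel log-probabilities over the assumed precipitation-rate bins.
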
Results
We evaluate MetNet on a precipitation rate forecasting benchmark and compare the results with two baselines — the NOAA High Resolution Rapid Refresh (HRRR) system, which is the physical weather forecasting model currently operational in the US, and a baseline model that estimates the motion of the precipitation field (i.e., optical flow), a method known to perform well for prediction times less than 2 hours.

A significant advantage of our neural weather model is that it is optimized for dense and parallel computation and well suited for running on specialty hardware (e.g., TPUs). This allows predictions to be made in parallel in a matter of seconds, whether it is for a specific location like New York City or for the entire US, whereas physical models such as HRRR have a runtime of about an hour on a supercomputer.

We quantify the difference in performance between MetNet, HRRR, and the optical flow baseline model in the plot below. Here, we show the performance achieved by the three models, evaluated using the F1-score at a precipitation rate threshold of 1.0 mm/h, which corresponds to light rain. The MetNet neural weather model is able to outperform the NOAA HRRR system at timelines less than 8 hours and is consistently better than the flow-based model.
Performance evaluated in terms of F1-score at 1.0 mm/h precipitation rate (higher is better). The neural weather model (MetNet) outperforms the physics-based model (HRRR) currently operational in the US for timescales up to 8 hours ahead.
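
For readers who want to reproduce this kind of comparison, here is a hedged sketch of the thresholded F1 metric. The 1.0 mm/h cutoff matches the light-rain threshold above; the array names and toy data are assumptions, and for a probabilistic model like MetNet the forecast grid would first be reduced to a point estimate.

```python
# Sketch of the thresholded F1 evaluation described above: forecast and observed
# precipitation-rate grids (mm/h) are binarized at 1.0 mm/h and compared with F1.
import numpy as np

def f1_at_threshold(pred_mm_per_h: np.ndarray,
                    obs_mm_per_h: np.ndarray,
                    threshold: float = 1.0) -> float:
    """F1 score of the event 'precipitation rate >= threshold'."""
    pred = pred_mm_per_h >= threshold
    obs = obs_mm_per_h >= threshold
    tp = np.logical_and(pred, obs).sum()
    fp = np.logical_and(pred, ~obs).sum()
    fn = np.logical_and(~pred, obs).sum()
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: compare a forecast grid against an observed (e.g. MRMS-derived) grid.
rng = np.random.default_rng(0)
forecast = rng.gamma(shape=0.5, scale=2.0, size=(64, 64))   # made-up precipitation rates
observed = rng.gamma(shape=0.5, scale=2.0, size=(64, 64))
print(f"F1 @ 1.0 mm/h: {f1_at_threshold(forecast, observed):.3f}")
```
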
Due to the stochastic nature of the atmosphere, the uncertainty about the exact future weather conditions increases with longer prediction times. Because MetNet is a probabilistic model, the uncertainty in the predictions is seen in the visualizations by the growing smoothness of the predictions as the forecast time is extended. In contrast, HRRR does not directly make probabilistic predictions, but instead predicts a single potential future. The figure below compares the output of the MetNet model to that of the HRRR model.
Comparison between the output from MetNet (top) and HRRR (bottom) to ground-truth (middle) as retrieved from the NOAA MRMS system. Notice that while the HRRR model predicts structure that appears to be more similar to that of the ground-truth, the structure predicted may be grossly incorrect.
The predictions from the HRRR physical model look sharper and more structured than those of the MetNet model, but the structure, specifically the exact time and location at which it is predicted, is less accurate due to uncertainty in the initial conditions and the parameters of the model.
HRRR (left) predicts a single potential future outcome (in red) out of the many possible outcomes, whereas MetNet (right) directly accounts for uncertainty by assigning probabilities over the future outcomes.
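
As an illustration of what "assigning probabilities over future outcomes" means in practice, the sketch below turns a per-pixel categorical distribution over precipitation-rate bins into a most-likely rate, a mean rate, and an exceedance probability. The bin edges, shapes, and names are assumptions, not the actual MetNet output format.

```python
# Sketch: deriving point estimates and uncertainties from a per-pixel categorical
# distribution over precipitation-rate bins. Bin values and shapes are illustrative.
import numpy as np

num_bins, H, W = 128, 64, 64
bin_rates = np.linspace(0.0, 32.0, num_bins)                  # assumed bin centers (mm/h)

# Toy distribution: softmax of random scores, shape (num_bins, H, W), sums to 1 over bins.
logits = np.random.default_rng(0).normal(size=(num_bins, H, W))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

most_likely_rate = bin_rates[probs.argmax(axis=0)]            # (H, W) mode of the distribution
expected_rate = np.tensordot(bin_rates, probs, axes=(0, 0))   # (H, W) mean rate
p_light_rain = probs[bin_rates >= 1.0].sum(axis=0)            # P(rate >= 1 mm/h) per pixel

print(most_likely_rate.shape, expected_rate.shape, p_light_rain.shape)
```
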
A more thorough comparison between the HRRR and MetNet models can be found in the video below:
Future Directions
We are actively researching how to improve global weather forecasting, especially in regions where the impacts of rapid climate change are most profound. While we demonstrate the present MetNet model for the continental US, it could be extended to cover any region for which adequate radar and optical satellite data are available. The work presented here is a small stepping stone in this effort that we hope leads to even greater improvements through future collaboration with the meteorological community.

Acknowledgements
This project was done in collaboration with Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal and Jason Hickey. We would also like to thank Manoj Kumar, Wendy Shang, Dick Weissenborn, Cenk Gazen, John Burge, Stephen Hoyer, Lak Lakshmanan, Rob Carver, Carla Bromberg and Aaron Bell for useful discussions and Tom Small for help with the visualizations.

Source: Google AI Blog




Five things you (maybe) didn’t know about AI

While there’s plenty of information out there on artificial intelligence, it’s not always easy to distinguish fact from fiction or find explanations that are easy to understand. That’s why we’ve teamed up with Google to create The A to Z of AI. It’s a series of simple, bite-sized explainers to help anyone understand what AI is, how it works and how it’s changing the world around us. Here are a few things you might learn:


A is for Artificial Intelligence

1. AI is already in our everyday lives. 

You’ve probably interacted with AI without even realizing it. If you’ve ever searched for a specific image in Google Photos, asked a smart speaker about the weather or been rerouted by your car’s navigation system, you’ve been helped by AI. Those examples might feel obvious, but there are many other ways it plays a role in your life that you might not realize. AI is also helping solve some bigger, global challenges. For example, there are apps that use AI to help farmers identify issues with crops. And there are now systems that can examine citywide traffic information in real time to help people efficiently plan their driving routes.


C is for Climate

2. AI is being used to help tackle the global climate crisis. 

AI offers us the ability to process large volumes of data and uncover patterns—an invaluable aid when it comes to climate change. One common use case is AI-powered systems that help people regulate the amount of energy they use by turning off the heating and lights when they leave the house. AI is also helping to model glacier melt and predict rising sea levels so that effective action can be taken. Researchers are also considering the environmental impact of data centers and AI computing itself by exploring how to develop more energy-efficient systems and infrastructure.


D is for Datasets

3. AI learns from examples in the real world.

Just as a child learns through examples, the same is true of machine learning algorithms. And that’s what datasets are: large collections of examples, like weather data, photos or music, that we can use to train AI. Due to their scale and complexity (think of a dataset made up of extensive maps covering the whole of the known solar system), datasets can be very challenging to build and refine. For this reason, AI design teams often share datasets for the benefit of the wider scientific community, making it easier to collaborate and build on each other's research.


F is for Fakes

4. AI can help our efforts to spot deepfakes.

“Deepfakes” are AI-generated images, speech, music or videos that look real. They work by studying existing real-world imagery or audio, mapping them in detail, then manipulating them to create works of fiction that are disconcertingly true to life. However, there are often some telltale signs that distinguish them from reality; in a deepfake video, voices might sound a bit robotic, or characters may blink less or repeat their hand gestures. AI can help us spot these inconsistencies.


Y is for You

5. It’s impossible to teach AI what it means to be human. 

As smart as AI is (and will be), it won’t be able to understand everything that humans can. In fact, you could give an AI system all the data in the world and it still wouldn’t reflect, or understand, every human being on the planet. That’s because we’re complex, multidimensional characters that sit outside the data that machines use to make sense of things. AI systems are trained and guided by humans. And it’s up to each person to choose how they interact with AI systems and what information they feel comfortable sharing. You decide how much AI gets to learn about you.

For 22 more bite-sized definitions, visit https://atozofai.withgoogle.com
