Run ARM apps on the Android Emulator

Posted by Michael Hazard

As part of the Android 11 developer preview we’ve released Android 11 system images, which are capable of executing ARM binaries with significantly improved performance. Previously, developers who were dependent on ARM libraries and could not build an x86 variant of their app either had to use system images with full ARM emulation, which are much slower than x86 system images when run on x86-based computers, or resort to physical devices. The new Android 11 system images are capable of translating ARM instructions to x86 without impacting the entire system. This allows the execution of ARM binaries for testing without the performance overhead of full ARM emulation.

The new Android 11 (Google APIs) x86 system image supports ARM ABIs, while the older Android Oreo system image does not

Details

The significance of this may require a bit of context, especially if you build apps exclusively with Kotlin or the Java programming language. Unlike Kotlin or the Java programming language, both of which execute on the Android Runtime (ART), any C++ in your Android app compiles directly into machine instructions. This means that it needs to be compiled differently based on the architecture of the target device. Mobile phones tend to have ARM processors; consequently, many C++ dependencies you might add to your app, like a camera barcode scanner library, are only compatible with ARM processors. This is a problem if you develop on a computer with an x86-based processor, as it would prevent you from running your app.
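To make the ABI point concrete, here is a minimal sketch, not from the original post, that lists which ABIs an APK bundles native libraries for. It relies only on the Python standard library and on the standard lib/<abi>/*.so layout inside APKs; the APK path is a hypothetical placeholder.

```python
# Minimal sketch: list the ABIs an APK ships native libraries for.
# Uses only the Python standard library; the APK path is a hypothetical example.
import zipfile
from pathlib import PurePosixPath

APK_PATH = "app-release.apk"  # hypothetical path to your built APK


def bundled_abis(apk_path: str) -> set[str]:
    """Return the set of ABI directories (e.g. arm64-v8a, x86_64) under lib/."""
    with zipfile.ZipFile(apk_path) as apk:
        return {
            PurePosixPath(name).parts[1]  # lib/<abi>/libfoo.so -> <abi>
            for name in apk.namelist()
            if name.startswith("lib/") and name.endswith(".so")
        }


if __name__ == "__main__":
    abis = bundled_abis(APK_PATH)
    print("Native library ABIs in this APK:", sorted(abis) or "none (no C++ code)")
    if abis and not abis & {"x86", "x86_64"}:
        print("ARM-only native code: this app needs an ARM device, full ARM "
              "emulation, or the new ARM-translating Android 11 images.")
```

If the resulting set contains only ARM ABIs (armeabi-v7a, arm64-v8a), the app previously required either a physical ARM device or a slow full-ARM-emulation image to test on an x86 machine; the new Android 11 system images let it run via per-process translation instead.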

Previously, if you wanted to get around this limitation and execute an app built for ARM on your x86 machine, you would have had to use an emulator system image with full ARM emulation. Due to the overhead of translating an entire system’s worth of ARM instructions to x86, emulator system images with full ARM emulation tend to run much slower than x86-based system images when run on x86 host machines. Additionally, emulator system images with full ARM emulation cannot take advantage of the hardware acceleration and CPU virtualization technologies provided by x86 processors.

The new ARM-compatible Android 11 system images allow the entire system to run x86 natively and take advantage of virtualization technologies as usual. When an app’s process requires an ARM binary, the binary is translated to x86 within that process exclusively. This allows the rest of the process to continue executing in x86, including the Android Runtime (ART) and other performance-critical libraries like libGLES and libvulkan. In addition, because low-level hardware-specific libraries are not run through translation, the translator avoids expensive memory access instrumentation and the associated performance hit. These new emulator system images can be used both locally and on your own continuous integration infrastructure. This is possible thanks to collaboration with Arm Limited.

Going Forward

If you have previously chosen physical devices over the emulator due to the lack of performant ARM support, try out the Android 11 system images, which are now available alongside the Android 11 Developer Preview. These system images can be downloaded in Android Studio via either the SDK Manager or the Android Virtual Device Manager.

Using the Android Virtual Device Manager to create an AVD that runs Android 11

Once you get your app running on the emulator, consider adapting it for Chrome OS. Chrome OS also supports the execution of Android apps built for ARM on x86 laptops. Building for Chrome OS provides access to a substantial ecosystem of larger screen devices, allowing your application to reach even more users globally.

This technology should enable more developers to test with the Android Emulator. That said, we still recommend that developers publish both x86 and ARM ABI variants of their apps to achieve the best physical device performance and reach as many users as possible. Going forward, we plan to roll this technology out across a wider variety of API levels and ensure that it supports testing all use cases that a physical device would. Given that this is a new technology, please let us know of any problems via our Issue Tracker.

Note that the ARM to x86 translation technology enables the execution of intellectual property owned by Arm Limited. It will only be available on Google APIs and Play Store system images, and can only be used for application development and debug purposes on x86 desktop, laptop, customer on-premises servers, and customer-procured cloud-based environments. The technology should not be used in the provision of commercial hosted services.

Java is a registered trademark of Oracle and/or its affiliates.

How to pause your business online in Google Search

As the effects of the coronavirus grow, we've seen businesses around the world looking for ways to pause their activities online. Looking ahead to coming back and being there for your customers, here's an overview of our recommendations for how to pause your business online and minimize the impact on Google Search. These recommendations are applicable to any business with an online presence, but particularly for those who have paused the selling of their products or services online. For more detailed information, also check our developer documentation.

Recommended: limit site functionality

If your situation is temporary and you plan to reopen your online business, we recommend keeping your site online and limiting the functionality. For example, you might mark items as out of stock, or restrict the cart and checkout process. This is the recommended approach since it minimizes any negative effects on your site's presence in Search. People can still find your products, read reviews, or add items to wishlists so they can purchase at a later time.

It's also a good practice to:

  • Disable the cart functionality: Disabling the cart functionality is the simplest approach, and doesn't change anything for your site's visibility in Search.
  • Tell your customers what's going on: Display a banner or popup div with appropriate information for your users, so that they're aware of the business's status. Mention any known and unusual delays, shipping times, pick-up or delivery options, etc. upfront, so that users continue with the right expectations. Make sure to follow our guidelines on popups and banners.
  • Update your structured data: If your site uses structured data (such as Products, Books, Events), make sure to adjust it appropriately (reflecting the current product availability, or changing events to cancelled); a minimal example of marking a product out of stock follows this list. If your business has a physical storefront, update Local Business structured data to reflect current opening hours.
  • Check your Merchant Center feed: If you use Merchant Center, follow the best practices for the availability attribute.
  • Tell Google about your updates: To ask Google to recrawl a limited number of pages (for example, the homepage), use Search Console. For a larger number of pages (for example, all of your product pages), use sitemaps.
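As a concrete illustration of the structured data point above, here is a minimal sketch that emits schema.org Product markup with its availability switched to OutOfStock. The product name, SKU, and price are hypothetical placeholders; the snippet simply prints JSON-LD that you would embed in the product page.

```python
# Minimal sketch: schema.org Product structured data with availability
# set to OutOfStock while the business is paused. Product details are
# hypothetical placeholders.
import json

product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Espresso Machine",   # hypothetical product
    "sku": "EXM-1234",                    # hypothetical SKU
    "offers": {
        "@type": "Offer",
        "price": "249.00",
        "priceCurrency": "EUR",
        # Flip this back to https://schema.org/InStock when you reopen.
        "availability": "https://schema.org/OutOfStock",
    },
}

# Embed the output in the product page inside a
# <script type="application/ld+json"> ... </script> tag.
print(json.dumps(product_jsonld, indent=2))
```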

For more information, check our developer documentation.

Not recommended: disabling the whole website

As a last resort, you may decide to disable the whole website. This is an extreme measure that should only be taken for a very short period of time (a few days at most), as it will otherwise have significant effects on the website in Search, even when implemented properly. That’s why it’s highly recommended to only limit your site's functionality instead. Keep in mind that your customers may also want to find information about your products, your services, and your company, even if you're not selling anything right now.

If you decide that you need to do this (again, which we don't recommend), here are some options:

  • If you need to urgently disable the site for 1-2 days, then return an informational error page with a 503 HTTP result code instead of all content; a minimal sketch of such a response follows this list. Make sure to follow the best practices for disabling a site.
  • If you need to disable the site for a longer time, then provide an indexable homepage as a placeholder for users to find in Search by using the 200 HTTP status code.
  • If you quickly need to hide your site in Search while you consider the options, you can temporarily remove it from Search.
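For the short-term 503 option above, here is a minimal sketch of one way to serve an informational error page, assuming a small Flask app stands in for your site; the message text and the two-day Retry-After value are illustrative assumptions.

```python
# Minimal sketch: serve a maintenance page with a 503 status for every URL,
# assuming Flask stands in for the site. Message and Retry-After are examples.
from flask import Flask

app = Flask(__name__)

MAINTENANCE_HTML = """<html><body>
<h1>We are temporarily closed</h1>
<p>Ordering is paused for a few days. Please check back soon.</p>
</body></html>"""


@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def maintenance(path):
    # 503 marks the outage as temporary; Retry-After hints when crawlers may return.
    return MAINTENANCE_HTML, 503, {"Retry-After": "172800"}  # two days, in seconds


if __name__ == "__main__":
    app.run(port=8080)
```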

For more information, check our developer documentation.

Proceed with caution: To elaborate on why we don't recommend disabling the whole website, here are some of the side effects:

  • Your customers won't know what's happening with your business if they can't find your business online at all.
  • Your customers can't find or read first-hand information about your business and its products & services. For example, reviews, specs, past orders, repair guides, or manuals won't be findable. Third-party information may not be as correct or comprehensive as what you can provide. This often also affects future purchase decisions.
  • Knowledge Panels may lose information, like contact phone numbers and your site's logo.
  • Search Console verification will fail, and you will lose all access to information about your business in Search. Aggregate reports in Search Console will lose data as pages are dropped from the index.
  • Ramping back up after a prolonged period of time will be significantly harder if your website needs to be reindexed first. Additionally, it's uncertain how long this would take, and whether the site would appear similarly in Search afterwards.

Other things to consider

Beyond the operation of your website, there are other actions you might want to take to pause your online business in Google Search.

Also be sure to keep up with the latest by following updates on Twitter from Google Webmasters at @GoogleWMC and Google My Business at @GoogleMyBiz.

FAQs

What if I only close the site for a few weeks?

Completely closing a site even for just a few weeks can have negative consequences on Google's indexing of your site. We recommend limiting the site functionality instead. Keep in mind that users may also want to find information about your products, your services, and your company, even if you're currently not selling anything.

What if I want to exclude all non-essential products?

That's fine. Make sure that people can't buy the non-essential products by limiting the site functionality.

Can I ask Google to crawl less during this time?

Yes, you can limit crawling with Search Console, though it's not recommended for most cases. This may have some impact on the freshness of your results in Search. For example, it may take longer for Search to reflect that all of your products are currently not available. On the other hand, if Googlebot's crawling causes critical server resource issues, this is a valid approach. We recommend setting a reminder for yourself to reset the crawl rate once you start planning to go back into business.

How do I get a page indexed or updated quickly?

To ask Google to recrawl a limited number of pages (for example, the homepage), use Search Console. For a larger number of pages (for example, all of your product pages), use sitemaps.

What if I block a specific region from accessing my site?

Google generally crawls from the US, so if you block the US, Google Search generally won't be able to access your site at all. We don't recommend that you block an entire region from temporarily accessing your site; instead, we recommend limiting your site's functionality for that region.

Should I use the Removals Tool to remove out-of-stock products?

No. People won't be able to find first-hand information about your products on Search, and there might still be third-party information for the product that may be incorrect or incomplete. It's better to keep the page available and mark the product as out of stock. That way people can still understand what's going on, even if they can't purchase the item. If you remove the product from Search, people won't know why it's not there.

--------

We realize that any business closure is a big and stressful step, and not everyone will know what to do. If you notice afterwards that you could have done something differently, not everything is lost: we try to make our systems robust so that your site will be back in Search as quickly as possible. Like you, we're hoping that this crisis comes to an end as soon as possible. We hope that with this information, you're able to have your online business up and running quickly when that time comes. Should you run into any problems or questions along the way, please don't hesitate to use our public channels to get help.

Chrome and Chrome OS release updates

We previously paused upcoming releases for Chrome and Chrome OS. Today we’re sharing an update as we’re now resuming releases with an adjusted schedule:
  • M83 will be released three weeks earlier than previously planned and will include all M82 work as we cancelled the M82 release (all channels).
  • Our Canary, Dev and Beta channels have resumed or will resume this week, with M83 moving to Dev, and M81 continuing in Beta.
  • Our Stable channel will resume releases next week with security and critical fixes in M80, followed by the release of M81 the week of April 7, and M83 in mid-May.
  • We will share a future update on the timing of the M84 branch and releases.
We continue to monitor Chrome and Chrome OS closely to ensure they are stable, secure, and working reliably. We’ll keep everyone informed of any changes to our schedule on this blog and will share additional details on the schedule in the Chromium Developers group, as needed. You can also check our schedule page for specific dates for each milestone at any time.

Thanks everyone for the help and patience during this time.

Google Chrome 

Identifying vulnerabilities and protecting you from phishing

Google’s Threat Analysis Group (TAG) works to counter targeted and government-backed hacking against Google and the people who use our products. Following our November update, today we’re sharing the latest insights to fight phishing, and for security teams, providing more details about our work identifying attacks against zero-day vulnerabilities. 

Protecting you from phishing

We have a long-standing policy to send you a warning if we detect that your account is a target of government-backed phishing or malware attempts. In 2019, we sent almost 40,000 warnings, a nearly 25 percent drop from 2018. One reason for this decline is that our new protections are working—attackers' efforts have been slowed down and they’re more deliberate in their attempts, meaning attempts are happening less frequently as attackers adapt.

Distribution of the targets of government-backed phishing in 2019.

We’ve detected a few emerging trends in recent months.

Impersonating news outlets and journalists is on the rise

Upon reviewing phishing attempts since the beginning of this year, we’ve seen a rising number of attackers, including those from Iran and North Korea, impersonating news outlets or journalists. For example, attackers impersonate a journalist to seed false stories with other reporters to spread disinformation. In other cases, attackers will send several benign emails to build a rapport with a journalist or foreign policy expert before sending a malicious attachment in a follow up email. Government-backed attackers regularly target foreign policy experts for their research, access to the organizations they work with, and connection to fellow researchers or policymakers for subsequent attacks. 

Heavily targeted sectors are (mostly) not surprising

Government-backed attackers continue to consistently target geopolitical rivals, government officials, journalists, dissidents and activists. The chart below details the Russian threat actor group SANDWORM’s targeting efforts (by sector) over the last three years.

Distribution of targets by sector by the Russian threat actor known as SANDWORM

Government-backed attackers repeatedly go after their targets

In 2019, one in five accounts that received a warning was targeted multiple times by attackers. If at first the attacker does not succeed, they’ll try again using a different lure, a different account, or by trying to compromise an associate of their target.

We’ve yet to see people successfully phished if they participate in Google’s Advanced Protection Program (APP), even if they are repeatedly targeted. APP provides the strongest protections available against phishing and account hijacking and is specifically designed for the highest-risk accounts. 

Finding attacks that leverage zero-day vulnerabilities

Zero-day vulnerabilities are unknown software flaws. Until they’re identified and fixed, they can be exploited by attackers. TAG actively hunts for these types of attacks because they are particularly dangerous and have a high rate of success, although they account for a small number of the overall total. When we find an attack that takes advantage of a zero-day vulnerability, we report the vulnerability to the vendor and give them seven days to patch or produce an advisory, or we release an advisory ourselves.

We work across all platforms, and in 2019 TAG discovered zero-day vulnerabilities affecting Android, Chrome, iOS, Internet Explorer and Windows. Most recently, TAG was acknowledged in January 2020 for our contribution in identifying CVE-2020-0674, a remote code execution vulnerability in Internet Explorer. 

Last year, TAG discovered that a single threat actor was capitalizing on five zero-day vulnerabilities. Finding this many zero-day exploits from the same actor in a relatively short time frame is rare. The exploits were delivered via compromised legitimate websites (e.g. watering hole attacks), links to malicious websites, and email attachments in limited spear phishing campaigns. The majority of targets we observed were from North Korea or individuals who worked on North Korea-related issues.

For security teams interested in learning more, here are additional details about the exploits and our work in 2019:

The vulnerabilities underlying these exploits included:

The following technical details are associated with the exploits and can be used for teams interested in conducting further research on these attacks:

  • CVE-2018-8653, CVE-2019-1367 and CVE-2020-0674 are vulnerabilities inside jscript.dll; therefore, all exploits enabled IE8 rendering and used JScript.Compact as the JS engine.

  • In most Internet Explorer exploits, attackers abused the Enumerator object in order to gain remote code execution. 

  • To escape from the Internet Explorer EPM sandbox, exploits used a technique consisting of replaying the same vulnerability inside svchost by abusing Web Proxy Auto-Discovery (WPad) Service. Attackers abused this technique with CVE-2020-0674 on Firefox to escape the sandbox after exploiting CVE-2019-17026.

  • CVE-2019-0676 is a variant of CVE-2017-0022, CVE-2016-3298, CVE-2016-0162 and CVE-2016-3351 where the vulnerability resided inside the handling of “res://” URI scheme. Exploiting CVE-2019-0676 enabled attackers to reveal presence or non-presence of files on the victim’s computer; this information was later used to decide whether or not a second stage exploit should be delivered.

  • The attack vector for CVE-2019-1367 was rather atypical as the exploit was delivered from an Office document abusing the online video embedding feature to load an external URL conducting the exploitation.

Our Threat Analysis Group will continue to identify bad actors and share relevant information with others in the industry. Our goal is to bring awareness to these issues to protect you and fight bad actors to prevent future attacks. In a future update, we’ll provide details on attackers using lures related to COVID-19 and expected behavior we’re observing (all within the normal range of attacker activity).

Discover podcasts you’ll love with Google Podcasts, now on iOS

It took me a decade to find the podcasts I love most. When I lived in Chicago, I started downloading podcasts for traffic-filled drives to soccer practice. One that stood out during the ninety-minute commute was Planet Money, which became a ritual for me as I studied economics. Since then I've gradually gathered a list of favorites for all activities from long runs to cooking dinner—Acquired, PTI, and More Perfect are a few. Building my podcast library took years and a number of friends introducing me to their favorite shows and episodes, such as a particularly memorable Radiolab about CRISPR.

But you should be able to find new favorites in minutes, not years. We’ve redesigned the Google Podcasts app to make it easier to discover podcasts you’ll love, build your list of go-to podcasts, and customize your listening. To support listeners on more platforms, we’re also bringing Google Podcasts to iOS for the first time and adding support for subscriptions on Google Podcasts for Web. Regardless of the platform you’re using, your listening progress will sync across devices, and you’ll be able to pick up right where you left off.

The new app is organized around three tabs: Home, Explore and Activity. The Home tab features a feed of new episodes and gives you quick access to your subscribed shows. When you select an episode you want to listen to, you’ll now see topics or people covered in that podcast, and you can easily jump to Google Search to learn more.

In the Explore tab, “For you” displays new show and episode recommendations related to your interests, and you can browse popular podcasts in categories such as comedy, sports, and news. You’ll be able to control personalized recommendations from the Google Podcasts settings, which are accessible right from the Explore tab.


As you listen and subscribe to more podcasts, the Activity tab will display your listening history, queued-up episodes, and downloads. For each show in your subscriptions, you can now enable automatic downloading and/or push notifications for when new episodes come out.

The new Google Podcasts is available on iOS today and rolling out to Android this week. Try it out and discover your next favorite show.

Learn from our mobility experts at Android OnAir

To support Android Enterprise customers with their mobility initiatives, we’ve created a series of webinars at Android OnAir that offer best practices in deploying and managing devices. Each webinar tackles an essential subject that is top of mind for IT decision makers and admins. Participants can join a live Q&A during the broadcast to get answers directly from Google. If you can’t make the live broadcast, webinars are all available on-demand.

Our current catalogue of on-demand webinars covers important topics like deployment strategies and Android security updates. Check out the upcoming schedule and register today to reserve your spot.

Google security services on Android 

April 15: Android devices are backed by industry-leading security to help keep them safe. Learn how Google Play Protect, Safe Browsing, SafetyNet and other Google security services help safeguard company data and employee privacy, and discover strategies to incorporate them into your mobility initiative.

Using mobile to improve business continuity 

May 13: Android can transform how your teams connect with each other and work more efficiently, no matter where they are. Learn how you can take mobile devices beyond traditional use cases and give employees more convenience with access to internal services like private apps, corporate sites and key services to extend business continuity to any device.

How Google mandates Android security standards

June 17: Consistent security and management standards give companies the confidence to use a mix of devices from different OEMs to support various business use cases. Find out more about how Google works closely with device manufacturers and developers to implement security systems that are deployed on enterprise devices.

Preventing enterprise data loss on Android

July 15: Data loss can be catastrophic for any business. Learn how Android Enterprise management features give IT admins the tools to mandate secure app and data usage practices that help prevent leaks and guard against attacks from bad actors. Discover Android management strategies to give employees the level of access you want while helping protect critical company data.

Equip your frontline workers for success with Android

August 12: Frontline workers like sales associates, warehouse managers, delivery drivers and others perform critical tasks that drive customer success. However, mobile investment in these employees remains low. Businesses can use mobile devices to empower these teams with data-driven decisions and real-time access to company resources. Learn how business can use Android device diversity to provide the right device for each digital use case.

Explaining Android Enterprise Recommended and security requirements

September 16: Android Enterprise Recommended simplifies mobility choices for businesses with a Google-approved shortlist of devices, services, and partners that meet our strict enterprise requirements. Find out how this initiative can help your team select devices with consistent security and software requirements and find validated Enterprise Mobility Management and Managed Service Provider partners.


A Neural Weather Model for Eight-Hour Precipitation Forecasting



Predicting weather from minutes to weeks ahead with high accuracy is a fundamental scientific challenge that can have a wide-ranging impact on many aspects of society. Current forecasts employed by many meteorological agencies are based on physical models of the atmosphere that, despite improving substantially over the preceding decades, are inherently constrained by their computational requirements and are sensitive to approximations of the physical laws that govern them. An alternative approach to weather prediction that is able to overcome some of these constraints uses deep neural networks (DNNs): instead of encoding explicit physical laws, DNNs discover patterns in the data and learn complex transformations from inputs to the desired outputs using parallel computation on powerful specialized hardware such as GPUs and TPUs.

Building on our previous research into precipitation nowcasting, we present “MetNet: A Neural Weather Model for Precipitation Forecasting,” a DNN capable of predicting future precipitation at 1 km resolution over 2 minute intervals at timescales up to 8 hours into the future. MetNet outperforms the current state-of-the-art physics-based model in use by NOAA for prediction times up to 7-8 hours ahead and makes a prediction over the entire US in a matter of seconds as opposed to an hour. The inputs to the network are sourced automatically from radar stations and satellite networks without the need for human annotation. The model output is a probability distribution that we use to infer the most likely precipitation rates with associated uncertainties at each geographical region. The figure below provides an example of the network’s predictions over the continental United States.
MetNet model predictions compared to the ground truth as measured by the NOAA multi-radar/multi-sensor system (MRMS). The MetNet model (top) displays the probabilities for 1 mm/hr precipitation predicted from 2 minutes to 480 minutes ahead, whereas the MRMS data (bottom) shows the areas receiving at least 1 mm/hr of precipitation over that same time period.
Neural Weather Model
MetNet does not rely on explicit physical laws describing the dynamics of the atmosphere, but instead learns by backpropagation to forecast the weather directly from observed data. The network uses precipitation estimates derived from ground based radar stations comprising the multi-radar/multi-sensor system (MRMS) and measurements from NOAA’s Geostationary Operational Environmental Satellite system that provides a top down view of clouds in the atmosphere. Both data sources cover the continental US and provide image-like inputs that can be efficiently processed by the network.

The model is executed for every 64 km x 64 km square covering the entire US at 1 km resolution. However, the actual physical coverage of the input data corresponding to each of these output regions is much larger, since it must take into account the possible motion of the clouds and precipitation fields over the time period for which the prediction is made. For example, assuming that clouds move up to 60 km/h, in order to make informed predictions that capture the temporal dynamics of the atmosphere up to 8 hours ahead, the model needs 60 x 8 = 480 km of spatial context in all directions. So, to achieve this level of context, information from a 1024 km x 1024 km area is required for predictions being made on the center 64 km x 64 km patch.
Size of the input patch containing satellite and radar images (large, 1024 x 1024 km square) and of the output predicted radar image (small, 64 x 64 km square).
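As a quick check of the arithmetic above, the following snippet recomputes the required spatial context and the resulting input patch size; the numbers come directly from the assumptions stated in the text.

```python
# Minimal sketch of the spatial-context arithmetic described above.
cloud_speed_km_h = 60        # assumed maximum cloud motion
lead_time_h = 8              # forecast horizon
output_patch_km = 64         # side of the predicted patch

context_km = cloud_speed_km_h * lead_time_h        # 480 km in every direction
input_patch_km = output_patch_km + 2 * context_km  # 64 + 960 = 1024 km per side
print(context_km, input_patch_km)                  # -> 480 1024
```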
Because processing a 1024 km x 1024 km area at full resolution requires a significant amount of memory, we use a spatial downsampler that decreases memory consumption by reducing the spatial dimension of the input patch, while also finding and keeping the relevant weather patterns in the input. A temporal encoder (implemented with a convolutional LSTM that is especially well suited for sequences of images) is then applied along the time dimension of the downsampled input data, encoding seven snapshots from the previous 90 minutes of input data, in 15-minute segments. The output of the temporal encoder is then passed to a spatial aggregator, which uses axial self-attention to efficiently capture long range spatial dependencies in the data, with a variable amount of context based on the input target time, to make predictions over the 64 km x 64 km output.

The output of this architecture is a discrete probability distribution estimating the probability of a given rate of precipitation for each square kilometer in the continental United States.
The architecture of the neural weather model, MetNet. The input satellite and radar images first pass through a spatial downsampler to reduce memory consumption. They are then processed by a convolutional LSTM at 15 minute intervals over the 90 minutes of input data. Then axial attention layers are used to make the network see the entirety of the input images.
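To make the data flow concrete, here is a schematic, shape-level sketch in Python/NumPy. It is not the MetNet implementation: the learned components described above (the ConvLSTM temporal encoder and the axial self-attention aggregator) are replaced by trivial placeholders, and the channel count, downsampling factor, and number of precipitation bins are illustrative assumptions rather than MetNet’s actual hyperparameters.

```python
# Schematic sketch of MetNet's data flow (shapes only), not the actual model.
# Learned components (ConvLSTM temporal encoder, axial self-attention) are
# replaced by trivial placeholders so the example runs with NumPy alone.
import numpy as np

T, C, H, W = 7, 2, 1024, 1024   # 7 snapshots (90 min @ 15 min), radar+satellite channels (assumed)
DOWN = 4                        # spatial downsampling factor (placeholder value)
OUT = 64                        # side of the 64 km x 64 km output patch
BINS = 512                      # discrete precipitation-rate bins (placeholder count)

inputs = np.random.rand(T, C, H, W).astype(np.float32)  # 1024 km x 1024 km input patch


def spatial_downsample(x, factor):
    """Average-pool each snapshot to cut memory before temporal encoding."""
    t, c, h, w = x.shape
    return x.reshape(t, c, h // factor, factor, w // factor, factor).mean(axis=(3, 5))


def temporal_encode(x):
    """Placeholder for the ConvLSTM: collapse the time axis into one feature map."""
    weights = np.linspace(0.5, 1.0, x.shape[0])[:, None, None, None]  # favour recent frames
    return (x * weights).sum(axis=0) / weights.sum()


def spatial_aggregate(x, out_side, n_bins):
    """Placeholder for axial self-attention: crop the centre patch and emit per-pixel logits."""
    c, h, w = x.shape
    top, left = (h - out_side) // 2, (w - out_side) // 2
    centre = x[:, top:top + out_side, left:left + out_side]
    return np.random.rand(n_bins, out_side, out_side) + centre.mean()


down = spatial_downsample(inputs, DOWN)          # (7, 2, 256, 256)
encoded = temporal_encode(down)                  # (2, 256, 256)
logits = spatial_aggregate(encoded, OUT, BINS)   # (512, 64, 64)

# The model output is a probability distribution over precipitation rates
# for every square kilometre of the 64 km x 64 km target patch.
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
assert np.allclose(probs.sum(axis=0), 1.0)
print(probs.shape)  # -> (512, 64, 64)
```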
Results
We evaluate MetNet on a precipitation rate forecasting benchmark and compare the results with two baselines — the NOAA High Resolution Rapid Refresh (HRRR) system, which is the physical weather forecasting model currently operational in the US, and a baseline model that estimates the motion of the precipitation field (i.e., optical flow), a method known to perform well for prediction times less than 2 hours.

A significant advantage of our neural weather model is that it is optimized for dense and parallel computation and well suited for running on specialty hardware (e.g., TPUs). This allows predictions to be made in parallel in a matter of seconds, whether it is for a specific location like New York City or for the entire US, whereas physical models such as HRRR have a runtime of about an hour on a supercomputer.

We quantify the difference in performance between MetNet, HRRR, and the optical flow baseline model in the plot below. Here, we show the performance achieved by the three models, evaluated using the F1-score at a precipitation rate threshold of 1.0 mm/h, which corresponds to light rain. The MetNet neural weather model is able to outperform the NOAA HRRR system at timelines less than 8 hours and is consistently better than the flow-based model.
Performance evaluated in terms of F1-score at 1.0 mm/h precipitation rate (higher is better). The neural weather model (MetNet) outperforms the physics-based model (HRRR) currently operational in the US for timescales up to 8 hours ahead.
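For readers who want to reproduce this kind of evaluation on their own data, here is a minimal sketch of the F1-score at the 1.0 mm/h threshold; the forecast and observation fields below are random toy data, whereas a real evaluation would compare model output against MRMS ground truth.

```python
# Minimal sketch (toy data): scoring a precipitation forecast with the
# F1-score at the 1.0 mm/h "light rain" threshold used in the plot above.
import numpy as np

THRESHOLD_MM_H = 1.0


def f1_at_threshold(predicted_rate, observed_rate, threshold=THRESHOLD_MM_H):
    """Binarise both rate fields at `threshold` and compute the F1-score."""
    pred = predicted_rate >= threshold
    obs = observed_rate >= threshold
    tp = np.sum(pred & obs)
    fp = np.sum(pred & ~obs)
    fn = np.sum(~pred & obs)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Toy example on random 64 x 64 rate fields (mm/h); a real evaluation would
# use MRMS observations as the ground truth.
rng = np.random.default_rng(0)
forecast = rng.gamma(shape=0.5, scale=2.0, size=(64, 64))
observed = rng.gamma(shape=0.5, scale=2.0, size=(64, 64))
print(round(f1_at_threshold(forecast, observed), 3))
```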
Due to the stochastic nature of the atmosphere, the uncertainty about the exact future weather conditions increases with longer prediction times. Because MetNet is a probabilistic model, the uncertainty in the predictions is seen in the visualizations by the growing smoothness of the predictions as the forecast time is extended. In contrast, HRRR does not directly make probabilistic predictions, but instead predicts a single potential future. The figure below compares the output of the MetNet model to that of the HRRR model.
Comparison between the output from MetNet (top) and HRRR (bottom) to ground-truth (middle) as retrieved from the NOAA MRMS system. Notice that while the HRRR model predicts structure that appears to be more similar to that of the ground-truth, the structure predicted may be grossly incorrect.
The predictions from the HRRR physical model look sharper and more structured than those of the MetNet model, but the structure, specifically the exact time and location where it is predicted, is less accurate due to uncertainty in the initial conditions and the parameters of the model.
HRRR (left) predicts a single potential future outcome (in red) out of the many possible outcomes, whereas MetNet (right) directly accounts for uncertainty by assigning probabilities over the future outcomes.
A more thorough comparison between the HRRR and MetNet models can be found in the video below:
Future Directions
We are actively researching how to improve global weather forecasting, especially in regions where the impacts of rapid climate change are most profound. While we demonstrate the present MetNet model for the continental US, it could be extended to cover any region for which adequate radar and optical satellite data are available. The work presented here is a small stepping stone in this effort that we hope leads to even greater improvements through future collaboration with the meteorological community.

Acknowledgements
This project was done in collaboration with Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal and Jason Hickey. We would also like to thank Manoj Kumar, Wendy Shang, Dick Weissenborn, Cenk Gazen, John Burge, Stephen Hoyer, Lak Lakshmanan, Rob Carver, Carla Bromberg and Aaron Bell for useful discussions and Tom Small for help with the visualizations.

Source: Google AI Blog


Five things you (maybe) didn’t know about AI

While there’s plenty of information out there on artificial intelligence, it’s not always easy to distinguish fact from fiction or find explanations that are easy to understand. That’s why we’ve teamed up with Google to create The A to Z of AI. It’s a series of simple, bite-sized explainers to help anyone understand what AI is, how it works and how it’s changing the world around us. Here are a few things you might learn:

A is for Artificial Intelligence

1. AI is already in our everyday lives. 

You’ve probably interacted with AI without even realizing it. If you’ve ever searched for a specific image in Google Photos, asked a smart speaker about the weather or been rerouted by your car’s navigation system, you’ve been helped by AI. Those examples might feel obvious, but there are many other ways it plays a role in your life you might not realize. AI is also helping solve some bigger, global challenges. For example, there are apps that use AI to help farmers identify issues with crops. And there are now systems that can examine citywide traffic information in real time to help people plan their driving routes efficiently.

C is for Climate

2. AI is being used to help tackle the global climate crisis. 

AI offers us the ability to process large volumes of data and uncover patterns—an invaluable aid when it comes to climate change. One common use case is AI-powered systems that help people regulate the amount of energy they use by turning off the heating and lights when they leave the house. AI is also helping to model glacier melt and predict rising sea levels so that effective action can be taken. Researchers are also considering the environmental impact of data centers and AI computing itself by exploring how to develop more energy-efficient systems and infrastructures.

D is for Datasets

3. AI learns from examples in the real world.

Just as a child learns through examples, the same is true of machine learning algorithms. And that’s what datasets are: large collections of examples, like weather data, photos or music, that we can use to train AI. Due to their scale and complexity (think of a dataset made up of extensive maps covering the whole of the known solar system), datasets can be very challenging to build and refine. For this reason, AI design teams often share datasets for the benefit of the wider scientific community, making it easier to collaborate and build on each other's research.

F is for Fakes

4. AI can help our efforts to spot deepfakes.

“Deepfakes” are AI-generated images, speech, music or videos that look real. They work by studying existing real-world imagery or audio, mapping them in detail, then manipulating them to create works of fiction that are disconcertingly true to life. However, there are often some telltale signs that distinguish them from reality; in a deepfake video, voices might sound a bit robotic, or characters may blink less or repeat their hand gestures. AI can help us spot these inconsistencies.

Y is for You

5. It’s impossible to teach AI what it means to be human. 

As smart as AI is (and will be), it won’t be able to understand everything that humans can. In fact, you could give an AI system all the data in the world and it still wouldn’t reflect, or understand, every human being on the planet. That’s because we’re complex, multidimensional characters that sit outside the data that machines use to make sense of things. AI systems are trained and guided by humans. And it’s up to each person to choose how they interact with AI systems and what information they feel comfortable sharing. You decide how much AI gets to learn about you.

For 22 more bite-sized definitions, visit https://atozofai.withgoogle.com
