Make “work from home” work for you

In my job at Google, I advise people on how to use their time as efficiently as possible. When working from home, my productivity strategies are even more important because I don’t have the ordinary structure of a day at the office, like commuting to work, walking to meetings, or running into coworkers. When your house becomes your office, you need to learn a whole new routine. 

Getting work done when your teammates aren’t physically with you has been the norm at Google for a while (in fact 39 percent of meetings at Google involve employees from two or more cities). But it might not be for everyone, and many people around the world are now finding themselves in new work situations. So I put together some of my go-to productivity tips—no matter where you’re working—and a few things I’ve learned about how to get it all done from home.

Designate your “spot” where you work (and where you don’t)

It’s easy to pull your computer up to your kitchen table or plop on the couch and start working. But a consistent room, spot, desk or chair that you “go to” every day to work helps your brain associate that spot (its smells, sights and sounds) with getting work done. Put up some things you had at your desk, like pictures of your friends or family. Get a new mousepad you love. Stock your go-to snacks on a little shelf. And just as important as creating your “work spot” is determining the areas where you don’t work. Maybe you never bring your computer upstairs or into your bedroom. This helps create mental separation and lets you actually relax, even though your work is at home with you.

Use Hangouts Meet like a pro. 

You’ll probably be spending more time on video chat—in our case, Hangouts Meet. Here are a few tricks for Meet at home: lower your video quality when you’re experiencing bandwidth restrictions or delays, dial in to get audio through your phone while staying on video from your computer, and caption your meetings to make sure everyone can follow. If you need some (virtual) human interaction, set up an agenda-less video chat with your team or friends in the office—it’s not a formal meeting, just time to chat and check in with each other.

Practice “one tab working.” 

If you don’t have a large monitor or your usual screen setup at home, it’s even more important to focus on one Chrome tab at a time. If you’re on a video call from your laptop, minimize all other tabs and focus on the conversation—just like you would put away your phone or close your laptop in a meeting to stay engaged.

Act the part. 

Resist the urge to wake up and start working in bed—it doesn’t help your brain get in the “mood” of being productive. Stick to your usual routines like waking up, getting dressed, eating breakfast, then “commuting” to your new work space. Staying in your pajamas, while comfortable, will make you feel less like it’s a regular workday and make it harder to get things done.

Play around with your schedule and energy.

The good news about working from home? No commute. Think of this as a time to experiment with alternate schedules and find your “biological prime time.” If you’re a morning person, try waking up and working on something for a bit, then taking a break mid-morning. If you’re a night owl who prefers to sleep a little later, shift your schedule to get more work done in the later afternoon, when you might otherwise have been commuting home. Productivity is not just about what you’re doing, but also about when you’re doing it.

Working from home does not mean working all the time. 

One of the hardest things about working from home is setting boundaries. Leave your computer in your workspace and only work when you’re in that spot. Pick a time when you’re “done for the day” by setting working hours in Google Calendar to remind people when you’re available. Take mental breaks the way you would in the office—instead of walking to a meeting, walk outside or call a friend.

Create your daily to-do list the day before. 

Part of staying on track and setting a work schedule at home is listing out what you have to do in a day. I created a daily plan template (you can use it too!) that helps me map out, hour by hour, what I intend to do. If you fill it out the night before, you’ll wake up with a clear picture of what you need to do that day.

Finish that one thing you’ve been meaning to do.  

Working in the office can be go-go-go and rarely leaves alone time or downtime to get things done. Working from home is a chance to catch up on some of your individual to-do’s: finish those expenses, brainstorm that long-term project or read the article you bookmarked forever ago. Set up an ongoing list in Google Keep and refer back to it when you have pockets of downtime.

Cut yourself (and others) some slack

Some people have only a one-bedroom studio and are spending their days there. Some people have spouses who are working from home, kids at home, or dogs at home (I have all three!). Connectivity might be slower and there might be some barking in the background, but just remember everyone is doing their best to make working from home work for them.

Introducing Dreamer: Scalable Reinforcement Learning Using World Models



Research into how artificial agents can choose actions to achieve goals is making rapid progress, in large part due to the use of reinforcement learning (RL). Model-free approaches to RL, which learn to predict successful actions through trial and error, have enabled DeepMind's DQN to play Atari games and AlphaStar to beat world champions at StarCraft II, but they require large amounts of environment interaction, limiting their usefulness for real-world scenarios.

In contrast, model-based RL approaches additionally learn a simplified model of the environment. This world model lets the agent predict the outcomes of potential action sequences, allowing it to play through hypothetical scenarios to make informed decisions in new situations, thus reducing the trial and error necessary to achieve goals. In the past, it has been challenging to learn accurate world models and leverage them to learn successful behaviors. While recent research, such as our Deep Planning Network (PlaNet), has pushed these boundaries by learning accurate world models from images, model-based approaches have still been held back by ineffective or computationally expensive planning mechanisms, limiting their ability to solve difficult tasks.

Today, in collaboration with DeepMind, we present Dreamer, an RL agent that learns a world model from images and uses it to learn long-sighted behaviors. Dreamer leverages its world model to efficiently learn behaviors via backpropagation through model predictions. By learning to compute compact model states from raw images, the agent is able to efficiently learn from thousands of predicted sequences in parallel using just one GPU. Dreamer achieves a new state of the art in performance, data efficiency and computation time on a benchmark of 20 continuous control tasks given raw image inputs. To stimulate further advancement of RL, we are releasing the source code to the research community.

How Does Dreamer Work?
Dreamer consists of three processes that are typical for model-based methods: learning the world model, learning behaviors from predictions made by the world model, and executing its learned behaviors in the environment to collect new experience. To learn behaviors, Dreamer uses a value network to take into account rewards beyond the planning horizon and an actor network to efficiently compute actions. The three processes, which can be executed in parallel, are repeated until the agent has achieved its goals:
The three processes of the Dreamer agent. The world model is learned from past experience. From predictions of this model, the agent then learns a value network to predict future rewards and an actor network to select actions. The actor network is used to interact with the environment.
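To make the structure concrete, here is a minimal sketch of how the three processes interleave. The class and function names are illustrative placeholders, not taken from the released source code:

```kotlin
// A stubbed-out skeleton of Dreamer's training loop. Each numbered comment
// marks one of the three processes described above.
data class Experience(val image: DoubleArray, val action: Int, val reward: Double)

class DreamerSketch {
    private val dataset = mutableListOf<Experience>()

    fun updateWorldModel() { /* 1. learn states that reconstruct images and predict rewards */ }
    fun updateActorAndValue() { /* 2. learn behaviors from predicted state sequences */ }
    fun act(image: DoubleArray): Int = 0  // actor network, stubbed

    // One iteration: refine the model and behaviors, then collect fresh experience.
    fun trainIteration(env: (Int) -> Experience, collectSteps: Int) {
        updateWorldModel()
        updateActorAndValue()
        var image = DoubleArray(64 * 64)  // raw pixels, as in the benchmark tasks
        repeat(collectSteps) {            // 3. execute behaviors in the environment
            val experience = env(act(image))
            dataset += experience
            image = experience.image
        }
    }
}
```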
Learning the World Model
Dreamer leverages the PlaNet world model, which predicts outcomes based on a sequence of compact model states that are computed from the input images, instead of directly predicting from one image to the next. It automatically learns to produce model states that represent concepts helpful for predicting future outcomes, such as object types, positions of objects, and the interaction of the objects with their surroundings. Given a sequence of images, actions, and rewards from the agent's dataset of past experience, Dreamer learns the world model as shown:
Dreamer learns a world model from experience. Using past images (o1–o3) and actions (a1–a2), it computes a sequence of compact model states (green circles) from which it reconstructs the images (ô1–ô3) and predicts the rewards (r̂1–r̂3).
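As a rough illustration of that training signal, the toy function below scores how well a sequence of compact model states reconstructs the observed images (ô) and predicts the rewards (r̂). The decode and reward heads are stand-ins, not Dreamer's learned networks:

```kotlin
// Squared-error version of the world-model objective on one sequence.
// In Dreamer these errors are minimized jointly by gradient descent.
fun worldModelLoss(
    states: List<DoubleArray>,             // compact model states, one per step
    images: List<DoubleArray>,             // observed images, flattened to pixels
    rewards: List<Double>,
    decode: (DoubleArray) -> DoubleArray,  // state -> reconstructed pixels (ô)
    rewardHead: (DoubleArray) -> Double    // state -> predicted reward (r̂)
): Double = states.indices.sumOf { t ->
    val recon = decode(states[t])
    val reconError = images[t].indices.sumOf { i ->
        val d = recon[i] - images[t][i]
        d * d
    }
    val rewardError = (rewardHead(states[t]) - rewards[t]).let { it * it }
    reconError + rewardError
} / states.size
```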
An advantage of using the PlaNet world model is that predicting ahead using compact model states instead of images greatly improves the computational efficiency. This enables the model to predict thousands of sequences in parallel on a single GPU. The approach can also facilitate generalization, leading to accurate long-term video predictions. To gain insights into how the model works, we can visualize the predicted sequences by decoding the compact model states back into images, as shown below for a task of the DeepMind Control Suite and for a task of the DeepMind Lab environment:
Predicting ahead using compact model states enables long-term predictions in complex environments. Shown here are two sequences that the agent has not encountered before. Given five input images, the model reconstructs them and predicts the future images up to time step 50.
Efficient Behavior Learning
Previously developed model-based agents typically select actions either by planning through many model predictions or by using the world model in place of a simulator to reuse existing model-free techniques. Both designs are computationally demanding and do not fully leverage the learned world model. Moreover, even powerful world models are limited in how far ahead they can accurately predict, rendering many previous model-based agents shortsighted. Dreamer overcomes these limitations by learning a value network and an actor network via backpropagation through predictions of its world model.

Dreamer efficiently learns the actor network to predict successful actions by propagating gradients of rewards backwards through predicted state sequences, which is not possible for model-free approaches. This tells Dreamer how small changes to its actions affect what rewards are predicted in the future, allowing it to refine the actor network in the direction that increases the rewards the most. To consider rewards beyond the prediction horizon, the value network estimates the sum of future rewards for each model state. The rewards and values are then backpropagated to refine the actor network to select improved actions:
Dreamer learns long-sighted behaviors from predicted sequences of model states. It first learns the long-term value (v̂2–v̂3) of each state, and then predicts actions (â1–â2) that lead to high rewards and values by backpropagating them through the state sequence to the actor network.
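The mechanics of that backward pass can be seen in a toy setting. The sketch below uses a hand-written linear latent model rather than Dreamer's learned networks, and computes the gradient of the predicted return with respect to every action in a single reverse sweep, which is what automatic differentiation does for Dreamer at scale:

```kotlin
// Backpropagation through a predicted state sequence, done by hand.
// Model (made up for illustration): s[t+1] = a*s[t] + b*u[t], reward r[t] = w*s[t].
fun main() {
    val a = 0.9   // latent transition weight
    val b = 0.5   // action weight
    val w = 1.0   // reward weight
    val horizon = 5
    val actions = DoubleArray(horizon) { 0.1 }  // candidate action sequence u[t]

    // Forward pass: imagine a trajectory without touching the environment.
    val states = DoubleArray(horizon + 1)
    states[0] = 1.0
    for (t in 0 until horizon) states[t + 1] = a * states[t] + b * actions[t]
    val predictedReturn = (1..horizon).sumOf { w * states[it] }

    // Backward pass: one reverse sweep yields dReturn/du[t] for every action,
    // telling us how each action should change to increase the return.
    val gradActions = DoubleArray(horizon)
    var gradState = 0.0                       // dReturn/ds[t+1]
    for (t in horizon - 1 downTo 0) {
        gradState = w + a * gradState         // s[t+1] earns w and feeds later states
        gradActions[t] = b * gradState        // chain rule through s[t+1] = a*s[t] + b*u[t]
    }
    println("predicted return = $predictedReturn")
    println("dReturn/dActions = ${gradActions.toList()}")
}
```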
Dreamer differs from PlaNet in several ways. For a given situation in the environment, PlaNet searches for the best action among many predictions for different action sequences. In contrast, Dreamer side-steps this expensive search by decoupling planning and acting. Once its actor network has been trained on predicted sequences, it computes the actions for interacting with the environment without additional search. In addition, Dreamer considers rewards beyond the planning horizon using a value function and leverages backpropagation for efficient planning.

Performance on Control Tasks
We evaluated Dreamer on a standard benchmark of 20 diverse tasks with continuous actions and image inputs. The tasks include balancing and catching objects, as well as locomotion of various simulated robots. The tasks are designed to pose a variety of challenges to the RL agent, including difficult-to-predict collisions, sparse rewards, chaotic dynamics, small but relevant objects, high degrees of freedom, and 3D perspectives:
Dreamer learns to solve 20 challenging continuous control tasks with image inputs, 5 of which are displayed here. The visualizations show the same 64x64 images that the agent receives from the environment.
We compare the performance of Dreamer to that of PlaNet, the previous best model-based agent; A3C, a popular model-free agent; and D4PG, the current best model-free agent on this benchmark, which combines several advances of model-free RL. The model-based agents learn efficiently in under 5 million frames, corresponding to 28 hours inside the simulation. The model-free agents learn more slowly and require 100 million frames, corresponding to 23 days inside the simulation.

On the benchmark of 20 tasks, Dreamer outperforms the best model-free agent (D4PG) with an average score of 823 compared to 786, while learning from 20 times fewer environment interactions. Moreover, it exceeds the final performance of the previously best model-based agent (PlaNet) across almost all of the tasks. The computation time of 16 hours for training Dreamer is less than the 24 hours required for the other methods. The final performance of the four agents is shown below:
Dreamer outperforms the previous best model-free (D4PG) and model-based (PlaNet) methods on the benchmark of 20 tasks in terms of final performance, data efficiency, and computation time.
In addition to our main experiments on continuous control tasks, we demonstrate the generality of Dreamer by applying it to tasks with discrete actions. For this, we select Atari games and DeepMind Lab levels that require both reactive and long-sighted behavior, spatial awareness, and understanding of visually more diverse scenes. The resulting behaviors are visualized below, showing that Dreamer also efficiently learns to solve these more challenging tasks:
Dreamer learns successful behaviors on Atari games and DeepMind Lab levels, which feature discrete actions and visually more diverse scenes, including 3D environments with multiple objects.
Conclusion
Our work demonstrates that learning behaviors from sequences predicted by world models alone can solve challenging visual control tasks from image inputs, surpassing the performance of previous model-free approaches. Moreover, Dreamer demonstrates that learning behaviors by backpropagating value gradients through predicted sequences of compact model states is successful and robust, solving a diverse collection of continuous and discrete control tasks. We believe that Dreamer offers a strong foundation for further pushing the limits of reinforcement learning, including better representation learning, directed exploration with uncertainty estimates, temporal abstraction, and multi-task learning.

Acknowledgements
This project is a collaboration with Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. We further thank everybody in the Brain Team and beyond who commented on our paper draft and provided feedback at any point throughout the project.

Source: Google AI Blog


Android 11: Developer Preview 2

Posted by Dave Burke, VP of Engineering

It’s been a difficult few months for many around the world. The Android team at Google is a global one, and we, like many of you, are learning how to adapt to these extraordinary times. We want to thank you, our developer community, who have given us valuable feedback on Android 11 amidst these circumstances. We hope you, your families and colleagues are all staying well.

Just as many of you are trying to press on with work where possible, we wanted to share the next milestone release of Android 11 for you to try. It’s still an early build, but you can start to see how the OS is enabling new experiences in this release, from seamless 5G connectivity to wrapping your UI around the latest screens, to a smarter keyboard and faster messaging experience.

There’s a lot to check out in Developer Preview 2 - read on for a few highlights and visit the Android 11 developer site for details. Today’s release is for developers only and not intended for daily or consumer use, so we’re making it available by manual download and flash only for Pixel 2, 3, 3a, or 4 devices. To make flashing a bit easier, you can optionally get today’s release from the Android Flash Tool. For those already running Developer Preview 1 or 1.1, we’re also offering an over-the-air (OTA) update to today’s release.

Let us know what you think, and thank you to everyone who has shared such great feedback so far.

New experiences

5G state API - DP2 adds a 5G state API to let you quickly check whether the user is currently on a 5G New Radio or Non-Standalone network. You can use this to highlight your app’s 5G experience or branding when the user is connected. You can use this API together with the 5G dynamic meteredness API and bandwidth estimator API, as well as existing connectivity APIs, to take advantage of 5G’s improved speeds and latency.
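As a sketch of how an app might consume this, the listener below uses the API 30-era telephony calls (PhoneStateListener.onDisplayInfoChanged and TelephonyDisplayInfo). Preview APIs can still change, so verify the names against the current documentation; the check also requires the READ_PHONE_STATE permission:

```kotlin
import android.telephony.PhoneStateListener
import android.telephony.TelephonyDisplayInfo
import android.telephony.TelephonyManager

// Reports whether the user is currently on 5G New Radio (standalone) or on
// a Non-Standalone 5G network overriding an LTE connection.
class FiveGListener : PhoneStateListener() {
    override fun onDisplayInfoChanged(info: TelephonyDisplayInfo) {
        val on5g = info.networkType == TelephonyManager.NETWORK_TYPE_NR ||
            info.overrideNetworkType == TelephonyDisplayInfo.OVERRIDE_NETWORK_TYPE_NR_NSA
        // e.g. toggle the app's 5G experience or branding here
    }
}

fun listenFor5g(telephonyManager: TelephonyManager, listener: FiveGListener) {
    telephonyManager.listen(listener, PhoneStateListener.LISTEN_DISPLAY_INFO_CHANGED)
}
```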

Hinge angle for foldables - A top request for foldable devices has been an API to get the angle of the device screen surfaces. Android 11 now supports a hinge angle sensor that lets apps query directly or through a new AndroidX API for the precise hinge angle, to create adaptive experiences for foldables.
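Here's a minimal sketch of reading the sensor directly (Sensor.TYPE_HINGE_ANGLE; the AndroidX wrapper mentioned above is not shown):

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Receives the hinge angle, in degrees, whenever the foldable's posture changes.
class HingeAngleListener : SensorEventListener {
    override fun onSensorChanged(event: SensorEvent) {
        val degrees = event.values[0]
        // adapt the layout to the current posture here
    }
    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {}
}

fun registerHingeListener(sensorManager: SensorManager, listener: HingeAngleListener) {
    sensorManager.getDefaultSensor(Sensor.TYPE_HINGE_ANGLE)?.let { sensor ->
        sensorManager.registerListener(listener, sensor, SensorManager.SENSOR_DELAY_NORMAL)
    }
}
```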

Call screening service improvements - To help users manage robocalls, we’re adding new APIs to let call-screening apps do more to help users. In addition to verifying an incoming call’s STIR/SHAKEN status (standards that protect against caller ID spoofing) as part of its call details, call-screening apps can report a call rejection reason. Apps can also customize a system-provided post call screen to let users perform actions such as marking a call as spam or adding to contacts. We’ll have more to share on this soon.
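As a hedged sketch, a call-screening service might consult the verification status like this; respondToCall and CallResponse are the existing CallScreeningService API, while getCallerNumberVerificationStatus is the new STIR/SHAKEN call detail. The rejection-reason and post-call-screen APIs aren't shown here, since we'll share details on those soon:

```kotlin
import android.telecom.Call
import android.telecom.CallScreeningService
import android.telecom.Connection

class SpamScreeningService : CallScreeningService() {
    override fun onScreenCall(details: Call.Details) {
        // A number that failed STIR/SHAKEN verification is likely spoofed.
        val failedVerification =
            details.callerNumberVerificationStatus == Connection.VERIFICATION_STATUS_FAILED
        val response = CallResponse.Builder()
            .setDisallowCall(failedVerification)
            .setRejectCall(failedVerification)
            .build()
        respondToCall(details, response)
    }
}
```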

New ops and controls in Neural Networks API - Activation functions control the output of nodes within a neural network. At Google AI, we discovered the swish activation function, which allows for faster training time and higher accuracy across a wide variety of tasks. In Android 11, we’re adding a computationally efficient version of this function, the hard-swish op. This is key to accelerating next-generation on-device vision models such as MobileNetV3, which forms the base model for many transfer learning use cases. Another major addition is control ops, which enable more advanced machine learning models that support branching and loops. Finally, we’ve added new execution controls to help you minimize latency for common use cases: Asynchronous Command Queue APIs reduce the overhead when running small chained models. See the NDK sample code for examples using these new APIs.

Privacy and security

We’re adding several more features to help keep users secure and increase transparency and control. Give these a try with your apps right away and let us know what you think.

Foreground service types for camera and microphone - In Android 10, we introduced the manifest attribute foregroundServiceType as a way to help ensure more accountability for specific use cases. Initially, apps could choose from “location” and several others. Now in Android 11 we’re adding two new types - “camera” and “microphone”. If your app wants to access camera or mic data from a foreground service, you need to add the corresponding foregroundServiceType value to your manifest.
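For example, a camera-capture service could pair the manifest declaration with the typed startForeground() overload, as in this sketch (the constants exist in the shipping API 29/30 SDKs; preview behavior may differ):

```kotlin
import android.app.Notification
import android.app.Service
import android.content.Intent
import android.content.pm.ServiceInfo
import android.os.IBinder

// The manifest entry for this service would also declare
// android:foregroundServiceType="camera".
class CameraCaptureService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val notification: Notification = buildNotification()
        // Declare why this foreground service needs to run.
        startForeground(1, notification, ServiceInfo.FOREGROUND_SERVICE_TYPE_CAMERA)
        return START_STICKY
    }
    override fun onBind(intent: Intent?): IBinder? = null
    private fun buildNotification(): Notification = TODO("app-specific notification")
}
```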

Scoped storage updates - We’re continuing to iterate on our work to better protect app and user data on external storage. In this release we’ve made further improvements and changes, such as support to migrate files from the legacy model to the new scoped storage model, and better management of cached files. Read more here and watch for more enhancements in subsequent updates.

Read more about these and other Android 11 privacy features here.

Polish and quality

Synchronized IME transitions - A new set of APIs lets you synchronize your app’s content with the IME (input method editor, aka soft keyboard) and system bars as they animate on and off the screen, making it much easier to create natural, intuitive and jank-free IME transitions. For frame-perfect transitions, a new insets animation listener notifies apps of per-frame changes to insets while the system bars or the IME animate. Additionally, apps can take control of the IME and system bar transitions through the WindowInsetsAnimationController API. For example, app-driven IME experiences let apps control the IME in response to overscrolling the app UI. Give these new IME transitions a try and let us know what other transitions are important to you.

Synchronized IME transition through insets animation listener.

App-driven IME experience through WindowInsetsAnimationController.
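As a starting point, here's a minimal sketch of the per-frame insets animation listener; exact signatures may still change during the preview:

```kotlin
import android.view.View
import android.view.WindowInsets
import android.view.WindowInsetsAnimation

// Moves a view in lockstep with the IME on every animation frame.
fun trackIme(view: View) {
    view.setWindowInsetsAnimationCallback(
        object : WindowInsetsAnimation.Callback(
            WindowInsetsAnimation.Callback.DISPATCH_MODE_STOP
        ) {
            override fun onProgress(
                insets: WindowInsets,
                animations: MutableList<WindowInsetsAnimation>
            ): WindowInsets {
                val imeHeight = insets.getInsets(WindowInsets.Type.ime()).bottom
                view.translationY = -imeHeight.toFloat()
                return insets
            }
        }
    )
}
```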

Variable refresh rate - Apps and games can now set a preferred frame rate for their windows. Most Android devices refresh the display at 60Hz, but some devices support multiple refresh rates, such as 90Hz as well as 60Hz, with runtime switching. On these devices, the system uses the app’s preferred frame rate to choose the best refresh rate for the app. The API is available in both the SDK and NDK. See the details here.
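Setting the preference is a one-liner on a Surface; the system treats the value as a hint and picks the best available refresh rate:

```kotlin
import android.view.Surface

// Ask for 90fps when no exact multiple of the content's frame rate is required.
fun preferHighFrameRate(surface: Surface) {
    surface.setFrameRate(90f, Surface.FRAME_RATE_COMPATIBILITY_DEFAULT)
}
```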

Resume on reboot - Android 11 improves the experience of scheduled overnight over-the-air software updates. As in previous versions of Android, the device must still reboot to apply the OTA update, but with resume on reboot, apps are now able to access Credential Encrypted (CE) storage after the OTA reboot, without the user unlocking the device. This means apps can resume normal function and receive messages right away - important since OTA updates can be scheduled overnight while the device might be unattended. Apps can still support Direct Boot to access Device Encrypted (DE) storage immediately after all types of reboot. Give resume on reboot a try by tapping “Restart after 2AM” with your next Developer Preview OTA update; more details here.
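For reference, here's a minimal Direct Boot sketch: device encrypted (DE) storage is available before the user unlocks the device, so it also works immediately after any kind of reboot while credential encrypted (CE) storage is still locked:

```kotlin
import android.content.Context

// Reads a value from device protected (DE) storage, which is accessible
// even before the first unlock after a reboot.
fun readEarlyState(context: Context): String? {
    val deviceProtected = context.createDeviceProtectedStorageContext()
    return deviceProtected
        .getSharedPreferences("early_state", Context.MODE_PRIVATE)
        .getString("last_state", null)
}
```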

Camera support in Emulator - The Android emulator now supports front and back emulated camera devices. The back camera supports Camera2 API HW Level 3 (includes YUV reprocessing, RAW capture). It’s a fully CTS-compliant LEVEL_3 device that you can use to test advanced features like ZSL and RAW/DNG support. The front camera supports FULL level with logical camera support (one logical device with two underlying physical devices). This camera emphasizes logical camera support, and the physical camera devices include narrow and wide field of view cameras. With this emulated camera support, you can build and test with any of the camera features added in Android 11. More details coming soon.

App compatibility

We’re working to make updates faster and smoother by prioritizing app compatibility as we roll out new platform versions. In Android 11 we’ve added new processes, tools, and release milestones to minimize the impact of platform updates and make them easier for developers.

With Developer Preview 2, we’re well into the release and getting closer to Beta, so now is the time to start your compatibility testing and identify any work you’ll need to do. We recommend doing the work early, so you can release a compatible update by Android 11 Beta 1. This lets you get feedback from the larger group of Android 11 Beta users.

Android 11 release timeline.

When we reach Platform Stability, system behaviors, non-SDK greylists, and APIs are finalized. At this time, plan on doing your final compatibility testing and releasing your fully compatible app, SDK, or library as soon as possible so that it is ready for the final Android 11 release. More on the timeline for developers is here.

You can start compatibility testing on a Pixel 2, 3, 3a, or 4 device, or you can use the Android Emulator. Just flash the latest build, install your current production app, and test all of the user flows. There’s no need to change the app’s targetSdkVersion at this time. Make sure to review the behavior changes that could affect your app and test for impacts.

To help you with testing, we’ve made many of the breaking changes toggleable, so you can force-enable or disable them individually from Developer options or adb. Check out the details here. Also see the greylists of restricted non-SDK interfaces, which can also be enabled/disabled.

App compatibility toggles in Developer Options.

Get started with Android 11

The Developer Preview has everything you need to try the Android 11 features, test your apps, and give us feedback. Just download and flash a device system image to a Pixel 2 / 2 XL, Pixel 3 / 3 XL, Pixel 3a / 3a XL, or Pixel 4 / 4 XL device, or set up the Android Emulator through Android Studio. Next, update your Android Studio environment with the Android 11 Preview SDK and tools; see the setup guide for details.

As always, your feedback is crucial, so please continue to let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here.

Upcoming Chrome and Chrome OS releases

Due to adjusted work schedules at this time, we are pausing upcoming Chrome and Chrome OS releases. Our primary objectives are to ensure they continue to be stable, secure, and work reliably for anyone who depends on them. We’ll continue to prioritize any updates related to security, which will be included in Chrome 80. Please follow this blog for updates.

Source: Google Chrome

Sandeep Ahuja is comfortable confronting convention

In 2018, women received only 2.2 percent of all venture capital funding. Women Techmakers, Google’s program to build visibility, community and resources for women in technology, is committed to changing this narrative. Founded is a new web series that shares the stories of women founders using tech to solve some of the world’s challenges. For our first season, we’re taking our viewers to Atlanta, home of one of the largest technology hubs in the U.S., to highlight the stories of four women of color entrepreneurs.

Today, we’re releasing our second episode, an interview with Sandeep Ahuja. Sandeep is the co-founder of cove.tool, a software platform that helps architects and engineers model energy efficient buildings. We had the chance to talk to the Atlanta-based entrepreneur about her international upbringing, how she creates community for women in tech and how it felt to make Forbes “30 Under 30” list. 

Can you explain what cove.tool is to someone who’s not in tech?

Buildings account for 40 percent of total carbon emissions, and while developers and owners don’t mind doing the “right thing” for the planet, no one has unlimited budgets to spend on green building design. We still have to make things affordable, and that’s exactly what cove.tool’s smart optimization does. We want to make it easier to build sustainable, green, energy-efficient buildings.

What originally inspired your interest in fighting climate change?

As a daughter of a diplomat, I traveled the world seeing the remarkable homogeneity of buildings in climates as diverse as Riyadh and Moscow. Given the outsized contribution buildings make to climate change, I was deeply troubled by the lack of architectural response. I wanted to disrupt this idea, and for me, given that I moved to a different country every four years, I’ve always felt comfortable with change and with confronting entrenched beliefs. For me, there was no such thing as conforming to conventions.

What was it like to be named to the Forbes “30 under 30” list? 

It’s both exciting and humbling; so many people reached out to express support and congratulations. It was exciting to see so many strong women on the list, as well as so many immigrants, including myself!

Cove.tool is meant to help architecture and engineering professionals fight climate change, but how can everyone else help? 

Getting politically active and pushing business and political leaders to take action is the key. Multinational corporations, investment firms and government regulations account for the vast majority of emissions. A good place to start in America is to join grassroots efforts like Citizens Climate Lobby, a bipartisan organization tackling climate change. Collaborating with them is a great way to organize, volunteer and raise awareness. Writing letters to your local representative and congressperson, and voting for candidates who fight climate change, also makes a big difference.

Why do you think it’s important for women in the entrepreneur and tech worlds to create community? 

Being a data-driven person, I let the data answer the “why.” Women only receive 2 percent of VC funding and make up only 11 percent of leadership in tech; this is creating a world of systematic bias. This needs to change, and the change can start with me, you and everyone else. I drive change by making sure that cove.tool maintains a strong gender and diversity ratio and that we put women in leadership roles. Our first non-founder team member was a woman, and the second was a woman, too, and they weren’t hired for any other reason aside from the fact that they deserved those roles and had the best skill sets. I also volunteer, coach and hopefully inspire other women founders and architects.

New malware protections for Advanced Protection users

Advanced Protection safeguards the personal or business Google Accounts of anyone at risk of targeted attacks—like political campaign teams, journalists, activists and business leaders. It’s Google's strongest security for those who need it most, and is available across desktops, laptops, smartphones and tablets. 

One of the many benefits of Advanced Protection is that it constantly evolves to defend against emerging threats, automatically protecting your personal information from potential attackers. Today we're announcing new ways that Advanced Protection is defending you from malware on Android devices. 

Play Protect app scanning is automatically turned on

Google Play Protect is Google's built-in malware protection for Android. It scans and verifies 100 billion apps each day to keep your device, data and apps safe. Backed by Google's machine learning algorithms, it’s constantly evolving to match changing threats. To ensure that people enrolled in our Advanced Protection Program benefit from the added security that Google Play Protect provides, we’re now automatically turning it on for all devices with a Google Account enrolled in Advanced Protection and will require that it remain enabled. 

Limiting apps from outside the Play Store

Advanced Protection is committed to keeping harmful apps off of enrolled users’ devices. All apps on the Google Play Store undergo rigorous testing, but apps outside of Google Play can potentially pose a risk to users’ devices. As an added protection, we’re now blocking the majority of these non-Play apps from being installed on any devices with a Google Account enrolled in Advanced Protection. You can still install non-Play apps through app stores that were pre-installed by the device manufacturer and through Android Debug Bridge. Any apps that you’ve already installed from sources outside of Google Play will not be removed and can still be updated.

G Suite users enrolled in the Advanced Protection Program will not get these new Android protections for now; however, equivalent protections are available as part of endpoint management. See this help center article for a full list of Android device policies, specifically: “Verify apps,” which prevents users from turning off Google Play Protect, and “Unknown apps,” which prevents users from installing apps from outside the Play Store.

When will these changes roll out?

Starting today, these changes for Android will gradually roll out for Google Accounts that are enrolled in Advanced Protection. We’ll also be rolling out new malware protections for Chrome later this year, building upon the risky download protections we announced in 2019. 

You can learn more about Advanced Protection on Android here, and to enroll in Google's Advanced Protection, visit g.co/advancedprotection.

Source: Google Chrome

Cultivating digital “oases” in the local news landscape

Editor’s Note: Susan Leath is the director of the Center for Innovation and Sustainability in Local Media at UNC Hussman School of Journalism.

In my 30 years of working in journalism, I’ve seen first-hand the value of trusted news sources that help citizens connect and engage. As a local publisher for McClatchy, and later regional president with Gannett, I’ve also experienced the significant challenges facing local news in the wake of rapidly changing technology and consumer behavior. 

At UNC Hussman School of Journalism and Media’s Center for Innovation and Sustainability in Local Media (UNC CISLM), our mission is to help local news organizations retool for the digital age. We continuously hear from publishers that achieving long-term sustainability requires a fundamental mindset shift, and new types of resources to help them succeed.

That’s why we’re partnering with the Google News Initiative (GNI), LION Publishers and Douglas K. Smith on Project Oasis: a research initiative focused on helping local news organizations navigate the complex choices they face in establishing and growing their digital business. The first step is to develop a database that maps the current landscape of digital native local news publishers in the U.S. and Canada. Then, through in-depth interviews with these local news site founders at key stages of growth, we will develop resources to help others grow, including a “Starter Pack” for aspiring entrepreneurs.

This initiative responds to UNC’s News Deserts Project, led by UNC Knight Professor of Journalism and Digital Media Economics Penny Abernathy, which highlighted the rise of rural and urban communities where residents have limited access to the credible and comprehensive news and information that feeds democracy at the grassroots level.

This research showed that by 2018, we had lost 1,800—nearly 25 percent—of the local newspapers that existed in 2004. In just two years since, that number has jumped to 2,100, while cost-cutting by many remaining newspapers has rendered them ghosts of their former selves.

Despite these stark numbers, we’re starting to see the evidence that local news digital startups can thrive in communities and fill these gaps. Penny’s research has shown that a positive response to the loss of local newspapers has come from the several hundred digital news outlets that now span the country, most of them started in the past decade. Project Oasis will build on a range of programs at UNC CISLM to arm these local news publishers with sustainable practices to help strengthen their digital business models and strategies.

For this project, LION will help us focus our research on the most pressing and relevant questions, and engage with the right news organizations. The GNI will bring digital expertise and inform our research with lessons from the GNI Local Experiments project, where they’re working with global partners to create new digital local news organizations. We’re also partnering with Doug Smith, the founder of Media Transformation Challenge and architect of Table Stakes, which has supported the growth of more than 150 local news organizations in the U.S. and Europe.

This month, we will begin surveying digital native local news organizations in the U.S. and Canada to shine a light on the business strategies that have set some apart from others. If you run a digital native local news publication, we invite you to complete the survey, which we’re making available until the end of April 2020.

I believe local news is an essential element of a strong democracy. These information outlets build trust, inspire civic engagement and bring communities together. Through new research and resources, we believe this project has the potential to help shape a bright future for local news.

New properties for virtual, postponed, and canceled events

Given the current status of COVID-19 around the world, many events are being canceled, postponed, or moved to an online-only format. Google wants to show users the latest, most accurate information about your events in this fast-changing environment, so we've added some new, optional properties to our developer documentation to help. These properties apply to all regions and languages. This is one part of our overall effort on schema updates to support publishers and users. Here are some important tips on keeping Google up to date on your events.

Update the status of the event

The schema.org eventStatus property sets the status of the event, particularly when the event has been canceled, postponed, or rescheduled. This information is helpful because it allows Google to show users the current status of an event, instead of dropping the event from the event search experience altogether.
  • If the event has been canceled: Set the eventStatus property to EventCancelled and keep the original date in the startDate of the event.
  • If the event has been postponed (but the date isn't known yet): Keep the original date in the startDate of the event and update the eventStatus to EventPostponed. The startDate property is required to help identify the unique event, so keep the original startDate until you know the new date. Once you know the new date information, change the eventStatus to EventRescheduled and update the startDate and endDate with the new date information.
  • If the event has been rescheduled to a later date: Update the startDate and endDate with the relevant new dates. Optionally, you can also mark the eventStatus field as EventRescheduled and add the previousStartDate.
  • If the event has moved from in-person to online-only: Optionally, update the eventStatus field to indicate the change with EventMovedOnline (see the markup sketch after this list).
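For example, markup for the rescheduled case might look like the sketch below. The JSON-LD payload is what you would embed in your page; it is held in a Kotlin raw string here only so it can be templated or served from code, and all values are illustrative:

```kotlin
// Illustrative JSON-LD for an event whose new date is known.
val rescheduledEventJsonLd = """
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Example Concert",
  "eventStatus": "https://schema.org/EventRescheduled",
  "previousStartDate": "2020-04-10T19:00",
  "startDate": "2020-07-24T19:00",
  "endDate": "2020-07-24T22:00",
  "location": {
    "@type": "Place",
    "name": "Example Hall",
    "address": "350 Example Ave, Exampletown, CA"
  }
}
""".trimIndent()
```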

For more information on how to implement the eventStatus property, refer to the developer documentation.

Mark events as online only

More events are shifting to online only, and we're actively working on a way to show this information to people on Google Search. If your event is happening only online, make sure to set the eventAttendanceMode property to OnlineEventAttendanceMode and mark the event's location with the VirtualLocation type, as sketched below.

For more information on how to implement the VirtualLocation type, refer to the developer documentation.
Note: You can start using VirtualLocation and eventAttendanceMode even though they are still under development on Schema.org.
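Putting the two properties together, an online-only event might be marked up like this sketch (values are illustrative):

```kotlin
// Illustrative JSON-LD for an event that has moved online.
val onlineEventJsonLd = """
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Example Livestream",
  "eventStatus": "https://schema.org/EventMovedOnline",
  "eventAttendanceMode": "https://schema.org/OnlineEventAttendanceMode",
  "startDate": "2020-04-10T19:00",
  "location": {
    "@type": "VirtualLocation",
    "url": "https://example.com/livestream"
  }
}
""".trimIndent()
```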

Update Google when your event changes


After you make changes to your markup, make sure you update Google. We recommend that you make your sitemap available automatically through your server. This is the best way to make sure that your new and updated content is highlighted to search engines as quickly as possible.
If you have any questions, let us know through the Webmasters forum or on Twitter.
Posted by Emily Fifer, Event Search Product Manager

Works With Chromebook helps you find Chromebook accessories

A charger that gives you power when you need it, cables that ensure you can make important connections, a mouse that helps you work more efficiently—these accessories make it easier to work and play on your Chromebook. To help you find your next accessory, look for the Works With Chromebook logo on products in stores and online.

Chromebook and accessories

You’ll begin to see the Works With Chromebook badge on certified accessories in the U.S., Canada and Japan. We’ve tested these accessories to ensure they comply with Chromebook’s compatibility standards. Once you see the badge, you can be sure the product works seamlessly with your Chromebook.

Works With Chromebook certified accessories come from leading brands—including AbleNet, Anker, Belkin, Brydge, Cable Matters, Elecom, Hyper, Kensington, Logitech, Plugable, Satechi, StarTech, and Targus. Find Works With Chromebook accessories at Amazon.com, Best Buy (U.S. and Canada), Walmart.com, and Bic Camera (Japan), with other retailers and countries coming soon.

For more information about Works With Chromebook, check out the Chromebook website.