An Interspecies Assembly in New York

Biodiversity is collapsing, sea levels are rising, and weather is becoming more extreme and unpredictable. Human activity is now unequivocally linked to climate change and its consequences. The question stands: Is the planet’s ecological turmoil a result of our unwillingness to listen to the wants and needs of other species? 


This year at the United Nations General Assembly, the annual gathering of world leaders and representatives from across humanity to address and act on our planet’s most urgent crises, we decided to open the first-ever Interspecies Assembly. Created in collaboration with eight cultural organizations, the new interactive digital hub on Google Arts & Culture brings the representatives and voices of other life forms into the discussions and decisions around the changing environment. The simple mission: to foster friendly relationships among species, in the hope of paving the way for a truly safe and sustainable future. 


Interspecies Assembly by SUPERFLEX for ART 2030 will be presented in two parts. In Central Park, there will be a gathering site, also entitled Interspecies Assembly, marked by a series of pink stone sculptures arranged in a broken circle. It invites visitors of all species to enter and exit from any direction. Engraved across the sculptures is an “Interspecies Contract,” which suggests a new code of conduct for humans, based on what we call Interspecies Ethics. For example, it asks the human participant to give a moment of their time to other species by ‘idling’ for five minutes to cultivate awareness of the many different life forms that surround us every day.
Image of the UN building at night with a colorful projection on its side.

Visualisation of Vertical Migration, SUPERFLEX, 2021. UN Photo/Eskinder Debebe

Hosting an interspecies discussion

In addition, we have invited the first non-human representative directly to the United Nations to participate in the high-level discussions through the film Vertical Migration. The protagonist of this film is a computer-generated siphonophore, one of an order of marine animals from the deep sea. These are fascinating creatures that are unfamiliar to many of us: they vary wildly in size, from the slightness of a fingernail to the length of a whale, and look nothing like what we find on land. Their bodies are also quite different from what we know: they are composed of many individual zooids that work in harmony as a society to survive. Perhaps, if we can see and feel from the perspective of a siphonophore, we can also find inspiration for how we approach the world around us.

Image of a gleaming blue jellyfish breaking a water surface

Bluebottle I | Matty Smith | Underwater Earth

This delegation will also extend its voyage, from the depths of the ocean to the United Nations to you. Together with Google Arts & Culture, ART 2030 and Kollision, we are making the siphonophore from Vertical Migration come to life through Augmented Reality, for an intimate encounter with this marine species.

A grey amorphous AR model hovering over a wooden table

AR model | Vertical Migration's Siphonophore | SUPERFLEX in collaboration with Kollision | ART 2030

Check out the online exhibition

Also on Google Arts & Culture, a new digital exhibition brings together contributions from museums and scientific institutions, encouraging you to learn what makes this creature so special and why biodiversity matters. You can explore it at g.co/siphonophore.

Faced with the spectre of the siphonophore, we hope people will recognize that we are all connected, that our actions affect each other, and that we all share a common fate.


Beta Channel Update for Chrome OS

The Beta channel has been updated to 94.0.4606.50 (Platform version: 14150.32.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 


If you find issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using 'Report an issue...' in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Matt Nelson


Google Chrome OS

Design your own custom themes in new Google Sites

What’s changing 

You can now create highly customized themes in new Google Sites that align with your organization’s brand guidelines or your own personal style. 




Who’s impacted 

End users 


Why you’d use it 

Currently, there are six pre-created themes with limited customization options, which are helpful for quickly creating a consistent look and feel for sites. With the addition of custom themes, you have greater control over things like: 
  • Fonts and text styles
  • Colors
  • Brand images
  • Navigation settings
  • Styles of components such as buttons, and more

We hope that custom themes and custom templates allow you to create and share sites that best match your brand guidelines or specific style. 


Getting started 


Rollout pace 

  • Rapid Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on September 16, 2021 
  • Scheduled Release domains: Full rollout (1-3 days for feature visibility) starting on October 6, 2021 

Availability 

  • Available to all Google Workspace customers, as well as G Suite Basic and Business customers
  • Available to users with personal Google Accounts

Resources 

Toward Fast and Accurate Neural Networks for Image Recognition

As neural network models and training data size grow, training efficiency is becoming an important focus for deep learning. For example, GPT-3 demonstrates remarkable capability in few-shot learning, but it requires weeks of training with thousands of GPUs, making it difficult to retrain or improve. What if, instead, one could design neural networks that were smaller and faster, yet still more accurate?

In this post, we introduce two families of models for image recognition that leverage neural architecture search, and a principled design methodology based on model capacity and generalization. The first is EfficientNetV2 (accepted at ICML 2021), which consists of convolutional neural networks that aim for fast training speed on relatively small-scale datasets, such as ImageNet1k (with 1.28 million images). The second family is CoAtNet, which consists of hybrid models that combine convolution and self-attention, with the goal of achieving higher accuracy on large-scale datasets, such as ImageNet21k (with 13 million images) and JFT (with billions of images). Compared to previous results, our models are 4-10x faster while achieving a new state-of-the-art 90.88% top-1 accuracy on the well-established ImageNet dataset. We are also releasing the source code and pretrained models on the Google AutoML GitHub.

EfficientNetV2: Smaller Models and Faster Training
EfficientNetV2 is based upon the previous EfficientNet architecture. To improve upon the original, we systematically studied the training speed bottlenecks on modern TPUs/GPUs and found: (1) training with very large image sizes results in higher memory usage and thus is often slower on TPUs/GPUs; (2) the widely used depthwise convolutions are inefficient on TPUs/GPUs, because they exhibit low hardware utilization; and (3) the commonly used uniform compound scaling approach, which scales up every stage of convolutional networks equally, is sub-optimal. To address these issues, we propose both a training-aware neural architecture search (NAS), in which the training speed is included in the optimization goal, and a scaling method that scales different stages in a non-uniform manner.

The training-aware NAS is based on the previous platform-aware NAS, but unlike the original approach, which mostly focuses on inference speed, here we jointly optimize model accuracy, model size, and training speed. We also extend the original search space to include more accelerator-friendly operations, such as FusedMBConv, and simplify the search space by removing unnecessary operations, such as average pooling and max pooling, which are never selected by NAS. The resulting EfficientNetV2 networks achieve improved accuracy over all previous models, while being much faster and up to 6.8x smaller.
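As a rough illustration, the search reward in this style of training-aware NAS can be written as a simple combination of accuracy, training step time, and parameter count. The sketch below is a minimal Python rendering of that idea; the exponent values are small negative numbers in the spirit of such multi-objective rewards and should be treated as illustrative rather than the paper's exact settings.

def nas_reward(accuracy, step_time, param_count, w=-0.07, v=-0.05):
    # Trade accuracy off against training step time and model size:
    # negative exponents penalize slower and larger candidate models.
    return accuracy * (step_time ** w) * (param_count ** v)

# A slightly less accurate but much faster, smaller candidate can win:
print(nas_reward(accuracy=0.850, step_time=1.0, param_count=20e6))
print(nas_reward(accuracy=0.845, step_time=0.5, param_count=12e6))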

To further speed up the training process, we also propose an enhanced method of progressive learning, which gradually changes image size and regularization magnitude during training. Progressive training has been used in image classification, GANs, and language models. Our approach focuses on image classification but, unlike previous approaches that often trade accuracy for improved training speed, it can slightly improve accuracy while also significantly reducing training time. The key idea in our improved approach is to adaptively change regularization strength, such as the dropout ratio or data augmentation magnitude, according to the image size. For the same network, a small image size leads to lower network capacity and thus requires weak regularization; conversely, a large image size requires stronger regularization to combat overfitting.

Progressive learning for EfficientNetV2. Here we mainly focus on three types of regularization: data augmentation, mixup, and dropout.
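A minimal Python sketch of this schedule follows; the size and regularization ranges below are invented for illustration, and the real recipe divides training into stages and also scales data augmentation and mixup:

def progressive_schedule(epoch, total_epochs,
                         min_size=128, max_size=300,
                         min_dropout=0.1, max_dropout=0.3):
    # Linearly interpolate from small images with weak regularization
    # early in training to large images with strong regularization later.
    t = epoch / max(total_epochs - 1, 1)
    image_size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return image_size, dropout

# Early vs. late training settings:
print(progressive_schedule(0, 100))    # (128, 0.1)
print(progressive_schedule(99, 100))   # (300, 0.3)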

We evaluate the EfficientNetV2 models on ImageNet and a few transfer learning datasets, such as CIFAR-10/100, Flowers, and Cars. On ImageNet, EfficientNetV2 significantly outperforms previous models with about 5–11x faster training speed and up to 6.8x smaller model size, without any drop in accuracy.

EfficientNetV2 achieves much better training efficiency than prior models for ImageNet classification.

CoAtNet: Fast and Accurate Models for Large-Scale Image Recognition
While EfficientNetV2 is still a typical convolutional neural network, recent studies on Vision Transformer (ViT) have shown that attention-based transformer models could perform better than convolutional neural networks on large-scale datasets like JFT-300M. Inspired by this observation, we further expand our study beyond convolutional neural networks with the aim of finding faster and more accurate vision models.

In “CoAtNet: Marrying Convolution and Attention for All Data Sizes”, we systematically study how to combine convolution and self-attention to develop fast and accurate neural networks for large-scale image recognition. Our work is based on an observation that convolution often has better generalization (i.e., the performance gap between training and evaluation) due to its inductive bias, while self-attention tends to have greater capacity (i.e., the ability to fit large-scale training data) thanks to its global receptive field. By combining convolution and self-attention, our hybrid models can achieve both better generalization and greater capacity.

Comparison between convolution, self-attention, and hybrid models. Convolutional models converge faster, ViTs have better capacity, and the hybrid models achieve both faster convergence and better accuracy.

We observe two key insights from our study: (1) depthwise convolution and self-attention can be naturally unified via simple relative attention, and (2) vertically stacking convolution layers and attention layers in a way that considers their capacity and computation required in each stage (resolution) is surprisingly effective in improving generalization, capacity and efficiency. Based on these insights, we have developed a family of hybrid models with both convolution and attention, named CoAtNets (pronounced “coat” nets). The following figure shows the overall CoAtNet network architecture:

Overall CoAtNet architecture. Given an input image with size HxW, we first apply convolutions in the first stem stage (S0) and reduce the size to H/2 x W/2. The size continues to reduce with each stage. Ln refers to the number of layers. The first two stages (S1 and S2) mainly adopt MBConv building blocks consisting of depthwise convolution. The later two stages (S3 and S4) mainly adopt Transformer blocks with relative self-attention. Unlike the previous Transformer blocks in ViT, here we use pooling between stages, similar to the Funnel Transformer. Finally, we apply a classification head to generate the class prediction.
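To make the first insight more concrete, here is a simplified 1-D sketch in Python/NumPy of how a convolution-like static kernel, indexed by relative position, can be folded into the attention logits. The shapes and the scalar per-offset bias are deliberate simplifications of the actual 2-D, multi-head formulation:

import numpy as np

def relative_attention(x, w_rel):
    # x: [L, D] token features; w_rel: [2L-1] learned bias, one scalar
    # per relative offset, acting as a translation-equivariant
    # (convolution-like) term alongside input-dependent attention.
    L, D = x.shape
    logits = x @ x.T / np.sqrt(D)                # input-dependent term
    idx = np.arange(L)
    rel = idx[None, :] - idx[:, None] + (L - 1)  # offsets mapped to [0, 2L-2]
    logits = logits + w_rel[rel]                 # add relative-position bias
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)     # softmax over keys
    return attn @ x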

CoAtNet models consistently outperform ViT models and their variants across a number of datasets, such as ImageNet1K, ImageNet21K, and JFT. When compared to convolutional networks, CoAtNet exhibits comparable performance on a small-scale dataset (ImageNet1K) and achieves substantial gains as the data size increases (e.g., on ImageNet21K and JFT).

Comparison between CoAtNet and previous models after pre-training on the medium sized ImageNet21K dataset. Under the same model size, CoAtNet consistently outperforms both ViT and convolutional models. Noticeably, with only ImageNet21K, CoAtNet is able to match the performance of ViT-H pre-trained on JFT.

We also evaluated CoAtNets on the large-scale JFT dataset. To reach a similar accuracy target, CoAtNet trains about 4x faster than previous ViT models and, more importantly, achieves a new state-of-the-art top-1 accuracy on ImageNet of 90.88%.

Comparison between CoAtNets and previous ViTs: ImageNet top-1 accuracy after pre-training on the JFT dataset under different training budgets. The four best models are trained on JFT-3B with about 3 billion images.

Conclusion and Future Work
In this post, we introduce two families of neural networks, named EfficientNetV2 and CoAtNet, which achieve state-of-the-art performance on image recognition. All EfficientNetV2 models are open sourced, and the pretrained models are also available on TF Hub. CoAtNet models will also be open-sourced soon. We hope these new neural networks can benefit the research community and the industry. In the future, we plan to further optimize these models and apply them to new tasks, such as zero-shot learning and self-supervised learning, which often require fast models with high capacity.
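As a quick-start sketch, a pretrained EfficientNetV2 classifier can be loaded from TF Hub roughly as follows; the exact model handle and the 384x384 input resolution are assumptions that should be verified on tfhub.dev:

import tensorflow as tf
import tensorflow_hub as hub

# Handle and input size are illustrative; check tfhub.dev for the
# EfficientNetV2 variant and resolution you need.
model = tf.keras.Sequential([
    hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_s/classification/2")
])
model.build([None, 384, 384, 3])
logits = model(tf.zeros([1, 384, 384, 3]))  # [1, 1000] ImageNet class logits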

Acknowledgements
Special thanks to our co-authors Hanxiao Liu and Quoc Le. We also thank the Google Research, Brain Team and the open source contributors.

Source: Google AI Blog


AdSense Management API v1.4 Sunset Reminder

This is the final reminder that v1.4 of the AdSense Management API will sunset on October 12, 2021. Any requests made to this version will stop working on that date. If you haven’t already migrated to v2 of the AdSense Management API, now is the time to do so.

For help with migrating, see our migration overview guide. We also have updated examples for all five of our client libraries: Java, Python, Ruby, PHP, and .NET. For a full list of changes, see the release notes.
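As a sketch of what a v2 request looks like with the Python client library (the token file path and the fields read back are illustrative; see the client library documentation for the full auth flow):

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Load previously authorized user credentials (path is hypothetical).
credentials = Credentials.from_authorized_user_file('adsense_token.json')

# Build an AdSense Management API v2 client and list accessible accounts.
service = build('adsense', 'v2', credentials=credentials)
response = service.accounts().list().execute()
for account in response.get('accounts', []):
    print(account['name'])  # resource names look like accounts/pub-...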

As always, feel free to reach out to us on the AdSense API forum with any API-related questions.

Open source SystemVerilog tools in ASIC design

Open source hardware is undeniably undergoing a renaissance whose origin can be traced to the establishment of the RISC-V Foundation (later renamed RISC-V International). The open ISA and ecosystem, in which Antmicro has participated since the beginning as a Founding member, has sparked many open source CPU implementations, as well as new tooling, methodologies, and trends that allow for more collaborative and software-driven design.

Many of those broader open hardware activities have been finding a home in CHIPS Alliance, an open source organization we participate in as a Platinum member alongside Google, Intel, Western Digital, SiFive and others, whose goals explicitly encompass:
  • creating and maintaining open source ASIC and FPGA design tools (digital and analog)
  • open source core and uncore IP
  • interconnects, interoperability specs and more
This is in perfect alignment with Antmicro’s mission—we’ve been heavily involved with many of the projects inside of and related to CHIPS, providing commercial support, engineering services, and assistance in practical adoption for enterprise deployments.

Today, a range of everyday design, development, testing, and verification tasks are already possible using open source tools and components, and they are part of our and our customers’ everyday workflows. Other developments are within reach given a reasonable amount of development effort, which we can provide based on specific scenarios. Others still are much further away, but with dedicated efforts inside CHIPS, in which we are involved together with partners like Google and Western Digital, there is a pathway towards a completely open hardware design and verification ecosystem. This will eventually unlock incredible potential in new design methodologies, vertical integration capabilities, and education and business opportunities. Until then, Antmicro can help you extract practical value in many scenarios, such as simulation, linting, formatting, synthesis, continuous integration, and more.

Building a SystemVerilog ecosystem in CHIPS

Some of the challenges towards practical adoption of open source in ASIC design have been related to the fact that a significant proportion of advanced ASIC design is done in SystemVerilog, a fairly complex and powerful language in its own right, which used to be poorly supported in the open source tooling ecosystem. Partial solutions like SystemVerilog to Verilog converters or paid plugins existed, but direct support lagged behind, making open source tools for SystemVerilog a difficult sell previously.

Fortunately, this has been changing rapidly thanks to a dedicated development effort spearheaded by Google and Antmicro. Projects in this space that we have been developing include Verible, Surelog, UHDM, and sv-tests, as well as integrations with existing tools like Yosys and Verilator under the umbrella of the SymbiFlow open source FPGA project. These are now officially being transferred to the CHIPS Alliance to increase awareness and build a broader SystemVerilog ecosystem.

In this note, we will walk you through the state of the art in new SystemVerilog capabilities in open source projects, and invite you to reach out to see how CHIPS Alliance’s SystemVerilog projects can be useful to you today or in the near future.

A walk through the state of the art in new SystemVerilog capabilities in open source projects

Verible

The Verible project originated at Google; its main mission is to make SystemVerilog easily and quickly parsable for a wide variety of applications mostly focusing on developer tools.

Verible is a set of tools based on a common SystemVerilog parsing engine, providing a command line interface which makes integration with other tools for daily usage or CI systems for automatic testing and deployment a breeze.

Antmicro has been involved in the development of Verible since its initial open source release, and we now provide a significant portion of current development efforts, helping adapt it for use in various open source projects or commercial environments that use SystemVerilog. One notable user is the security-focused OpenTitan project, which has driven many interesting developments and provides a good showcase for these capabilities: it is completely open source, well documented, fairly complex, and used in real applications.

Linter

One of the most common use cases for Verible is linting. The linter analyzes code for patterns and constructs that are deemed undesirable according to the implemented lint rules. The rules follow authoritative style guides that can be enforced on a project or company level in various SystemVerilog projects.

The rules range from simple ones, like making sure the module name matches the file name, to more sophisticated ones, like checking variable naming conventions (all caps, snake case, a specific prefix or suffix, etc.) or making sure the labels after the begin and end statements match.

A full list of rules can be found in the Verible lint documentation and is constantly growing. Usage is very simple:

$ verible-verilog-lint --ruleset all core.sv 

core.sv:3:11: Interface names must use lower_snake_case naming convention and end with _if. [Style: interface-conventions] [interface-name-style]


The output of the linter is easy to understand, as the way issues are reported to the user is modeled after popular programming language compilers.

The linter is highly configurable. It is possible to select the rules against which compliance will be checked, and some rules allow for detailed configuration (e.g. maximum line length).

Rules can also be selectively waived in specific files or at specific lines or even by regex matching. In addition, some rules can be automatically fixed by the linter itself.
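For example, rules can be toggled from the command line; the flag syntax below follows the Verible lint documentation, and the rule name is just illustrative:

$ verible-verilog-lint --rules=-interface-name-style core.sv

Individual findings can likewise be waived directly in the source with a comment such as:

// verilog_lint: waive interface-name-style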

Formatter

The Verible formatter is a complementary tool to the linter: it automatically detects and fixes various formatting issues, like improper indentation or alignment. As opposed to the linter, it only addresses issues that have no lexical impact on the source code.

The formatter also comes with useful helper scripts for selective and interactive reformatting (e.g. only format files that changed according to git, ask before applying changes to each chunk).

A toolset that consists of both the linter and the formatter can effectively remove all the discussions about styling, preferences and conventions from all pull requests. Developers can then focus solely on the technical aspects of the proposed changes.

$ cat sample.sv
typedef struct {
bit first;
        bit second;
bit
   third
        ;
  bit fourth;
bit fifth; bit sixth;
}
 foo_t;

$ verible-verilog-format sample.sv
typedef struct {
  bit first;
  bit second;
  bit third;
  bit fourth;
  bit fifth;
  bit sixth;
} foo_t;

Indexer

The Verible parser itself can be relatively easily used to perform many other tasks. One of the interesting use cases is generating a Kythe compatible indexing database.

Indexing a SystemVerilog project makes it very easy to collaborate on it remotely. It is possible to navigate through the source code using nothing more than a web browser.

The Kythe integration can be served on an arbitrary server, can be deployed after every commit in a project, etc. A showcase of the indexing mechanism can be found in our GitHub repository. The demo downloads the latest version of the Ibex core, indexes it, and deploys it to be viewed on a remote machine. The results can be viewed on the example index webpage.


Indexing is widely adopted for many larger open source software projects.

Thanks to Verible, it is now possible to do the same in the world of open source HDL designs, and of course private, company-wide deployments like this are also possible.

Surelog and UHDM

SystemVerilog is a powerful but also complex language, and so far no open source tools have been able to support it in full. Implementing support separately in each project, such as the Yosys synthesis tool or the Verilator simulator, would take a colossal amount of time; that’s where Surelog and UHDM come in.

Surelog, originally created and led by Alain Dargelas, aims to be a fully-featured SystemVerilog 2017 preprocessor, parser, and elaborator. It’s a modern tool and thus follows the current version of the SV standard without unnecessary deviations or legacy baggage.

What’s interesting is that Surelog is only a language frontend designed to integrate well with other tools—it outputs an elaborated design in an intermediate format called UHDM.

UHDM stands for Universal Hardware Data Model, and it’s both a file format for storing hardware designs and a library able to manipulate this format. A client application can access the data using VPI, which is a standard programming interface for SystemVerilog.

What this means is that the work required to create a SystemVerilog parser only needs to be done once, and other tools can use that parser via UHDM. This is much easier than implementing a full SystemVerilog parser within each tool. What’s more, any improvements in the unified parser will provide benefits for all client applications. Finally, any other parser is free to emit UHDM as well, so in the future we might see e.g. a UHDM backend for Verible.

Just like in Verible’s case, both Surelog and UHDM have recently been contributed to the CHIPS Alliance to drive broader adoption. We are actively contributing to both projects, especially around integrations with tooling such as Yosys and Verilator, and their practical use in open source and customer projects.

Recent Antmicro contributions adding UHDM frontends for Yosys and Verilator enabled Ibex synthesis and simulation. The complete OpenTitan project is the next milestone.

The Surelog/UHDM/Yosys flow, which enables SystemVerilog synthesis without the necessity of converting the HDL code to Verilog, is a great improvement for open source ASIC build flows such as OpenROAD’s OpenLane flow (which we also support commercially). Removing the code conversion step enables developers to perform, for example, circuit equivalence validation to check the correctness of the design.

More information about Surelog/UHDM and Verible can be found in a dedicated CHIPS Alliance presentation that was recently given by Henner Zeller, Google’s Verible lead.

UVM is in the picture

No open source ASIC design toolkit can be complete without support for Universal Verification Methodology, or UVM, which is one of the most widespread verification methodologies for large-scale ASIC design. This has also been an underrepresented area in open source tooling and changing that is an enormous undertaking, but working together with our customers, most notably Western Digital, we have been making progress on that front as well.

Across the ASIC development landscape, UVM verification is currently performed with proprietary simulators, but a more easily distributable, collaborative, and open ecosystem is needed to close the feedback loop between (emerging) open source design approaches and verification. Verilator is an extremely popular choice for other system development use cases, but it has historically not focused on UVM-style verification. Other styles of verification have been enabled in Verilator, such as those based on the very interesting and popular Python-based cocotb framework maintained by the FOSSi Foundation. But support for UVM, partly due to the size and complexity of the methodology, has been notably absent.
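To give a flavor of that Python-based approach, here is a minimal cocotb test sketch; the DUT signal names (clk, rst_n, ready) are hypothetical:

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def smoke_test(dut):
    # Drive a 10 ns clock on the (assumed) clk pin of the design.
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    dut.rst_n.value = 0           # assert active-low reset (assumed name)
    await RisingEdge(dut.clk)
    dut.rst_n.value = 1           # release reset
    await RisingEdge(dut.clk)
    assert dut.ready.value == 1   # hypothetical ready flag after reset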

One of the features missing from Verilator but needed for UVM is SystemVerilog stratified scheduling, which is a set of rules specified in the standard that govern the way time progresses in a simulation, as well as the order of operations. A SystemVerilog simulation is divided into smaller steps called time slots, and each time slot is further divided into multiple regions. Specific events can only happen in certain regions, and some regions can reoccur in a single time slot.

Until recently, Verilator had implemented only a small subset of these rules, as all scheduling was being done at compilation time. Spearheading a long-standing development effort within CHIPS Alliance, in collaboration with the maintainer of Verilator, Wilson Snyder, we have built a proof-of-concept version of Verilator with a dynamic scheduler, which manages the occurrence of certain events at runtime, extending the stratified scheduling support. More details can be found in Antmicro’s presentation for the inaugural CHIPS Alliance Deep Dive Cafe Talk.

Another feature required for UVM is constrained randomization, which allows generating random inputs to feed to a design in order to thoroughly test it. Unlike unconstrained randomization, which is already provided by Verilator, it allows the user to specify some rules for input generation, thus limiting the possible value space and making sure that the input makes sense. Work on adding this to Verilator has already started, although the feature is still in its infancy. There are many other features on the roadmap which will eventually enable practical UVM support—stay tuned with our CHIPS Alliance events to follow that development.

What next?

Support for SystemVerilog parsers, for the intermediate format, and for their respective backends and integrations with various tooling, as well as for UVM is now under heavy development. If you would like to see more effort put into a specific area, reach out to us at [email protected]. Antmicro offers commercial support services to extend the flows we’ve briefly presented here to various practical applications and designs, and to effectively integrate this approach into people’s workflows.

Adding to this our cloud expertise, Antmicro customers can benefit from a complete and industry-proven methodology, scalable between teams and across on-premise and cloud installations, transforming chip design workflows to be more software-driven and collaborative. To take advantage of open source solutions with tools like Verilator, Yosys, OpenROAD and others, tell us about your use case and we will see what can be done today.

If you are interested in collaborating on the development of SystemVerilog-focused and other open hardware tooling, join CHIPS Alliance, participate in our workgroups, and help us push innovation in ASIC design forward.

Originally posted on the Antmicro blog.

By guest author Michael Gielda, Antmicro, and Tim Ansell, Software Engineer

Dev Channel Update for Desktop

The Dev channel has been updated to 95.0.4638.10 for Windows, Linux and Mac.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana

Google Chrome

App performance to drive app excellence

Posted by Maru Ahues Bouza, Director Android Developer Relations

hand drawing shapes on a tablet

In our previous blog post in this series, we defined app excellence as “creating an app that provides consistent, effortless, and seamless app user experiences. It is high performing and provides a great experience, no matter the device being used.” Let’s focus on the concept of app performance — what are the features of high performing apps, and how do you achieve app excellence through strong performance?

From a user’s perspective, high-performing apps “just work.” However, the process of creating a high performing app is not always straightforward. To break things down, here are the main dimensions of high performance:

Stability

An app should be robust and reliable. It should not freeze (application not responding, or “ANR”) or crash. Before you launch your app, check out Google Play’s pre-launch report to identify potential stability issues. After deployment, pay attention to the Android Vitals page in the Google Play developer console. Specifically, ANRs are caused by threading issues. The ANR troubleshooting guide can help you diagnose and resolve any ANRs that exist in your app.

Quick loading

Imagine the first experience a user has of your app is… waiting. At some point, they are going to get distracted or bored, and you have lost a new user. Your app should either load quickly or provide some sort of feedback onscreen, such as a progress indicator. You can use data from Android vitals to quantify any issues you may have with start up times. Android vitals considers start up times excessive when:

  • Cold startup takes 5 seconds or longer.
  • Warm startup takes 2 seconds or longer.
  • Hot startup takes 1.5 seconds or longer.

However, these are relatively conservative numbers. We recommend you aim for lower. Here are some great tips on how to test start up performance.

Fast rendering

High quality frame rendering is not just for games. Smooth visual experiences that don’t stall or act sluggish are also important for apps. At a minimum, aim to render frames every 16ms to achieve 60 frames per second, but bear in mind there are devices on the market with faster refresh rates. To monitor performance as you test, use the Profile HWUI rendering option. Here are tools to help diagnose rendering issues.

Economical with battery usage

As soon as a user realizes your app is draining their battery, they are going to consider uninstalling it. Your app can drain the battery through stuck partial wake locks, excessive wakeups, background Wi-Fi scans, or background network usage. Use the Android Studio energy profiler, combined with planned background work, to diagnose unexpected battery use. For apps that need to execute background tasks with a guarantee that the system will run them even if the app exits, WorkManager is a battery-friendly Android Jetpack library that runs deferrable, guaranteed background work when the work’s constraints are satisfied.

Using up-to-date SDKs

For both security and performance, it’s important that any Google or third-party SDKs used are up-to-date. Improvements to these SDKs, such as stability, compatibility, or security, should be available to users in a timely manner. You are responsible for the entire code base, including any third party SDKs you may utilize. For Google SDKs, consider using SDKs powered by Google Play services, when available. These SDKs are backward compatible, receive automatic updates, reduce your app package size, and make efficient use of on-device resources.

To learn more, please visit the Android app excellence webpage, where you will find case studies, practical tips, and the opportunity to sign up for our App Excellence summit.

In our next blog post, we will talk about seamless user experiences across devices. Sign up for the Android developer newsletter here to be notified of the next installment, and to get news and insights from the Android team.