Monthly Archives: January 2021

Quicksave: The latest from Google Play Pass

Google Play Pass helps you connect with awesome digital content: It’s your pass to hundreds of apps and games without ads and in-app purchases. It’s been a pretty busy year for Play Pass, so let’s take a moment to spotlight a few of the games and developers we think you’ll enjoy.

Program updates

This past year, Play Pass…

  • Celebrated its first birthday

  • Expanded to 42 countries

  • Added more than 300 new apps and games, including more than 100 teacher-approved kids’ titles

New games coming and recent additions

An image from the game Giant Dancing Plushies


Giant Dancing Plushies (Rogue Games, Inc.):

Help huge, adorable stuffed animals conquer the planet in this adorable (yet… terrifying) take on the rhythm game genre. Jam to the great in-game tracks or Kaiju it up to your own favorite music and get ready to stomp on the city! 

An image from the game Figment


Figment (Bedtime Digital Games):

Venture into the whimsical, dream-like world of the human mind. Solve puzzles to restore the peace and rediscover the courage that's been lost, all while beating back the nightmarish fears that threaten to take over! If you’re looking for a mind-blowing weekend playlist, we recommend checking out Figment, Samorost 3, Old Man’s Journey and The Gardens Between (all included with your Play Pass subscription). Can you identify the theme that links them?

The logo for the game The Legend of Bum-Bo


The Legend of Bum-Bo (The Label Limited):

Help Bum-Bo recover his lost coin in this edgy, puzzle-based, rogue-like prequel to The Binding of Isaac. We won’t give away too much, but this combo of turn-based combat and poop (yes, poop) makes for one unforgettable gaming experience.

Titles we can’t get enough of

The logo from the game The Escapists

Everything by Team17: Bust out of a life behind bars, save some sheep and battle your way to worm domination. Almost every live Android title from this renowned publisher will be joining Play Pass. From The Escapists series and Flockers to every Worms game, Team17 sure knows how to bring it, and we’re all here for it.

  • The Escapists: Prison Escape

  • The Escapists 2: Pocket Breakout

  • and many more

An image from the game Basketball Club Story

Basketball Club Story (Kairosoft): Create your own basketball team, recruit a cast of zany players and compete against other teams in the league! You’re the coach taking the team to victory in this sim game from Japanese developer Kairosoft. Keep an eye out for more from them soon.

An image from the video game Grand Mountain Adventure

Grand Mountain Adventure: Snowboard Premiere (Toppluva AB): The new Winter 2021 Expansion adds a bunch of new mountains and challenging excitement to this local multiplayer game. If you can’t hit the slopes this winter, everything you need (including an avalanche of recently added content) is included in this game for you. Well… everything except the après-ski festivities.

The logo of the video game Holedown

Holedown (grapefrukt games): Shoot balls, break blocks, upgrade all the things. How deep can you go? We love this game so much and are excited to have just welcomed another grapefrukt game (rymdkapsel) to Play Pass.

The logo of the video game Evoland

Evoland (Playdigious): Embark on an epic action/adventure journey with plenty of humor and nods to the classics. Upgrade your graphics and gameplay as you advance on your quest. As we know, every great title has a sequel, so make sure to be on the lookout for more Evoland coming to Play Pass.

Meet 3 women who found community in India’s tech scene

From left to right: Dhruva Shastri, Varsha Jaiswal and Supriya Shashivasan.

Based on research Women Techmakers conducted in 2018, women make up only 34 percent of all technology sector employees in India. Thankfully, there’s a rising leadership of Indian women in tech working to make this industry more inclusive and equitable.

Many of them are a part of our Women Techmakers community, which is at the forefront of this change. I recently had the chance to talk to Dhruva Shastri, Varsha Jaiswal and Supriya Shashivasan, three Women Techmakers Ambassadors from India, about their experiences in tech, and why they’re so motivated to do this work.

How would you explain your job to someone who isn't in tech?

Dhruva: I’m a Flutter developer with a background in UX design, so I’d say I create experiences and tools for people who use Android phones, and that I pay extra attention to the design so that it’s fun and easy to use.

Varsha: I’m a web developer, so I would say I talk to people about how they want to use technology so that I can create the places on the internet that serve them with the information or tools they’re looking for.

Supriya: I’m a front-end developer who takes amazing mockups and designs of websites and apps and converts them to live code so everyone can use them. I’m also pursuing research in security. So I’d say I’m looking into how best we can safeguard our assets, data and online details from hackers.

What made you want to work in this field?

Varsha: From an early age, I was interested in technology, and I wrote my first code in first grade. I’ve always been passionate about solving problems and building solutions.

Supriya: I’ve always been curious about the mechanics of how things work. I’ve also loved building things on my own since I was a child. In college, I fell in love with technology and discovering ways it could make life easier. Solving problems by building innovative solutions with nothing but a laptop!? It's amazing.

Tech is such an evolving industry. How do you keep your technology skills current?

Dhruva: The industry is constantly evolving. The internet is the easiest and best resource to learn new things and stay updated on my field. I learn from people and organizations I follow on Twitter, by reading blogs and newsletters and occasionally visiting forums like Stack Overflow, Quora, Reddit and so on. I also attend offline (and more recently, due to the pandemic, online) meetups, take online courses, do pair programming, create sample projects and talk with colleagues.

Supriya: I spend a few hours a day studying and reading different blogs and forums. I’m also part of online and offline communities like Google Developer Groups, Hashnode, Quora and Stack Overflow, where I can connect with other people who work in my field and we can talk, help, network and update each other. Attending online workshops, hackathons and meetups is also helpful.

Why is being part of the Women Techmakers community important to you?

Dhruva: This community provides a sense of belonging, safety and security. I remember when I joined the Google Developer Group here in Ahmedabad back in 2013, I was too shy to talk to anyone. And now I feel so much more confident. GDG and Women Techmakers brought out this transformation in me by providing a platform, resources, opportunities and connection. This inclusive space gives you the freedom to share your struggles, celebrate your achievements and build your support system. Now it gives me immense happiness to touch the lives of women and non-binary groups and be a part of helping them find success.

Supriya: I used to be so scared of speaking in front of more than five people. I would stutter and gasp for huge breaths of air. That all changed when I got involved with Women Techmakers. During Google Developer Days in 2019, in the community lounge, I watched women speak about the importance of community and how it helped them. I found myself raising my hand to share my experience, but I could barely manage to speak three sentences. The next thing I knew, I heard claps and saw smiles all around. I didn't feel scared anymore. I went on to become an ambassador for my own community.

What is one piece of advice you have for a woman interested in getting into tech?

Dhruva: Success always lies on the other side of our comfort zone. So when you don’t know how to do something, say yes. Take risks and learn something new, because the best way to get out of mediocrity is to keep shooting for excellence.

Varsha: Don’t hesitate, try and keep trying. Ask questions, explore more and trust yourself. And you’re not alone — we’re all together in this, helping each other grow and create a better future.

Supriya: Be fearless and bold, follow your dreams and speak your mind. Turn things to your advantage by forcing your way through any obstacles in your path.


Stable Channel Update for Chrome OS

The Stable channel is being updated to 88.0.4324.109 (Platform version: 13597.66.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.

You can review new features here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using 'Report an issue...' in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Marina Kazatcker

Google Chrome OS

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 89 (89.0.4389.23) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

Indirect membership visibility and membership hierarchy APIs now generally available

Quick launch summary 

We’re making it easier to identify, audit, and understand indirect group membership via the Cloud Identity Groups API. Specifically, we’re making the membership visibility and membership hierarchy APIs generally available. These were previously available in beta. 

Using “nested” groups to manage access to content and resources can help decrease duplication, simplify administration, and centralize access management. However, nested groups can create a complex hierarchy that can make it hard to understand who ultimately has access and why. These APIs help provide all of the information you need to understand complex group structures and hierarchies, and can help you make decisions about who to add to or remove from your groups. 
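To make this concrete, here’s a minimal Python sketch of calling the two APIs through the google-api-python-client discovery client. It assumes application default credentials with an appropriate Cloud Identity scope; the group ID and member email below are placeholders.

```python
from googleapiclient.discovery import build

# Assumes application default credentials with a Cloud Identity
# Groups scope; 'GROUP_ID' and the member email are placeholders.
service = build('cloudidentity', 'v1')
group = 'groups/GROUP_ID'

# Membership visibility: list all direct and indirect (transitive)
# members of the group.
members = service.groups().memberships().searchTransitiveMemberships(
    parent=group).execute()

# Membership hierarchy: fetch the graph of nested memberships that
# explains *why* a particular user is a member.
graph = service.groups().memberships().getMembershipGraph(
    parent=group,
    query="member_key_id == 'user@example.com'").execute()
```

Together, the transitive membership list tells you who ultimately has access, and the membership graph shows the chain of nested groups granting it.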

See our beta announcement for more information and use cases for the APIs.


Availability 

  • Available to Google Workspace Enterprise Standard and Enterprise Plus, as well as G Suite Enterprise for Education and Cloud Identity Premium customers. 
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, and Enterprise Essentials, as well as G Suite Basic, Business, Education, and Nonprofits customers.


January update to Display & Video 360 API v1

Today we’re releasing an update to the Display & Video 360 API that adds a number of new features. More detailed information about this update can be found in the Display & Video 360 API release notes.

Before using these new features, make sure to update your client library to the latest version.

If you run into issues or need help with these new features, please contact us using our support contact form.


Dev Channel Update for Desktop

The Dev channel has been updated to 90.0.4400.8 for Windows and Linux, and 90.0.4400.10 for Mac.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Srinivas Sista

Google Chrome

Beta Channel Update for Desktop

The Chrome team is excited to announce the promotion of Chrome 89 to the Beta channel for Windows, Mac and Linux. Chrome 89.0.4389.23 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore. Please head to the Chromium blog to learn more!



A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana

Sunset of the Ad Manager API v202002

On Monday, March 1, 2021, in accordance with the deprecation schedule, v202002 of the Ad Manager API will sunset. At that time, any requests made to this version will return errors.

If you’re still using v202002, now is the time to upgrade to a newer release and take advantage of additional functionality. For example, in v202011 we added the getTrafficData method to the ForecastService, which retrieves historical and forecasted network-level data for a particular date range and targeting configuration.
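For illustration, a getTrafficData request with the googleads Python client library might look like the sketch below; the network-level targeting (via the effective root ad unit) and the date range are placeholder choices, not required values.

```python
from googleads import ad_manager

client = ad_manager.AdManagerClient.LoadFromStorage()
network_service = client.GetService('NetworkService', version='v202011')
forecast_service = client.GetService('ForecastService', version='v202011')

# Target the whole network by using the effective root ad unit.
root_ad_unit_id = network_service.getCurrentNetwork()['effectiveRootAdUnitId']

traffic_data = forecast_service.getTrafficData({
    'requestedTargeting': {
        'inventoryTargeting': {
            'targetedAdUnits': [{
                'adUnitId': root_ad_unit_id,
                'includeDescendants': True,
            }]
        }
    },
    # A placeholder range spanning both historical and future dates.
    'requestedDateRange': {
        'startDate': {'year': 2021, 'month': 1, 'day': 1},
        'endDate': {'year': 2021, 'month': 2, 'day': 28},
    },
})
```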

When you’re ready to upgrade, check the full release notes to identify any breaking changes. Keep in mind that v202002 is the final version to use int types for Activity and ActivityGroup, and all later versions use type long. After you’ve considered all of the changes, grab the latest version of your client library and update your code.

As always, don't hesitate to reach out to us on the developer forum with any questions.

Improving Mobile App Accessibility with Icon Detection

Voice Access enables users to control their Android device hands-free, using only verbal commands. In order to function properly, it needs on-screen user interface (UI) elements to have reliable accessibility labels, which are provided to the operating system’s accessibility services via the accessibility tree. Unfortunately, in many apps, adequate labels aren’t always available for UI elements such as images and icons, reducing the usability of Voice Access.

The Voice Access app extracts elements from the view hierarchy to localize and annotate various UI elements. It can provide a precise description for elements that have an explicit content description. On the other hand, the absence of a content description can result in many unrecognized elements, undermining the ability of Voice Access to function with some apps.

Addressing this challenge requires a system that can automatically detect icons using only the pixel values displayed on the screen, regardless of whether icons have been given suitable accessibility labels. What little research exists on this topic typically uses classifiers, sometimes combined with language models to infer classes and attributes from UI elements. However, these classifiers still rely on the accessibility tree to obtain bounding boxes for UI elements, and fail when appropriate labels do not exist.

Here, we describe IconNet, a vision-based object detection model that can automatically detect icons on the screen in a manner that is agnostic to the underlying structure of the app being used, launched as part of the latest version of Voice Access. IconNet can detect 31 different icon types (to be extended to more than 70 types soon) based on UI screenshots alone. IconNet is optimized to run on-device for mobile environments, with a compact size and fast inference time to enable a seamless user experience. The current IconNet model achieves a mean average precision (mAP) of 94.2% running at 9 FPS on a Pixel 3A.

Voice Access 5.0: the icons detected by IconNet can now be referred to by their names.

Detecting Icons in Screenshots
From a technical perspective, the problem of detecting icons on app screens is similar to classical object detection, in that individual elements are labelled by the model with their locations and sizes. But, in other ways, it’s quite different. Icons are typically small objects, with relatively basic geometric shapes and a limited range of colors, and app screens widely differ from natural images in that they are more structured and geometrical.

A significant challenge in the development of an on-device UI element detector for Voice Access is that it must be able to run on a wide variety of phones with a range of performance capabilities, while preserving the user’s privacy. For a fast user experience, a lightweight model with low inference latency is needed. Because Voice Access needs to use the labels in response to an utterance from a user (e.g., “tap camera” or “show labels”), inference time needs to be short (<150 ms on a Pixel 3A), with a model size of less than 10 MB.

IconNet
IconNet is based on the novel CenterNet architecture, which extracts features from input images and then predicts appropriate bounding box centers and sizes (in the form of heatmaps). CenterNet is particularly suited here because UI elements consist of simple, symmetric geometric shapes, making it easier to identify their centers than for natural images. The total loss used is a combination of a standard L1 loss for the icon sizes and a modified CornerNet focal loss for the center predictions, the latter of which addresses icon class imbalances between commonly occurring icons (e.g., arrow backward, menu, more, and star) and underrepresented icons (end call, delete, launch apps, etc.).
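As a concrete illustration, the sketch below implements the CornerNet-style penalty-reduced focal loss in TensorFlow. The alpha and beta values are the defaults from the CornerNet paper, not hyperparameters confirmed for IconNet.

```python
import tensorflow as tf

def center_focal_loss(pred, gt, alpha=2.0, beta=4.0):
    """CornerNet-style focal loss over center heatmaps.

    pred, gt: [batch, H, W, num_classes]. gt is 1.0 exactly at object
    centers and decays with an unnormalized Gaussian around them, which
    the (1 - gt)^beta term uses to soften penalties near true centers.
    """
    eps = 1e-6
    pred = tf.clip_by_value(pred, eps, 1.0 - eps)
    pos_mask = tf.cast(tf.equal(gt, 1.0), tf.float32)
    neg_mask = 1.0 - pos_mask

    # Hard (confidently wrong) pixels dominate via the focal weighting.
    pos_loss = -tf.pow(1.0 - pred, alpha) * tf.math.log(pred) * pos_mask
    neg_loss = (-tf.pow(1.0 - gt, beta) * tf.pow(pred, alpha)
                * tf.math.log(1.0 - pred) * neg_mask)

    # Normalize by the number of true centers.
    num_centers = tf.maximum(tf.reduce_sum(pos_mask), 1.0)
    return (tf.reduce_sum(pos_loss) + tf.reduce_sum(neg_loss)) / num_centers
```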

After experimenting with several backbones (MobileNet, ResNet, UNet, etc.), we selected the most promising server-side architecture — Hourglass — as a starting point for designing a backbone tailored for icon and UI element detection. While this architecture is perfectly suitable for server-side models, vanilla Hourglass backbones are not an option for a model that will run on a mobile device, due to their large size and slow inference time. We restricted our on-device network design to a single stack and drastically reduced the width of the backbone. Furthermore, as the detection of icons relies on more local features (compared to real objects), we could further reduce the depth of the backbone without adversely affecting the performance. Ablation studies convinced us of the importance of skip connections and high-resolution features. For example, trimming skip connections in the final layer reduced the mAP by 1.5%, and removing such connections from both the final and penultimate layers resulted in a decline of 3.5% mAP.

IconNet analyzes the pixels of the screen and identifies the centers of icons by generating heatmaps, which provide precise information about the position and type of each icon present on the screen. This enables Voice Access users to refer to these elements by name (e.g., “Tap ‘menu’”).
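A common way to turn such center heatmaps into discrete detections, sketched below as an illustration rather than IconNet’s exact post-processing, is to suppress non-maxima with a 3x3 max-pool and keep the top-scoring peaks:

```python
import tensorflow as tf

def decode_icon_centers(heatmap, top_k=20):
    """Extract (score, y, x, class) tuples from a center heatmap.

    heatmap: [1, H, W, num_classes] of sigmoid scores; a static shape
    is assumed for this sketch.
    """
    # A pixel survives only if it equals the max of its 3x3
    # neighborhood: the standard CenterNet "heatmap NMS".
    hmax = tf.nn.max_pool2d(heatmap, ksize=3, strides=1, padding='SAME')
    peaks = tf.where(tf.equal(hmax, heatmap), heatmap,
                     tf.zeros_like(heatmap))

    # Take the k highest-scoring peaks across positions and classes.
    _, height, width, num_classes = heatmap.shape
    scores, flat_idx = tf.math.top_k(tf.reshape(peaks, [-1]), k=top_k)
    classes = flat_idx % num_classes
    ys = (flat_idx // num_classes) // width
    xs = (flat_idx // num_classes) % width
    return scores, ys, xs, classes
```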

Model Improvements
Once the backbone architecture was selected, we used neural architecture search (NAS) to explore variations on the network architecture and uncover an optimal set of training and model parameters that would balance model performance (mAP) with latency (FLOPs). Additionally, we used Fine-Grained Stochastic Architecture Search (FiGS) to further refine the backbone design. FiGS is a differentiable architecture search technique that uncovers sparse structures by pruning a candidate architecture and discarding unnecessary connections. This technique allowed us to reduce the model size by 20% without any loss in performance, and by 50% with only a minor drop of 0.3% in mAP.

Improving the quality of the training dataset also played an important role in boosting the model performance. We collected and labeled more than 700K screenshots, and in the process, we streamlined data collection by using heuristics and auxiliary models to identify rarer icons. We also took advantage of data augmentation techniques by enriching existing screenshots with infrequent icons.
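As a hypothetical sketch of that augmentation step, pasting a cropped rare-icon image into an existing screenshot (and recording its new bounding box so the labels stay consistent) could look like this:

```python
import random
from PIL import Image

def paste_rare_icon(screenshot: Image.Image, icon: Image.Image):
    """Paste an icon crop at a random position; return the augmented
    screenshot and the icon's bounding box (xmin, ymin, xmax, ymax).
    An illustrative sketch, not the pipeline used in the post.
    """
    x = random.randint(0, screenshot.width - icon.width)
    y = random.randint(0, screenshot.height - icon.height)
    augmented = screenshot.copy()
    # Use the icon's alpha channel as a paste mask when present.
    mask = icon if icon.mode == 'RGBA' else None
    augmented.paste(icon, (x, y), mask)
    return augmented, (x, y, x + icon.width, y + icon.height)
```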

To improve the inference time, we modified our model to run using the Neural Networks API (NNAPI) on a variety of Qualcomm DSPs available on many mobile phones. For this, we converted the model to use 8-bit integer quantization, which gives the additional benefit of reducing the model size. After some experimentation, we used quantization-aware training to quantize the model while matching the performance of a server-side floating point model. The quantized model results in a 6x speed-up (700 ms vs. 110 ms) and a 50% size reduction, while losing only ~0.5% mAP compared to the unquantized model.
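For reference, full 8-bit integer conversion with the TensorFlow Lite converter looks roughly like the sketch below. The post used quantization-aware training (available via the TensorFlow Model Optimization Toolkit) rather than the plain post-training path shown here, and the tiny stand-in model exists only to make the snippet self-contained.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the trained detector; any Keras model demonstrates
# the conversion mechanics.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(320, 320, 3)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
])

def representative_screenshots():
    # Yields sample inputs so the converter can calibrate value ranges.
    for _ in range(8):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_screenshots
# Force full-integer kernels so the model can run through NNAPI on
# integer-only accelerators such as DSPs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('iconnet_int8.tflite', 'wb') as f:
    f.write(tflite_model)
```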

Results
We use traditional object detection metrics (e.g., mAP) to measure model performance. In addition, to better capture the use case of voice-controlled user actions, we define a modified version of a false positive (FP) detection, in which incorrect detections for icon classes that are present on the screen are penalized more heavily. For comparing detections with ground truth, we use center in region of interest (CIROI), another metric we developed for this work, which returns a positive match when the center of the detected bounding box lies inside the ground truth bounding box. This better captures the Voice Access mode of operation, where actions are performed by tapping anywhere in the region of the UI element of interest.
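In code, the CIROI criterion reduces to a simple point-in-box test; a minimal sketch, assuming (xmin, ymin, xmax, ymax) box coordinates:

```python
def ciroi_match(detected_box, gt_box):
    """Return True when the center of the detected bounding box lies
    inside the ground truth box (boxes are (xmin, ymin, xmax, ymax))."""
    cx = (detected_box[0] + detected_box[2]) / 2.0
    cy = (detected_box[1] + detected_box[3]) / 2.0
    return gt_box[0] <= cx <= gt_box[2] and gt_box[1] <= cy <= gt_box[3]

# A slightly shifted detection still counts as a positive match.
assert ciroi_match((12, 10, 30, 28), (10, 10, 32, 30))
```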

We compared the IconNet model with various other mobile-compatible object detectors, including MobileNetEdgeTPU and SSD MobileNet v2. Experiments showed that for a fixed latency, IconNet outperformed the other models in terms of mAP@CIROI on our internal evaluation set.

Model                        mAP@CIROI
IconNet (Hourglass)          96%
IconNet (HRNet)              89%
MobilenetEdgeTPU (AutoML)    91%
SSD Mobilenet v2             88%

The performance advantage of IconNet persists when considering quantized models and models for a fixed latency budget.

Models (Quantized)              mAP@CIROI    Model size    Latency*
IconNet (Currently deployed)    94.20%       8.5 MB        107 ms
IconNet (XS)                    92.80%       2.3 MB        102 ms
IconNet (S)                     91.70%       4.4 MB        45 ms
MobilenetEdgeTPU (AutoML)       88.90%       7.8 MB        26 ms
*Measured on Pixel 3A.

Conclusion and Future Work
We are constantly working on improving IconNet. Among other things, we are interested in increasing the range of elements supported by IconNet to include any generic UI element, such as images, text, or buttons. We also plan to extend IconNet to differentiate between similar looking icons by identifying their functionality. On the application side, we are hoping to increase the number of apps with valid content descriptions by augmenting developer tools to suggest content descriptions for different UI elements when building applications.

Acknowledgements
This project is the result of joint work with Maria Wang, Tautvydas Misiūnas, Lijuan Liu, Ying Xu, Nevan Wichers, Xiaoxue Zang, Gabriel Schubiner, Abhinav Rastogi, Jindong (JD) Chen, Abhanshu Sharma, Pranav Khaitan, Matt Sharifi and Blaise Aguera y Arcas. We sincerely thank our collaborators Robert Berry, Folawiyo Campbell, Shraman Ray Chaudhuri, Nghi Doan, Elad Eban, Marybeth Fair, Alec Go, Sahil Goel, Tom Hume, Cassandra Luongo, Yair Movshovitz-Attias, James Stout, Gabriel Taubman and Anton Vayvod. We are very grateful to Tom Small for assisting us in preparing the post.

Source: Google AI Blog