Concepts users spend 70% more time using the app on tablets than on phones

Posted by the Android team

Concepts is a digital illustration app created by TopHatch that helps creative thinkers bring their visions to life. The app uses an infinitely large canvas format, so its users can sketch, plan, and edit all of their big ideas without limitation, while its vector-based ink provides the precision needed to refine and reorganize their ideas as they go.

For Concepts, having more on-screen real estate means more comfort, more creative space, and a better user experience overall. That’s why the app was specifically designed with large screens in mind. Concepts’ designers and engineers are always exploring new ways to expand the app’s large screen capabilities on Android. Thanks to Android’s suite of developer tools and resources, that’s easier than ever.

Evaluating an expanding market of devices

Large screens are the fastest-growing segment of Android users, with more than 270 million users on tablets, foldables, and ChromeOS devices. It’s no surprise, then, that Concepts, an app that benefits users by providing them with more screen space, was attracted to the format. The Concepts team was also excited about innovation with foldables because having the large-screen experience with greater portability gives users more opportunities to use the app in the ways that work best for them.

The team at Concepts spends a lot of time evaluating new large screen technologies and experiences, trying to find what hardware or software features might benefit the app the most. The team imagines and storyboards several scenarios, shares the best ones with a close-knit beta group, and quickly builds prototypes to determine whether these updates improve the UX for its larger user base.

For instance, Concepts’ designers recently tested the Samsung Galaxy Fold and found that users benefited from having more screen space when the device was folded. With help from the Jetpack WindowManager library, Concepts’ developers implemented a feature to automatically collapse the UI when the Galaxy’s large screen was folded, allowing for more on-screen space than if the UI were expanded.

Foldable UI

Concepts’ first release for Android was optimized for ChromeOS and, because of this, supporting resizable windows was important to the user experience from the very beginning. Initially, the team needed a physical device to test various screen sizes. Now, the Concepts team can use Android’s resizable emulator, which makes testing different screen sizes much easier.

Android’s APIs and toolkit carry the workload

The developers’ goal with Concepts is to make the illustration experience feel as natural as putting pen to paper. For the Concepts team, this meant achieving as close to zero lag as possible between the stylus tip and the lines drawn on the Concepts canvas.

When Concepts’ engineers first created the app, they put a lot of effort into creating low-latency drawing themselves. Now, Android’s graphical APIs eliminate the complexity of creating efficient inking.

“The hardware to support low-latency inking with higher refresh rate screens and more accurate stylus data keeps getting better,” said David Brittain, co-founder and CEO of TopHatch, parent company of Concepts. “Android’s mature set of APIs make it easy.”

Concepts’ engineers also found that the core Android View APIs take care of most of the workload involved in supporting tablets and foldables, and they make heavy use of custom Views and ViewGroups in Concepts. The app’s color wheel, for example, is a custom View that draws to a Canvas and uses Animators for its reveal animation. View, Canvas, and Animator are all classes from the Android SDK.

“Android’s tools and platform are making it easier to address the variety of screen sizes and input methods, with well-structured APIs for developing and increasing the number of choices for testing. Plus, Kotlin allows us to create concise, readable code,” said David.


Concepts’ users prefer large screens

Tablets and foldables represent the bulk of Concepts’ investments and user base, and the company doesn’t see that changing any time soon. Currently, tablets deliver 50% higher revenue per user than smartphones. Tablets also account for eight of the top 10 most frequently used devices among Concepts’ users, with the other two being ChromeOS devices.

Additionally, Concepts’ monthly users spend 70% more time engaging with the app on tablets than on traditional smartphones. The application’s rating is also 0.3 stars higher on tablets.

“We’re looking forward to future improvements in platform usability and customization while increasing experimentation with portable form factors. Continued efforts in this area will ensure high user adoption well into the future,” said David.

Start developing for large screens today

Learn how you can reach a growing audience of users by increasing development for large screens and foldables today.

What it means to be an Android Google Developer Expert

Posted by Yasmine Evjen, Community lead, Android DevRel

The community of Android developers is at the heart of everything we do. Seeing the community come together to build new things, encourage each other, and share their knowledge inspires us to keep pushing the limits of Android.

At the core of this is our Android Google Developer Experts, a global community that comes together to share best practices through speaking, open-source contributions, workshops, and articles. This is a caring community that mentors, supports each other, and isn’t afraid to get their hands dirty with early access Android releases, providing feedback to make it the best release for developers across the globe.

We asked, “What do you love most about being in the #AndroidDev and Google Developer Expert community?”

Gema Socorro says, “I love helping other devs in their Android journey,” and Jaewoong Eum shares the joy of “learning, building, and sharing innovative Android technologies for everyone.”

Hear from the Google Developer Expert Community

We also sat down with Ahmed Tikiwa, Annyce Davis, Dinorah Tovar, Harun Wangereka, Madona S Wambua, and Zarah Dominguez to hear about their journeys as Android developers and GDEs, and what the role means to them. Watch the conversation on The Android Show below.

Annyce, VP of Engineering at Meetup, shares, “the community is a great sounding board to solve problems, and helps me stay technical and keep learning.”

Does the community inspire you? Get involved by speaking at your local developer conferences, sharing your latest Android projects, and not being afraid to experiment with new technology. This year, we’re spotlighting community projects! Tag us in your blogs, videos, tips, and tricks to be featured in the latest #AndroidSpotlight.

Active in the #AndroidDev community? Become an Android Google Developer Expert.

A group of Android developers and a baby, standing against a hedge of lush greenery, smiling

Long Term Support Channel Update for ChromeOS

LTS-108 is being updated in the LTS channel to 108.0.5359.224 (Platform Version: 15183.86.0) for most ChromeOS devices. Want to know more about Long Term Support? Click here.


This update contains multiple Security fixes, including:


1415366 Critical CVE-2023-0941 Use after free in Prompts
1417176 High CVE-2023-1215 Type Confusion in CSS
1413628 High CVE-2023-1218 Use after free in WebRTC
1415328 High CVE-2023-1219 Heap buffer overflow in Metrics
1417185 High CVE-2023-1220 Heap buffer overflow in UMA
1407701 High CVE-2023-0931 Use after free in Video


Giuliana Pritchard 

Google Chrome OS

Dev Channel Update for ChromeOS / ChromeOS Flex

The Dev channel is being updated to OS version: 15389.0.0, Browser version: 113.0.5650.0 for most ChromeOS devices.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.

Matt Nelson,

Google ChromeOS

Chrome Dev for Android Update

Hi everyone! We've just released Chrome Dev 113 (113.0.5668.0) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Erhu Akpobaro
Google Chrome

Beta Channel Update for ChromeOS and Flex

The Beta channel is being updated to ChromeOS version: 15359.31.0 and Browser version: 112.0.5615.37 for most devices. It contains a number of bug fixes and security updates.

If you find new issues, please let us know in one of the following ways:

  1. File a bug
  2. Visit our ChromeOS communities
    1. General: Chromebook Help Community
    2. Beta Specific: ChromeOS Beta Help Community
  3. Report an issue or send feedback on Chrome

Interested in switching channels? Find out how.


Google ChromeOS

Play Commerce prevented over $2 billion in fraudulent and abusive transactions in 2022

Posted by Sheenam Mittal, Product Manager, Google Play

Google Play Commerce enables you to monetize your apps and games at scale in over 170 markets, without the complexity and time required to run your own global commerce platform. It enables you to easily transact with millions of users around the world and gives users trusted and safe ways to pay for your digital products and content. Ensuring that developers and users have a secure purchase experience has been a key pillar of Play Commerce, and we achieve this by continuously monitoring for and preventing bad actors looking to defraud and abuse your apps.

Preventing fraud and securing purchases

In 2022, we prevented over $2 billion in fraudulent and abusive transactions. Bad actors looking to abuse apps employ an array of strategies across both one-time purchases and auto-renewing payments. For example, they may attempt to purchase an item in your app with a compromised form of payment, request a refund for an in-app purchase that has already been consumed or sold, or use scammed gift cards for purchases. When bad actors combine or coordinate such attempts, the result can be large-scale abuse of your app. Preventing this kind of fraud and abuse requires a comprehensive approach, consisting of automated solutions and an array of internal monitoring tools combined with human expertise.

Empower developers with tools to mitigate app abuse

Information asymmetry between Google Play and developers is commonly exploited by bad actors. Two of the most effective solutions you can implement to help address this are the Voided Purchases API and Obfuscated Account ID. Over 70% of our top 200 monetizing developers have integrated these solutions to reduce fraud and abuse in their apps.

  • Voided Purchases API provides you with a list of each user’s in-app and subscription orders that have been voided. You can then implement revocation to prevent the user from accessing the products from those orders; a minimal server-side sketch follows this list.
Benefits of the Voided Purchases API: reducing losses, preserving the app economy, and securing game integrity
  • Obfuscated Account ID helps Play detect fraudulent transactions, such as many devices making purchases on the same account in a short period of time.
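
To make the Voided Purchases API concrete, here is a minimal server-side sketch in Python using the google-api-python-client library. The package name, key file, and revoke_entitlement helper are hypothetical placeholders for your own app and backend logic, not official sample code.

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Authenticate with a service account that has access to your Play Console.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical key file
    scopes=["https://www.googleapis.com/auth/androidpublisher"])
publisher = build("androidpublisher", "v3", credentials=credentials)

# Fetch the voided in-app and subscription orders for the app.
response = publisher.purchases().voidedpurchases().list(
    packageName="com.example.app").execute()  # hypothetical package name

for voided in response.get("voidedPurchases", []):
    # Revoke access to whatever entitlement is tied to each voided order;
    # revoke_entitlement stands in for your own backend logic.
    revoke_entitlement(voided["orderId"], voided["purchaseToken"])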

You can also use the Play Integrity API to protect your apps and games from potentially risky and fraudulent interactions, such as cheating and unauthorized access. You call the Play Integrity API at important moments to check that user actions or server requests are coming from your unmodified app, installed by Google Play, running on a genuine Android device. If something is wrong, your app’s backend server can respond with appropriate actions to prevent attacks and reduce abuse. Developers using the API have seen an average of over 50% reduction in unauthorized access of their apps and games. Stay tuned for new highly requested feature updates.

Flowchart of how the Play Integrity API works: a user action or server request prompts the app to request a Play Integrity API verdict; Play returns the verdict, and your backend server decides what to do next.
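
To illustrate the backend half of this flow, here is a hedged Python sketch that decodes an integrity verdict on the server and gates a request on the result; it assumes the generated google-api-python-client surface for the v1.decodeIntegrityToken method, and the package name and key file are hypothetical.

from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical key file
    scopes=["https://www.googleapis.com/auth/playintegrity"])
integrity = build("playintegrity", "v1", credentials=credentials)

def verdict_is_trustworthy(integrity_token: str) -> bool:
    # Decode the token that the app obtained from the Play Integrity API.
    response = integrity.v1().decodeIntegrityToken(
        packageName="com.example.app",  # hypothetical package name
        body={"integrityToken": integrity_token}).execute()
    payload = response["tokenPayloadExternal"]
    # Require that the app is the Play-recognized binary and the device
    # passes basic device integrity checks before serving the request.
    app_ok = payload["appIntegrity"]["appRecognitionVerdict"] == "PLAY_RECOGNIZED"
    device_ok = "MEETS_DEVICE_INTEGRITY" in payload["deviceIntegrity"].get(
        "deviceRecognitionVerdict", [])
    return app_ok and device_ok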

Looking forward

This month, we launched Purchases.product.consume, which allows you to consume in-app items using the Play Developer API, reducing the risk of client-side abuse by shifting more business logic to your secure backend. For example, if a bad actor purchases an item from your app but tampers with the client side so the purchase is never acknowledged, the purchase will be automatically refunded three days after purchase. Consuming the purchase server-side prevents this type of abuse; a minimal sketch follows.
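
As a sketch of what server-side consumption might look like, the snippet below calls purchases.products.consume through the Play Developer API's Python client; it assumes the client library exposes the consume method, and the package name, product ID, and key file are hypothetical.

from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical key file
    scopes=["https://www.googleapis.com/auth/androidpublisher"])
publisher = build("androidpublisher", "v3", credentials=credentials)

def consume_purchase(purchase_token: str) -> None:
    # After verifying the purchase and granting the item from your backend,
    # consume it server-side so no client-side acknowledgement is needed.
    publisher.purchases().products().consume(
        packageName="com.example.app",  # hypothetical package name
        productId="coins_100",          # hypothetical in-app product
        token=purchase_token).execute()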

Google Play Commerce is committed to providing developers and users with a secure purchase experience. Learn more about how to prevent bad actors from harming users and abusing your app in this guide, and explore other 2023 initiatives that help keep Android and Google Play safe.

Add or remove client-side encryption from a Google Doc

What’s changing 

You can now choose to add client-side encryption to an existing document or remove it from an already encrypted document (File > Make a copy > Add/Remove additional encryption). This update gives you the flexibility to control encryption as your documents and projects evolve.



Getting started

Rollout pace


Availability

  • Available to Google Workspace Enterprise Plus, Education Standard and Education Plus customers

Resources


Visual language maps for robot navigation

People are excellent navigators of the physical world, due in part to their remarkable ability to build cognitive maps that form the basis of spatial memory — from localizing landmarks at varying ontological levels (like a book on a shelf in the living room) to determining whether a layout permits navigation from point A to point B. Building robots that are proficient at navigation requires an interconnected understanding of (a) vision and natural language (to associate landmarks or follow instructions), and (b) spatial reasoning (to connect a map representing an environment to the true spatial distribution of objects). While there have been many recent advances in training joint visual-language models on Internet-scale data, figuring out how to best connect them to a spatial representation of the physical world that can be used by robots remains an open research question.

To explore this, we collaborated with researchers at the University of Freiburg and the University of Technology Nuremberg to develop Visual Language Maps (VLMaps), a map representation that directly fuses pre-trained visual-language embeddings into a 3D reconstruction of the environment. VLMaps, which is set to appear at ICRA 2023, is a simple approach that allows robots to (1) index visual landmarks in the map using natural language descriptions, (2) employ Code as Policies to navigate to spatial goals, such as "go in between the sofa and TV" or "move three meters to the right of the chair", and (3) generate open-vocabulary obstacle maps — allowing multiple robots with different morphologies (mobile manipulators vs. drones, for example) to use the same VLMap for path planning. VLMaps can be used out-of-the-box without additional labeled data or model fine-tuning, and outperforms other zero-shot methods by over 17% on challenging object-goal and spatial-goal navigation tasks in Habitat and Matterport3D. We are also releasing the code used for our experiments along with an interactive simulated robot demo.



VLMaps can be built by fusing pre-trained visual-language embeddings into a 3D reconstruction of the environment. At runtime, a robot can query the VLMap to locate visual landmarks given natural language descriptions, or to build open-vocabulary obstacle maps for path planning.


Classic 3D maps with a modern multimodal twist

VLMaps combines the geometric structure of classic 3D reconstructions with the expression of modern visual-language models pre-trained on Internet-scale data. As the robot moves around, VLMaps uses a pre-trained visual-language model to compute dense per-pixel embeddings from posed RGB camera views, and integrates them into a large map-sized 3D tensor aligned with an existing 3D reconstruction of the physical world. This representation allows the system to localize landmarks given their natural language descriptions (such as "a book on a shelf in the living room") by comparing their text embeddings to all locations in the tensor and finding the closest match. Querying these target locations can be used directly as goal coordinates for language-conditioned navigation, as primitive API function calls for Code as Policies to process spatial goals (e.g., code-writing models interpret "in between" as arithmetic between two locations), or to sequence multiple navigation goals for long-horizon instructions.
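
As a rough illustration of the localization step (a simplified numpy sketch, not the released VLMaps code), suppose each cell of a 2D projection of the map stores a visual-language embedding; a text query is then matched to the cell with the highest cosine similarity. The embed_text function stands in for a pre-trained visual-language text encoder such as CLIP's.

import numpy as np

def locate_landmark(vlmap: np.ndarray, text_embedding: np.ndarray):
    # vlmap: (H, W, D) grid of per-cell embeddings; returns (row, col).
    # Normalize both sides so dot products become cosine similarities.
    cells = vlmap / (np.linalg.norm(vlmap, axis=-1, keepdims=True) + 1e-8)
    query = text_embedding / (np.linalg.norm(text_embedding) + 1e-8)
    similarity = cells @ query  # (H, W) similarity map over the environment
    return np.unravel_index(np.argmax(similarity), similarity.shape)

# goal = locate_landmark(vlmap, embed_text("a book on a shelf in the living room"))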


# move first to the left side of the counter, then move between the sink and the oven, then move back and forth to the sofa and the table twice.
robot.move_to_left('counter')
robot.move_in_between('sink', 'oven')
pos1 = robot.get_pos('sofa')
pos2 = robot.get_pos('table')
for i in range(2):
    robot.move_to(pos1)
    robot.move_to(pos2)

# move 2 meters north of the laptop, then move 3 meters rightward.
robot.move_north('laptop')
robot.face('laptop')
robot.turn(180)
robot.move_forward(2)
robot.turn(90)
robot.move_forward(3)

VLMaps can be used to return the map coordinates of landmarks given natural language descriptions, which can be wrapped as a primitive API function call for Code as Policies to sequence multiple goals for long-horizon navigation instructions.
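
One plausible way to wire such queries into generated code like the example above (a sketch with a hypothetical Robot class, reusing locate_landmark and embed_text from the earlier snippet) is to expose the map lookup as the primitive the code-writing model calls:

class Robot:
    def __init__(self, vlmap, embed_text):
        self.vlmap = vlmap
        self.embed_text = embed_text

    def get_pos(self, name: str):
        # Map a natural language landmark name to map coordinates.
        return locate_landmark(self.vlmap, self.embed_text(name))

    def move_to(self, pos):
        # Hand the goal coordinates to the robot's path planner (not shown).
        ...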


Results

We evaluate VLMaps on challenging zero-shot object-goal and spatial-goal navigation tasks in Habitat and Matterport3D, without additional training or fine-tuning. The robot is asked to navigate to four subgoals sequentially specified in natural language. We observe that VLMaps significantly outperforms strong baselines (including CoW and LM-Nav) by up to 17% due to its improved visuo-lingual grounding.


                     Number of subgoals in a row     Independent
Method               1       2       3       4       subgoals

LM-Nav               26      4       1       1       26
CoW                  42      15      7       3       36
CLIP MAP             33      8       2       0       30
VLMaps (ours)        59      34      22      15      59
GT Map               91      78      71      67      85

The VLMaps approach performs favorably over alternative open-vocabulary baselines on multi-object navigation (success rate [%]) and specifically excels on longer-horizon tasks with multiple subgoals.

A key advantage of VLMaps is its ability to understand spatial goals, such as "go in between the sofa and TV" or "move three meters to the right of the chair". Experiments on long-horizon spatial-goal navigation show an improvement of up to 29%. To gain more insight into the regions of the map that are activated for different language queries, we visualize the heatmaps for the object type "chair".

The improved visual-language grounding of VLMaps, which produces significantly fewer false positives than competing approaches, enables the robot to navigate zero-shot to landmarks using language descriptions.

Open-vocabulary obstacle maps

A single VLMap of the same environment can also be used to build open-vocabulary obstacle maps for path planning. This is done by taking the union of binary-thresholded detection maps over a list of landmark categories that the robot can or cannot traverse (such as "tables", "chairs", "walls", etc.). This is useful since robots with different morphologies may move around in the same environment differently. For example, "tables" are obstacles for a large mobile robot, but may be traversable for a drone. We observe that using VLMaps to create multiple robot-specific obstacle maps improves navigation efficiency by up to 4% (measured in terms of task success rates weighted by path length) over using a single shared obstacle map for each robot. See the paper for more details.
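
Continuing the earlier numpy sketch (illustrative only; the category names and 0.5 threshold are assumptions, not the paper's values), an embodiment-specific obstacle map is just the union of binary-thresholded similarity maps over that robot's obstacle categories:

import numpy as np

def obstacle_map(vlmap, embed_text, obstacle_categories, threshold=0.5):
    cells = vlmap / (np.linalg.norm(vlmap, axis=-1, keepdims=True) + 1e-8)
    obstacles = np.zeros(vlmap.shape[:2], dtype=bool)
    for name in obstacle_categories:
        query = embed_text(name)
        query = query / (np.linalg.norm(query) + 1e-8)
        obstacles |= (cells @ query) > threshold  # union of thresholded maps
    return obstacles

# A large mobile robot treats tables as obstacles; a drone may not:
# locobot_map = obstacle_map(vlmap, embed_text, ["tables", "chairs", "walls"])
# drone_map = obstacle_map(vlmap, embed_text, ["walls"])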

Experiments with a mobile robot (LoCoBot) and drone in AI2THOR simulated environments. Left: Top-down view of an environment. Middle columns: Agents’ observations during navigation. Right: Obstacle maps generated for different embodiments with corresponding navigation paths.

Conclusion

VLMaps takes an initial step towards grounding pre-trained visual-language information onto spatial map representations that can be used by robots for navigation. Experiments in simulated and real environments show that VLMaps can enable language-using robots to (i) index landmarks (or spatial locations relative to them) given their natural language descriptions, and (ii) generate open-vocabulary obstacle maps for path planning. Extending VLMaps to handle more dynamic environments (e.g., with moving people) is an interesting avenue for future work.


Open-source release

We have released the code needed to reproduce our experiments and an interactive simulated robot demo on the project website, which also contains additional videos and code to benchmark agents in simulation.


Acknowledgments

We would like to thank the co-authors of this research: Chenguang Huang and Wolfram Burgard.

Source: Google AI Blog