
Using Deep Learning to Improve Usability on Mobile Devices



Tapping is the most commonly used gesture on mobile interfaces, and it is used to trigger all kinds of actions, from launching an app to entering text. While the style of clickable elements (e.g., buttons) in traditional desktop graphical user interfaces is fairly conventional, on mobile interfaces the diversity of styles can still make it difficult for people to distinguish tappable from non-tappable elements. This can result in false affordances (e.g., a feature that could be mistaken for a button) and a lack of discoverability, leading to user frustration, uncertainty, and errors. To avoid this, interface designers can conduct a study or a visual affordance test to clarify the tappability of items in their interfaces. However, such studies are time-consuming and their findings are often limited to a specific app or interface design.

In our CHI'19 paper, "Modeling Mobile Interface Tappability Using Crowdsourcing and Deep Learning", we introduced an approach for modeling the usability of mobile interfaces at scale. We crowdsourced a task to measure how users perceive the tappability of UI elements across a range of mobile apps. Our model's predictions were consistent with the user group at the ~90% level, demonstrating that a machine learning model can effectively estimate the perceived tappability of interface elements in a design without the need for expensive and time-consuming user testing.

Predicting Tappability with Deep Learning
Designers often use visual properties such as the color or depth of an element to signify its availability for interaction on interfaces, e.g., the blue color and underline of a link. While these common signifiers are useful, it is not always clear when to apply them in each specific design setting. Furthermore, with design trends evolving, traditional signifiers are constantly being altered and challenged, potentially causing user uncertainty and mistakes.

To understand how users perceive this changing landscape, we analyzed the potential signifiers affecting tappability in real mobile apps: element type (e.g., check boxes, text boxes, etc.), location, size, color, and words. We started by crowdsourcing volunteers to label the perceived clickability of ~20,000 unique interface elements from ~3,500 apps. With the exception of text boxes, type signifiers yielded low uncertainty in user-perceived tappability. The location signifier refers to the position of an element on the screen and is informed by the common layout design in mobile apps, as demonstrated in the figure below.
Heatmaps displaying the accuracy of tappable and non-tappable elements by location, where warmer colors represent areas of higher accuracy. Users labeled non-tappable elements more accurately towards the upper center of the interface, and tappable elements towards the bottom center of the interface.
The impact of element size was relatively weak, though it did reveal confusion around large non-tappable elements. Users tended to associate bright colors and short word counts with tappable elements, though word semantics also played a significant role.

We used these labels to train a simple deep neural network that predicts the likelihood that a user will perceive an interface element as tappable versus non-tappable. For a given interface element, the model uses a range of features, including the spatial context of the element on the screen (location), the semantics and functionality of the element (words and type), and its visual appearance (size as well as raw pixels). The model applies a convolutional neural network (CNN) to extract features from raw pixels, and uses learned semantic embeddings to represent text content and element properties. The concatenation of all these features is then fed to a fully-connected network layer, the output of which produces a binary classification of an element's tappability.
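
To make this concrete, here is a minimal sketch of such a model in PyTorch. This is an illustration of the architecture as described, not the paper's implementation: all layer widths, vocabulary sizes, and feature dimensions below are placeholder assumptions.

```python
# Minimal sketch of a tappability model: a CNN over raw pixels plus
# learned embeddings for words and element type, concatenated with
# location/size features and fed to a fully-connected head.
# All dimensions are illustrative, not the paper's values.
import torch
import torch.nn as nn

class TappabilityModel(nn.Module):
    def __init__(self, vocab_size=10000, num_types=20, embed_dim=64):
        super().__init__()
        # Visual appearance: CNN over a cropped screenshot of the element.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Semantics and functionality: embeddings for text and element type.
        self.word_embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools word vectors
        self.type_embed = nn.Embedding(num_types, embed_dim)
        # Fully-connected head over the concatenated features:
        # pixels (32) + words (64) + type (64) + location/size (4).
        self.head = nn.Sequential(
            nn.Linear(32 + 2 * embed_dim + 4, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, pixels, word_ids, type_id, loc_size):
        feats = torch.cat([
            self.cnn(pixels),           # visual appearance
            self.word_embed(word_ids),  # text content
            self.type_embed(type_id),   # element type
            loc_size,                   # normalized x, y, width, height
        ], dim=1)
        return torch.sigmoid(self.head(feats))  # P(perceived tappable)
```

The notable design choice is the concatenation: each signifier keeps its own feature extractor, and the fully-connected head learns how the signifiers interact.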

Evaluation of the Model
The model allowed us to automatically diagnose mismatches between the tappability of each interface element as perceived by a user—predicted by our model—and the intended or actual tappable state of the element specified by the developer or designer. In the example below, our model predicts that there is a 73% chance that a user would think labels such as "Followers" or "Following" are tappable, while these interface elements are in fact not programmed to be tappable.
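
A hedged sketch of how such a diagnosis might be scripted, assuming each element record carries a flag for its actual programmed state (the field name and data layout are hypothetical):

```python
# Flag elements whose perceived tappability (model prediction) disagrees
# with the tappable state the developer actually programmed.
def find_mismatches(elements, model_probs, threshold=0.5):
    mismatches = []
    for elem, p in zip(elements, model_probs):
        perceived_tappable = p >= threshold
        if perceived_tappable != elem["actually_tappable"]:  # hypothetical field
            mismatches.append((elem, p))  # keep the probability for triage
    return mismatches
```
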
To understand how our model behaves compared to human users, particularly when there is ambiguity in human perception, we generated a second, independent dataset by crowdsourcing an effort among 290 volunteers to label each of 2,000 unique interface elements with respect to their perceived tappability. Each element was labeled independently by five different users. We found that more than 40% of the elements in our sample were labeled inconsistently by volunteers. Our model matches this uncertainty in human perception quite well, as demonstrated in the figure below.
The scatterplot of the tappability probability predicted by the model (the Y axis) versus the consistency in the human user labels (the X axis) for each element in the consistency dataset.
When users agree on an element's tappability, our model tends to give a definite answer—a probability close to 1 for tappable and close to 0 for non-tappable. When users are less consistent on an element (towards the middle of the X axis), our model is also less certain in its decision. Overall, our model achieved reasonable accuracy in matching human perception, identifying tappable UI elements with a mean precision of 90.2% and recall of 87.0%.
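
For illustration, precision and recall against human perception could be computed by comparing the model's decision with the majority vote of the five labels per element, roughly as follows (the data layout is an assumption, not the paper's evaluation code):

```python
# labels_per_element: for each element, the five 0/1 human labels.
# model_probs: the model's predicted probability per element.
# Assumes at least one positive prediction and one positive label.
def evaluate(labels_per_element, model_probs, threshold=0.5):
    tp = fp = fn = 0
    for labels, p in zip(labels_per_element, model_probs):
        human_tappable = sum(labels) > len(labels) / 2  # majority vote
        pred_tappable = p >= threshold
        if pred_tappable and human_tappable:
            tp += 1
        elif pred_tappable and not human_tappable:
            fp += 1
        elif human_tappable:  # not predicted, but humans say tappable
            fn += 1
    return tp / (tp + fp), tp / (tp + fn)  # precision, recall
```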

Predicting tappability is merely one example of what we can do with machine learning to solve usability issues in user interfaces. There are many other challenges in interaction design and user experience research where deep learning models can offer a vehicle to distill large, diverse user experience datasets and advance scientific understanding of interaction behaviors.

Acknowledgements
This research was a joint work of Amanda Swangson, summer intern at Google, and Yang Li, a Research Scientist in Deep Learning and Human Computer Interaction.

Source: Google AI Blog


User experience tips to help you design your app to engage users and drive conversions

By Jenny Gove, Senior Staff UX Researcher, Google Play

We know you work hard to acquire users and grow your customer base, which can be challenging in a crowded market. That's why many of you have told us you find tools like store listing experiments and universal app campaigns valuable. It's equally important to keep customers engaged from the beginning. Great design and delightful user experiences are fundamental to doing just that.

We partnered with AnswerLab to conduct comprehensive user experience research across a variety of verticals, including e-commerce, insurance, travel, food ordering, ticket sales and services, and financial management. The resulting insights may help you increase engagement and conversion by providing guidance on useful and usable functionality.

The best app experiences seamlessly guide users through their tasks with efficient navigation, search, forms, registration and purchasing. They provide great e-commerce facilities and integrate effective ordering and payment systems. Ultimately, an engaging app begins with attention to usability in all of these areas. Learn tips on:

  • Navigation & Exploration
  • In-App Search
  • Commerce & Conversions
  • Registration
  • Form Entry
  • Usability and Comprehension

You can read the full article, "Design your app to drive conversions", on the Android Developers website, complete with links to developer resources. Also get the Playbook for Developers app to stay up to date with features and best practices that will help you grow a successful business on Google Play.

Three ways AdMob makes UX a priority with rewarded video

At AdMob, we know how important user experience is to creating a great app. Non-intrusive ads make for happy users and higher ad engagement rates, and we think rewarded videos are the next step in meeting and exceeding a user's expectations for a great experience. Here are three ways AdMob can help improve your user experience with rewarded video ads.

1. Consistency

With reliable UX patterns, you’ll form clear breaks in your apps where users are expecting to see engaging ads. And when it comes to rewarded video, ads that appear at just the right moment will provide a mutual benefit to both user and publisher. For example, in a gaming app, you might want to reward your user with extra lives in exchange for watching a video at a moment when the game would otherwise end. Your users will thank you for it!

2. No tricks

A rewarded video is still an ad, so it is important to make the value exchange between user and publisher transparent. Users are presented with a clear description of the action required of them and what they will get in return, before choosing whether to opt in to view the rewarded video. And while advertisers pay for an install, the option to install an app as a result of watching a rewarded video is not incentivized: the user is rewarded for viewing the message, and the action to install is optional.

3. Optimum engagement

We use Firebase Analytics to help you understand your audience better. With Firebase, A/B testing is simple and can ensure you are getting the most out of your exchange with the user. Reward value, ad frequency, and in-app ad placement all contribute to a successful rewarded strategy that results in happy users.

Until next time, be sure to stay connected on all things AdMob by following our Twitter, LinkedIn and Google+ pages.

Source: Inside AdMob


The Native Way: 4 Ways to Make UX a Priority with Native Ads

At AdMob, we know how important user experience is to creating a great app. Consistent patterns, refreshing simplicity, and polished, thoughtful design make for happy users and potentially higher ad engagement rates. Native advertising is the next step in meeting users' UX expectations. Native ads are less jarring than traditional ads and fit in with app content more naturally, providing a better user experience. Now, that's smart business.

Here are four ways you can make UX a priority both within your app and ad experience.

1. Consistency

Sometimes it’s good to be predictable. By sticking to consistent UX patterns in your app, you can help users focus less on navigating and more on your app’s content and value. Don’t surprise your user with new elements – stick to a steady user flow like swiping left on a news app to exit an article or navigate the headline feed. Use consistent design elements like dedicated font sizes, colors, buttons, and screen sizes.

Same goes for your users’ ad experience. With reliable UX patterns, you’ll form clear breaks in your apps where users are expecting to see engaging ads. In the example of the news app, you might also want to use that swipe left feature to allow users to seamlessly dismiss ads. This also applies to ad styling. For example, if you use 14px, Lato, bold, dark grey for prominent text, then use that font for your ad’s headline, as well. The result? Users will expect that styling is dedicated to important text.

2. Clarity

Nobody likes clutter. Simplicity in your app says you're not wasting your users' time by throwing every possible option at them without thinking about what they really need. Simplify your app by uncluttering your screen, writing concise copy, keeping design legible and well-spaced, and providing single call-to-action buttons where possible. Likewise, clean, beautiful, single call-to-action ads will communicate that you understand your users and help gain their trust.

3. No tricks

Let’s be clear, a native ad is still an ad. Don’t try to distort or overlap ad components (there’s no quicker way to offend a user!). At Google, we care about building trust in the app advertising ecosystem. For instance, we have extended our accidental click protections to native ad formats (including fast clicks and edge clicks) in order to prevent users from a slip of the finger and deliver greater value to advertisers.

4. Thoughtful design

Thoughtful details in your app are important to let your users know you care. That means sharp imagery, curated fonts, specced margins and quick loading times. It's important to use the same level of care and polish for the small details in your native ads design. Simple, intuitive and well-designed native ads can help you say 'thanks' to your valuable user base.

Until next time, be sure to stay connected on all things AdMob by following our Twitter, LinkedIn and Google+ pages.

Posted by Chris Jones, Social Team, AdMob.

Source: Inside AdMob


How to measure translation quality in your user interfaces



Worldwide, there are about 200 languages that are spoken by at least 3 million people each. In this global context, software developers often need to translate their user interfaces into many languages. While graphical user interfaces have evolved substantially compared to text-based user interfaces, they still rely heavily on textual information. The perceived language quality of translated user interfaces (UIs) can have a significant impact on the overall quality and usability of a product. But how can software developers and product managers learn about the quality of a translation when they don't speak the language themselves?

Key information in interaction elements and content is mostly conveyed through text. This can be illustrated by removing the text elements from a UI, as shown in the figure below.
Three versions of the YouTube UI: (a) the original, (b) YouTube without text elements, and (c) YouTube without graphic elements. It becomes apparent that the textless version is stripped of the most useful information: it is almost impossible to choose a video to watch, and navigating the site is impossible.
In "Measuring user rated language quality: Development and validation of the user interface Language Quality Survey (LQS)", recently published in the International Journal of Human-Computer Studies, we describe the development and validation of a survey that enables users to provide feedback about the language quality of the user interface.

UIs are generally developed in one source language and translated afterwards string by string. The translation process is prone to errors and might introduce problems that are not present in the source, most often due to difficulties in the translation process itself. For example, the word "auto" can be translated into French as automatique (automatic) or automobile (car), which obviously have very different meanings; translators might choose the wrong term if context is missing during the process. Another problem arises from words that behave as a verb when placed on a button but as a noun when part of a label. For example, "access" can stand for "you have access" (as a label) or "you can request access" (as a button).

Further pitfalls include gender, prepositions without context, and other characteristics of the source text that can influence translation. These problems are sometimes aggravated by the fact that translations are produced by different linguists at different points in time. Such mistranslations can negatively affect not only trustworthiness and brand perception, but also the acceptance of the product and its perceived usefulness.

This work was motivated by anecdotal evidence from 2012, when the YouTube internationalization team suspected that some language versions of YouTube might benefit from improvement efforts. While expert evaluations led to significant improvements in text quality, they were expensive and time-consuming. Therefore, we decided to develop a survey that enables users to provide feedback about the language quality of the user interface, as a scalable way of gathering quantitative data on that quality.

The Language Quality Survey (LQS) contains 10 questions about language quality. The first five questions form the factor "Readability", which describes how natural and smooth the text is to read. For instance, one question targets ease of understanding ("How easy or difficult to understand is the text used in the [product name] interface?"). Questions 6 to 9 summarize the frequency of (in)consistencies in the text, forming the factor "Linguistic Correctness". The final question is a global item on overall language quality. The full survey can be found in the publication.
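
As an illustration of how responses might be aggregated, here is a small sketch that scores one completed survey. The grouping of questions follows the description above; the rating scale and data layout are assumptions, and the exact items are in the publication.

```python
# response: the 10 LQS ratings from one user, as numbers on a common scale.
def lqs_scores(response):
    return {
        "readability": sum(response[0:5]) / 5,             # questions 1-5
        "linguistic_correctness": sum(response[5:9]) / 4,  # questions 6-9
        "global": response[9],                             # global item
    }
```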

Case study: applying the LQS in the field

As the LQS was developed to discover problematic translations of the YouTube interface and allow focused quality improvement efforts, it was made available in over 60 languages and data were gathered for all these versions of the YouTube interface. To understand the quality of each UI version, we compared the results for the translated versions to the source language (here: US English). We first inspected the global item in combination with Linguistic Correctness and Readability, then inspected each item separately to understand which aspects of Linguistic Correctness or Readability showed worse (or better) values. Here are some results (a sketch of the baseline comparison follows the list):
  • The data revealed that about one third of the languages showed subpar language quality levels when compared to the source language.
  • To understand the source of these problems and fix them, we analyzed the qualitative feedback users had provided (whenever someone selected one of the two lowest scale points, indicating a problem with the language, a text box surfaced asking them to provide examples or links illustrating the issue).
  • The analysis of these comments provided linguists with valuable feedback of various kinds. For instance, users pointed to confusing terminology, untranslated words that were missed during translation, typographical or grammatical problems, words that were translated but are commonly used in English, or screenshots in help pages that were in English and needed to be localized. Some users also pointed to readability aspects such as sections with an old-fashioned or overly formal tone, overly informal translations, complex technical or legal wording, unnatural translations, or rather lengthy sections of text. In some languages, users also pointed to text that was too small or criticized the readability of the font that was used.
  • In parallel, in-depth expert reviews (so-called “language find-its”) were organized. In these sessions, a group of experts for each language met and screened all of YouTube to discover aspects of the language that could be improved and decided on concrete actions to fix them. By using the LQS data to select target languages, it was possible to reduce the number of language find-its to about one third of the original estimation (if all languages had been screened).
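
To make the baseline comparison concrete, here is a hedged sketch of how per-language scores might be checked against the source language. The margin and data layout are illustrative assumptions, not the analysis used in the study:

```python
# scores_by_lang: mean LQS score per UI language, e.g. {"en-US": 6.1, ...}.
def flag_subpar_languages(scores_by_lang, source_lang="en-US", margin=0.5):
    baseline = scores_by_lang[source_lang]
    return {lang: score for lang, score in scores_by_lang.items()
            if lang != source_lang and score < baseline - margin}
```

Languages falling below the flagging threshold would then be candidates for a language find-it.
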
LQS has since been successfully adapted and used for various Google products such as Docs, Analytics, or AdWords. We have found the LQS to be a reliable, valid and useful tool to approach language quality evaluation and improvement. The LQS can be regarded as a small piece in the puzzle of understanding and improving localization quality. Google is making this survey broadly available, so that everyone can start improving their products for everyone around the world.