Artificial Intelligence · 2019-04-05

Study indicates machine learning can tackle usability issues in mobile user interfaces – AI

Ever wondered what artificial intelligence perception is all about? Mobile apps are built to be interactive, and element styling is meant to signal to users which items are “tappable” and which are not. Developers employ various perceptual cues, such as color, size, and position, to mark an area that can be tapped.

(Tapping is the most commonly used gesture on mobile interfaces and triggers all kinds of actions, from launching an app to entering text.)

However, designers are swayed by current trends as well as personal preference when structuring their layouts. The problem is that what a developer regards as a clear indicator may differ from what a user does, especially if the designer departs from well-known conventions.

For instance, if an app’s color theme is grey, red, and black, the designer might use grey to mark a tappable area rather than the more commonly recognized blue. Mismatched perceptions of interactivity are a frequent reason apps fail, so developers and businesses go to great lengths to test their applications with groups of users who match the demographics of their intended customers. This is a costly and time-consuming process, so if machines could accurately test applications built for humans, it would ease these constraints considerably.

The question is, can it be done?

A recent study comparing machine and human perception of interactive elements in mobile apps, conducted by Yang Li, a research scientist working on deep learning and human-computer interaction at Google, concludes that it is possible to teach machines to predict user perception. Together with Amanda Swearngin, Li crowdsourced feedback from users on how interactive they perceived the element styling of a broad selection of apps to be.

The study is published as a CHI’19 paper titled “Modeling Mobile Interface Tappability Using Crowdsourcing and Deep Learning”, and it sets out to determine whether machines can accurately predict which elements in an app human users would consider tappable. The researchers crowdsourced tappability ratings for roughly 20,000 interface elements drawn from about 3,500 real-world apps, covering items such as the words ‘follow’ and ‘following’ as well as other common UI elements like checkboxes and images.
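
The article doesn’t spell out a data schema, but conceptually each crowdsourced judgment pairs a UI element’s styling features with a tappable/not-tappable label. A minimal, hypothetical sketch in Python (the field names are illustrative assumptions, not the authors’ format):

```python
from dataclasses import dataclass

@dataclass
class LabeledElement:
    """One crowdsourced tappability judgment for a single UI element.
    Field names are illustrative assumptions, not the paper's schema."""
    element_type: str     # e.g. "text", "image", "checkbox"
    text: str             # visible label, e.g. "Follow"
    bounds: tuple         # (x, y, width, height) on screen
    fill_color: tuple     # dominant RGB color of the element
    rated_tappable: bool  # majority vote of the crowd raters

# Example record: a "Follow" text label that raters judged tappable.
example = LabeledElement(
    element_type="text",
    text="Follow",
    bounds=(24, 300, 120, 48),
    fill_color=(128, 128, 128),
    rated_tappable=True,
)
```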

They then set out to ‘teach’ machines to emulate that human perception. The crowdsourced results suggested that users see bright colors and comparatively large elements on an interface as tappable, while quieter colors and smaller items were considered non-interactive. The team used these labels to train a deep neural network to distinguish which elements humans consider tappable.
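
The study’s actual network isn’t reproduced here; purely to illustrate the kind of training step described above, the following is a tiny binary classifier over a handful of styling features (size, position, color), written with Keras. The feature set, architecture, and placeholder data are all assumptions rather than the authors’ setup.

```python
import numpy as np
import tensorflow as tf

# Toy feature vector per element: [width, height, x, y, r, g, b]
# (an assumed simplification; a real model would use far richer inputs).
X = np.random.rand(1000, 7).astype("float32")                  # placeholder features
y = np.random.randint(0, 2, size=(1000,)).astype("float32")    # placeholder for crowd labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(7,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(element looks tappable)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```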

Next, they ran a comparative evaluation to determine how well the trained model fared against the benchmark set by its human counterparts. The results were remarkable: the model’s predictions matched the perceptions of actual human raters with roughly 90% accuracy.
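
The article doesn’t say exactly how that agreement was scored. One straightforward way to compare model predictions against the crowd’s consensus labels, shown here only as a sketch with made-up numbers, is to threshold the model’s probabilities and compute standard metrics:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# human_labels: majority-vote tappability from the crowd raters (0 or 1)
# model_probs:  the network's predicted probability that an element looks tappable
human_labels = [1, 0, 1, 1, 0, 1, 0, 0]                         # placeholder values
model_probs = [0.92, 0.10, 0.85, 0.40, 0.05, 0.77, 0.30, 0.15]  # placeholder values

model_labels = [1 if p >= 0.5 else 0 for p in model_probs]

print("accuracy :", accuracy_score(human_labels, model_labels))
print("precision:", precision_score(human_labels, model_labels))
print("recall   :", recall_score(human_labels, model_labels))
```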

The team used heat maps to visualize tappability as perceived by humans and as predicted by the model. These offer a sound basis for the team’s conclusion that computers can indeed be used to accurately anticipate how a human would interact with a mobile app, and that this often expensive exercise can largely be handed over to AI.
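
The heat maps themselves aren’t reproduced in this article. As a rough, assumed illustration of that kind of visualization, one could overlay a grid of predicted tappability scores on an app screenshot with matplotlib (both the screenshot and the scores below are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholders: a fake screenshot and a grid of predicted tappability scores.
screenshot = np.random.rand(640, 360, 3)   # stand-in for a real app screenshot
tappability = np.random.rand(20, 12)       # per-region scores from the model

plt.imshow(screenshot)
plt.imshow(tappability, cmap="hot", alpha=0.5,
           extent=(0, 360, 640, 0))        # stretch the score grid over the image
plt.colorbar(label="predicted tappability")
plt.axis("off")
plt.savefig("tappability_heatmap.png", dpi=150)
```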
