Abstract. The recent proliferation of mobile devices has made it important to provide automatic support for usability evaluation when people interact with mobile applications. In this paper, we discuss some specific aspects that need to be considered in remote usability evaluation of mobile Web applications, and introduce a novel environment that aims to address such issues.
Keywords: Remote Evaluation, Logging Tools, Mobile Usability.
Introduction

In usability evaluation, automatic tools can provide various types of support to facilitate this activity and help developers and evaluators gather useful information. Several approaches have been put forward for this purpose. Some tools allow users to provide feedback on the applications under consideration through questionnaires, or by reporting critical incidents or other relevant information. Other proposals have been oriented towards automatic analysis of the user interface implementation in order to check its conformance to a set of guidelines. A different approach consists of gathering information on actual user behaviour and helping evaluators analyse it in order to identify possible usability problems.

In remote usability evaluation, evaluators and users are separated in time and/or space. This is important in order to observe users in their daily environments and to decrease the cost of the evaluation by avoiding the need for dedicated laboratories and for users to travel.

The purpose of this paper is to discuss the possibilities offered by remote usability evaluation of mobile applications based on logging user interactions and supporting the analysis of such data. We describe the novel issues raised by this type of approach and provide concrete indications of how they can be addressed, in particular when Web applications are accessed through mobile devices.

In the remainder of the paper we first discuss related work; next, we examine the important aspects that have to be considered when designing support for remote evaluation of mobile applications; we then introduce examples of possible solutions to such issues, as provided by a novel version of a remote evaluation environment. Lastly, we draw some conclusions and provide indications for future work.
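To make the logging-based approach concrete, the following is a minimal sketch of client-side interaction logging for a mobile Web application. All names here (`InteractionLogger`, the collection endpoint) are illustrative assumptions for exposition, not part of the environment described in this paper; a real tool would capture richer context (orientation changes, scrolling, network conditions) and transmit the buffer to a server for later analysis.

```javascript
// Illustrative sketch: buffer timestamped user-interaction events so an
// evaluator can later reconstruct the interaction sequence remotely.
class InteractionLogger {
  constructor(endpoint) {
    this.endpoint = endpoint; // hypothetical collection URL
    this.buffer = [];
  }

  // Record one user-interface event (e.g. a tap on a given element)
  // together with the time at which it occurred.
  log(type, target) {
    this.buffer.push({ type, target, time: Date.now() });
  }

  // Return and clear the buffered events. In a browser, this is where
  // the batch would be sent to this.endpoint, for instance periodically
  // or when the page is unloaded.
  flush() {
    const events = this.buffer;
    this.buffer = [];
    return events;
  }
}
```

Batching events and flushing them periodically, rather than sending each event immediately, is a common design choice on mobile devices, where network connectivity is intermittent and per-request overhead is comparatively high.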