Model-based evaluation is widely used in HCI. However, current predictive models are insufficient for evaluating natural user interfaces based on touchless hand gestures. This paper presents a KLM-based model to predict performance time for tasks carried out with this type of interface. The required model operators were defined by considering the temporal structure of hand gestures (i.e., using gesture units) and by conducting a systematic literature review. The times for these operators were estimated through a multi-part user study. Finally, the empirical evaluation of the model gave acceptable results (root-mean-square error = 10%, R² = 0.936) compared to similar models developed for other interaction styles. Thus, the proposed model should help software designers carry out usability assessments by predicting performance time without user participation.
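The core mechanism of a KLM-style model is that predicted task time is the sum of the times of the primitive operators in the task's operator sequence. The following is a minimal sketch of that idea; the operator names and times are hypothetical illustrations (loosely inspired by gesture phases such as preparation, stroke, and retraction), not the operators or values estimated in the paper.

```python
# KLM-style prediction sketch: total task time = sum of operator times.
# These operator labels and durations are hypothetical, for illustration only.
OPERATOR_TIMES = {
    "M": 1.35,  # mental preparation (classic KLM value)
    "P": 1.10,  # mid-air pointing (hypothetical)
    "S": 0.40,  # gesture stroke (hypothetical)
    "R": 0.30,  # hand retraction (hypothetical)
}

def predict_time(sequence):
    """Predict execution time in seconds for a sequence of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: a task modeled as mental preparation, pointing, stroke, retraction.
print(predict_time(["M", "P", "S", "R"]))  # 3.15
```

A model like the one in the paper would replace these placeholder durations with operator times estimated empirically from user studies.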
(Received: 2014/10/31 - Accepted: 2014/12/15) The proliferation of new devices for detecting human body movements has led to an increase in the use of interfaces based on touchless hand gestures. Applications of this type could also be used in classrooms. Although many related studies have been conducted, most of them do not focus on classrooms. Therefore, this article presents a bibliographic review of related studies, with the aim of organizing them and linking them to the design of interfaces for this type of scenario. The review discusses some related applications, how user gestures are recognized, the design aspects to take into account, and some ways of evaluating this type of interaction. Thus, this work can serve as a reference guide for researchers and software designers who want to develop such applications and use them in classrooms.
Interfaces based on mid-air gestures often use a one-to-one mapping between gestures and commands, but most such mappings remain very basic. In practice, people exhibit intrinsic variations in their gesture articulations, because gestures depend both on the person producing them and on the specific social or cultural context in which they are produced. We argue that allowing applications to map many gestures to one command is a key step toward giving users more flexibility, avoiding penalizing them for natural variation, and producing better interaction experiences. Accordingly, this paper presents our results on mid-air gesture variability. We are mainly concerned with understanding variability in mid-air gesture articulations from a purely user-centric perspective. We describe a comprehensive investigation of how users vary their gesture production under unconstrained articulation conditions. The user study consisted of two tasks. The first provides a model of how users conceive and produce gestures; from this study we also derive an embodied taxonomy of gestures. This taxonomy serves as the basis for the second experiment, in which we perform a fine-grained quantitative analysis of gesture articulation variability. Based on these results, we discuss implications for gesture interface design.
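The many-to-one mapping advocated above can be sketched as a simple lookup in which several gesture labels, as a recognizer might output them, resolve to the same command. All gesture and command names here are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of a many-to-one gesture-to-command mapping: several articulation
# variants of "move left" all trigger the same command. Names are hypothetical.
GESTURE_TO_COMMAND = {
    "swipe_left": "next_slide",
    "flick_left": "next_slide",
    "push_left": "next_slide",
    "swipe_right": "previous_slide",
    "flick_right": "previous_slide",
}

def command_for(gesture):
    """Resolve a recognized gesture label to its command, if any."""
    return GESTURE_TO_COMMAND.get(gesture, "unknown")

print(command_for("flick_left"))  # next_slide
```

The design point is that user-specific articulation variants can be accommodated by adding entries to the mapping, without changing the command set.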