The paper examines a computational design approach for improving user interface designs for people with sensorimotor and cognitive impairments. In ability-based optimization, designs are created by an optimizer and evaluated against a model of an individual performing tasks. Alternative designs can be explored and adapted to an individual's abilities. In this paper, we explore text entry on touchscreen devices as a case study. Individual abilities are expressed parametrically as part of a task-specific cognitive model, and the model estimates how the individual might adapt her interaction to the task. Optimized designs can potentially improve speed and reduce errors for users with tremor and dyslexia. Ability-based optimization does not necessitate extensive data collection and could be applied both automatically and manually by users, designers, or caretakers.
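The abstract does not describe the implementation, but the optimize-against-a-user-model loop it outlines can be sketched in code. The following is a minimal illustration only, not the authors' method: hypothetical names (`Abilities`, `predicted_cost`, `optimize`) and a deliberately crude cost model stand in for the paper's task-specific cognitive model, and random-swap hill climbing stands in for whatever optimizer the paper uses.

```python
# Minimal sketch of an ability-based optimization loop (hypothetical names;
# not the paper's implementation). A candidate keyboard layout is repeatedly
# perturbed and kept only if a simple user model predicts a lower typing cost
# for an individual with the given ability parameters.
import math
import random
from dataclasses import dataclass


@dataclass
class Abilities:
    """Parametric description of one user (illustrative parameters only)."""
    pointing_noise: float = 0.0      # e.g., elevated for tremor
    visual_search_cost: float = 0.0  # e.g., elevated for dyslexia


def key_positions(layout):
    """Map each character to an (x, y) position on a 10-column grid keyboard."""
    return {ch: (i % 10, i // 10) for i, ch in enumerate(layout)}


def predicted_cost(layout, corpus, abilities):
    """Crude stand-in for a task-specific user model: Fitts-like movement
    times plus ability-dependent penalties, summed over a text corpus."""
    pos = key_positions(layout)
    cost, prev = 0.0, None
    for ch in corpus:
        if ch not in pos:
            continue
        if prev is not None:
            dist = math.dist(pos[prev], pos[ch])
            cost += math.log2(dist + 1) * (1 + abilities.pointing_noise)
        cost += abilities.visual_search_cost
        prev = ch
    return cost


def optimize(corpus, abilities, iters=20000, seed=0):
    """Random-swap hill climbing over layouts, scored by the user model."""
    rng = random.Random(seed)
    layout = list("qwertyuiopasdfghjklzxcvbnm")
    best = predicted_cost(layout, corpus, abilities)
    for _ in range(iters):
        i, j = rng.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
        cost = predicted_cost(layout, corpus, abilities)
        if cost < best:
            best = cost
        else:
            layout[i], layout[j] = layout[j], layout[i]  # revert the swap
    return "".join(layout), best


if __name__ == "__main__":
    user = Abilities(pointing_noise=0.5)  # e.g., a user with tremor
    print(optimize("the quick brown fox jumps over the lazy dog", user))
```

Because the individual's abilities enter only through the model's parameters, the same loop yields different layouts for different users, which is the adaptation the abstract describes.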
This study demonstrates how playing a well-designed multitasking motion video game in a virtual reality (VR) environment can positively impact the cognitive and physical health of older players. We developed a video game that combines cognitive and physical training in a VR environment. The impact of playing the game was measured through a four-week longitudinal experiment. Twenty healthy older adults were randomly assigned to either an intervention group (i.e., game training) or a control group (i.e., no contact). Participants in the intervention group played three 45-minute sessions per week; cognitive tests of attention, working memory, and reasoning, as well as a test of physical balance, were administered before and after the intervention. Results showed that, compared to the control group, the game group exhibited significant improvements in working memory and a potential for enhanced reasoning and balance ability. Furthermore, while the older adults enjoyed playing the video game, ability enhancements were associated with their intrinsic motivation to play. Overall, cognitive training with multitasking VR motion video games has positive impacts on the cognitive and physical health of older adults.
Predicting how users learn new or changed interfaces is a longstanding objective in HCI research. This paper contributes to the understanding of visual search and learning in text entry. With the goal of explaining the variance in novices' typing performance that is attributable to visual search, a model was designed to predict how users learn to locate keys on a keyboard: initially relying on visual short-term memory but then transitioning to recall-based search. The model predicts search times and visual search patterns for both completely and partially new layouts, and it complements models of motor performance and learning in text entry by predicting how visual search patterns change over time. Practitioners can use it to estimate how long it takes to reach a desired level of performance with a given layout.
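The abstract only names the key idea, a shift from visual search toward recall-based search with practice, without giving the model's equations. As an illustration only (not the paper's model), the sketch below expresses one simple way such a transition could be written down: the predicted time to locate a key blends a slow visual-search component with a fast recall component, and the recall weight grows with the number of prior encounters of that key. All constants and function names are assumptions introduced for the example.

```python
# Illustrative sketch (not the paper's model) of key-location time shifting
# from visual search toward memory-based recall as a key is encountered.
import math

VISUAL_SEARCH_TIME = 1.2   # s, assumed time to visually find an unfamiliar key
RECALL_TIME = 0.25         # s, assumed time once the location is memorized
LEARNING_RATE = 0.35       # assumed rate at which recall takes over


def p_recall(encounters: int) -> float:
    """Probability that the key's location is recalled from memory after
    `encounters` prior searches for that key (simple exponential learning)."""
    return 1.0 - math.exp(-LEARNING_RATE * encounters)


def predicted_search_time(encounters: int) -> float:
    """Expected time to locate the key: a mixture of recall and visual search."""
    p = p_recall(encounters)
    return p * RECALL_TIME + (1.0 - p) * VISUAL_SEARCH_TIME


if __name__ == "__main__":
    for n in (0, 1, 3, 5, 10, 20):
        print(f"after {n:2d} encounters: {predicted_search_time(n):.2f} s")
```

Under this kind of formulation, keys that keep their positions in a partially changed layout start with a high recall probability while relocated keys start near zero, which is how predictions for completely versus partially new layouts could differ.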