Rapid recognition of voluntary motions is crucial in human-computer interaction, but few studies compare the predictive abilities of different sensing technologies. This paper therefore compares the performance of different technologies when predicting the targets of human reaching motions: electroencephalography (EEG), electrooculography, camera-based eye tracking, electromyography (EMG), hand position, and the user's preferences. Supervised machine learning is used to make predictions at different points in time (before and during limb motion) with each individual sensing modality. Different modalities are then combined using an algorithm that takes into account the different times at which each modality provides useful information. Results show that EEG can make predictions before limb motion onset, but requires subject-specific training and exhibits decreased performance as the number of possible targets increases. EMG and hand position give high accuracy, but only once the motion has begun. Eye tracking is robust and exhibits high accuracy at the very onset of limb motion. Several advantages of combining different modalities are also shown, including advantages of combining measurements with contextual data. Finally, recommendations are given for choosing sensing modalities with regard to different criteria and applications. This information could aid human-computer interaction designers in selecting and evaluating appropriate equipment for their applications.
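The abstract's combination scheme, weighting each modality by when it provides useful information, can be sketched as a time-aware late fusion of per-modality class probabilities. This is a minimal illustration, not the paper's actual algorithm: the reliability profiles (`eeg`, `emg`) and their values are hypothetical placeholders, and a real system would estimate them from validation accuracy at each time point.

```python
import numpy as np

# Hypothetical time-dependent reliability profiles (assumed values, not
# from the paper): t < 0 is before limb motion onset, t >= 0 is after.
# EEG is modeled as most informative before onset, EMG after onset.
reliability = {
    "eeg": lambda t: 0.7 if t < 0 else 0.3,
    "emg": lambda t: 0.2 if t < 0 else 0.9,
}

def fuse_predictions(proba_by_modality, reliability_by_modality, t):
    """Late fusion: weight each modality's class-probability vector by
    its estimated reliability at time t, then renormalize so the fused
    vector is again a probability distribution over targets."""
    fused = np.zeros(len(next(iter(proba_by_modality.values()))))
    for name, proba in proba_by_modality.items():
        fused += reliability_by_modality[name](t) * np.asarray(proba, float)
    return fused / fused.sum()

# Two candidate targets; each modality's classifier outputs probabilities.
proba = {"eeg": [0.6, 0.4], "emg": [0.2, 0.8]}

before = fuse_predictions(proba, reliability, t=-0.5)  # pre-onset: EEG dominates
after = fuse_predictions(proba, reliability, t=+0.5)   # post-onset: EMG dominates
```

With these placeholder numbers, the fused prediction favors target 0 before motion onset (driven by EEG) and switches to target 1 after onset (driven by EMG), mirroring the abstract's finding that different modalities are informative at different times.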
The influence of numeracy on information processing of two risk communication formats (percentage and pictograph) was examined using an eye tracker. A sample from the general population (N = 159) was used. In intuitive and deliberative decision conditions, the participants were presented with a hypothetical scenario describing a test result. The participants indicated their feelings and their perceived risk, evoked by a 17% risk level. In the intuitive decision condition, a significant correlation (r = .30) between numeracy and the order of information processing was found: the higher the numeracy, the earlier the processing of the percentage, and the lower the numeracy, the earlier the processing of the pictograph. This intuitive, initial focus on a format prevailed over the first half of the intuitive decision-making process. In the deliberative decision condition, the correlation between numeracy and order of information processing was not significant. In both decision conditions, high and low numerates processed pictograph and percentage formats with similar depths and derived similar meanings from them in terms of feelings and perceived risk. In both conditions, numeracy had no effect on the degree of attention given to the percentage or the pictograph (number of fixations on formats and transitions between them). The results suggest that pictographs attract low numerates' attention, and percentages attract high numerates' attention, in the first, intuitive phase of numeric information processing. Pictographs thus encourage low numerates' further elaboration on numeric risk information, which is an important precondition of risk understanding and decision making.
Abstract: Human motion recognition is essential for many biomedical applications, but few studies compare the abilities of multiple sensing modalities. This paper thus evaluates the effectiveness of different modalities when predicting the targets of human reaching movements. Electroencephalography, electrooculography, camera-based eye tracking, electromyography, hand tracking, and the user's preferences are used to make predictions at different points in time. Prediction accuracies are calculated based on data from 10 subjects in within-subject cross-validation. Results show that electroencephalography can make predictions before limb motion onset, but its accuracy decreases as the number of potential targets increases. Electromyography and hand tracking give high accuracy, but only after motion onset. Eye tracking is robust and gives high accuracy at limb motion onset. Combining multiple modalities can increase accuracy, though not always. While many studies have evaluated individual sensing modalities, this study provides quantitative data on many modalities at different points in time in a single setting. The information could help biomedical engineers choose the most appropriate equipment for a particular application.