Existing research shows that people can improve their decision-making skills by learning what experts paid attention to when faced with the same problem. However, in domains like financial education, effective instruction requires frequent, personalized feedback given at the point of decision, which makes it time-consuming for experts to provide and thus prohibitively costly. We address this by demonstrating an automated feedback mechanism that allows amateur decision-makers to learn what information to attend to from one another, rather than from an expert. In the first experiment, eye movements of N = 100 subjects were recorded while they repeatedly performed a standard behavioral finance investment task. Consistent with previous studies, we found that a significant proportion of subjects were affected by decision bias. In the second experiment, a different group of N = 100 subjects faced the same task, but after each choice they received individual, machine-learning-generated feedback on whether their pre-decision eye movements resembled those made by Experiment 1 subjects prior to good decisions. As a result, Experiment 2 subjects learned to analyze information similarly to their successful peers, which in turn reduced their decision bias. Furthermore, subjects with low Cognitive Reflection Test scores gained more from the proposed form of process feedback than from standard behavioral feedback based on decision outcomes.
Recent studies have reported that the attraction effect, whereby inferior decoys cause choice reversals, fails to replicate when the choice options are presented in pictorial rather than abstract numerical form. We argue that the pictorial setting makes the similarity between decoy and target salient, while the abstract one emphasizes the inferiority relationship between them, which is crucial for the effect to occur. We therefore used a novel experimental design in which similarity and inferiority are equally easy to judge, their relative strength is simple to manipulate, and choices are incentivized rather than hypothetical. Using eye-tracking, we found that both the transfer of attention towards an undesirable target and the likelihood of choice reversal increase when the decoy is more strongly inferior to, but less similar to, the target. This suggests that a key mechanism in the attraction effect is that, by virtue of its inferiority, the decoy projects a spotlight of attention onto the target, making it more attractive.
The aim of the study was not only to demonstrate whether eye-movement-based task decoding is possible but also to investigate whether eye-movement patterns can be used to identify the cognitive processes behind the tasks. We compared eye-movement patterns elicited under different task conditions, with tasks differing systematically in the types of cognitive processes involved in solving them. We used four tasks, differing along two dimensions: spatial (global vs. local) processing (Navon, Cognit Psychol, 9(3):353-383, 1977) and semantic (deep vs. shallow) processing (Craik and Lockhart, J Verbal Learn Verbal Behav, 11(6):671-684, 1972). We used eye-movement patterns obtained from two time periods: the fixation cross preceding the target stimulus, and the target stimulus itself. We found significant effects of both spatial and semantic processing, although in the case of the latter, the effect might be an artefact of insufficient task control. We found above-chance task classification accuracy for both time periods: 51.4% for the period of stimulus presentation and 34.8% for the period of fixation cross presentation. We therefore show that the task can, to some extent, be decoded from preparatory eye movements made before the stimulus is displayed. This suggests that anticipatory eye movements reflect the visual scanning strategy employed for the task at hand. Finally, this study also demonstrates that decoding is possible even from very scant eye-movement data, similar to Coco and Keller (J Vis, 14(3):11, 2014). This means that task decoding is not limited to tasks that naturally take longer to perform and yield multi-second eye-movement recordings.
We compared scanpath similarity in response to repeated presentations of social and nonsocial images representing natural scenes in a sample of 30 participants with autism spectrum disorder and 32 matched typically developing individuals. We used scanpath similarity (calculated using ScanMatch) as a novel measure of attentional bias or preference, which constrains eye-movement patterns by directing attention to specific visual or semantic features of the image. We found that, compared with the control group, scanpath similarity of participants with autism was significantly higher in response to nonsocial images, and significantly lower in response to social images. Moreover, scanpaths of participants with autism were more similar to scanpaths of other participants with autism in response to nonsocial images, and less similar in response to social images. Finally, we also found that in response to nonsocial images, scanpath similarity of participants with autism did not decline with stimulus repetition to the same extent as in the control group, which suggests more perseverative attention in the autism spectrum disorder group. These results show a preferential fixation on certain elements of social stimuli in typically developing individuals compared with individuals with autism, and on certain elements of nonsocial stimuli in the autism spectrum disorder group, compared with the typically developing group.