Characteristics of perception and cognition in our daily lives can be elucidated through studying misdirection, a technique used by magicians to manipulate attention. Recent findings on the effects of social misdirection induced by joint attention have been disputed, and differences between deceived (failed to detect the magic trick) and undeceived (detected the magic trick) groups remain unclear. To examine how social misdirection affects deceived and undeceived groups, we showed participants movie clips of the "cups and balls," a classic magic trick, and measured participants' eye positions (i.e., where participants looked while viewing the clips) using an eye tracker. We found that the undeceived group looked less at the magician's face than the deceived group did. These results indicate that deceived individuals had difficulty withholding attention from the magician's face. We conclude that social misdirection captures attention and thereby influences the emergence of deception.
Virtual reality (VR) is a new methodology for behavioral studies. In such studies, millisecond accuracy and precision of stimulus presentation are critical for data replicability. Recently, Python, a programming language widely used in scientific research, has contributed to reliable accuracy and precision in experimental control. However, little is known about whether modern VR environments achieve millisecond accuracy and precision in stimulus presentation, since most standard methods used in laboratory studies are not optimized for VR environments. The purpose of this study was to systematically evaluate the accuracy and precision of visual and auditory stimuli generated in modern VR head-mounted displays (HMDs) from HTC and Oculus using Python 2 and 3. We used the latest Python tools for VR together with the Black Box Toolkit to measure the actual time lag and jitter. The results showed an 18-ms time lag for visual stimuli in both HMDs. For auditory stimuli, the time lag varied between 40 and 60 ms, depending on the HMD. The jitter of these time lags was 1 ms for visual stimuli and 4 ms for auditory stimuli, which is sufficiently low for general experiments. These time lags remained stable even when auditory and visual stimuli were presented simultaneously. Interestingly, all results were perfectly consistent across the Python 2 and 3 environments. Thus, the present study will help establish more reliable stimulus control for psychological and neuroscientific research conducted in Python environments.
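The lag and jitter statistics described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual analysis: the onset timestamps below are invented for demonstration (in practice they would come from hardware such as a photodiode attached to the Black Box Toolkit), and "lag" and "jitter" are taken to mean the mean and standard deviation of the command-to-display delay.

```python
# Hypothetical sketch: compare commanded stimulus onsets with externally
# measured onsets to estimate time lag (mean delay) and jitter (SD of delay).
# All timestamp values are illustrative, not measured data from the study.
import statistics

commanded = [0.0, 100.0, 200.0, 300.0, 400.0]   # scheduled onsets (ms)
measured = [18.2, 117.9, 218.5, 317.6, 418.3]   # e.g., photodiode onsets (ms)

# Per-trial delay between command and physical presentation
lags = [m - c for c, m in zip(commanded, measured)]

mean_lag = statistics.mean(lags)   # average time lag of the display
jitter = statistics.stdev(lags)    # trial-to-trial variability of the lag

print(f"mean lag: {mean_lag:.1f} ms, jitter: {jitter:.2f} ms")
# → mean lag: 18.1 ms, jitter: 0.35 ms
```

With real hardware timestamps, a low jitter (as reported: ~1 ms for visual, ~4 ms for auditory stimuli) matters more than the absolute lag, since a constant lag can be corrected for in analysis.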
The word order that is easiest to understand in a language generally coincides with the word order most frequently used in that language. In Kaqchikel, however, there is a discrepancy between the two: the syntactically basic VOS incurs the least cognitive load, whereas SVO is the most frequently employed. This suggests that processing load is primarily determined by grammatical processes, whereas word order selection is affected by additional conceptual factors. Thus, the agent could be conceptually more salient than other elements even for Kaqchikel speakers. This hypothesis leads us to the following expectations: (1) utterance latency should be shorter for SVO sentences than for VOS sentences; (2) Kaqchikel speakers should pay more attention to agents than to other elements during sentence production; and (3) despite these, the cognitive load during sentence production should be higher for SVO than for VOS. A Kaqchikel sentence production experiment confirmed all three expectations.