Certain models of spoken-language processing, like those for many other perceptual and cognitive processes, posit continuous uptake of sensory input and dynamic competition between simultaneously active representations. Here, we provide compelling evidence for this continuity assumption by using a continuous response, hand movements, to track the temporal dynamics of lexical activations during real-time spoken-word recognition in a visual context. By recording the streaming x, y coordinates of continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phonetic input and competition between partially active lexical representations are revealed in the shape of the movement trajectories. This hand-movement paradigm allows one to project the internal processing of spoken-word recognition onto a two-dimensional layout of continuous motor output, providing a concrete visualization of the attractor dynamics involved in language processing.

Keywords: dynamical systems, psycholinguistics, word recognition
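A standard way to quantify competition in such trajectories is the maximum perpendicular deviation of the hand path from the straight line between start and end points, with larger deviations indicating stronger attraction toward a competitor. The sketch below is a minimal, hypothetical illustration of that measure; the function name and sample coordinates are not from the study.

```python
import math

def max_deviation(xs, ys):
    """Maximum signed perpendicular deviation of a 2-D trajectory from
    the straight line connecting its start and end points -- a common
    curvature index in hand- and mouse-tracking studies."""
    x0, y0 = xs[0], ys[0]
    x1, y1 = xs[-1], ys[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    if length == 0:
        return 0.0
    # Signed distance of each sample from the start-end line.
    devs = [((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
            for x, y in zip(xs, ys)]
    return max(devs, key=abs)

# A trajectory that bows toward a competitor before settling on the target.
xs = [0, 1, 2, 3, 4, 5]
ys = [0, 1.2, 2.0, 2.2, 1.5, 0]
print(max_deviation(xs, ys))
```

The sign of the returned value indicates which side of the direct path the trajectory bowed toward, which is useful when competitor and target objects sit on opposite sides of the display.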
Fitts's law is one of the best-established principles in psychology. It captures the relation between speed and accuracy in performed and imagined movements. The aim of this study was to determine whether this law also holds during the perception of other people's actions. Subjects were shown apparent motion displays of a person moving his arm between two identical targets. Target width, the separation between targets, and movement speed were varied. Subjects reported whether the person could move at the perceived speed without missing the targets. The movement times reported as being just possible were exactly those predicted by Fitts's law (r² = .96). A subsequent experiment demonstrated the same lawful relation for the perception of a robot arm (r² = .93). To our knowledge, this makes Fitts's law the first motor principle that holds in imagery and the perception of biological and non-biological agents.
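Fitts's law states that movement time grows linearly with the index of difficulty, ID = log₂(2D/W), where D is the distance between targets and W is the target width. The sketch below illustrates that relation; the intercept and slope values are illustrative assumptions, not parameters fitted to this study's data.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under Fitts's law, MT = a + b*ID,
    with ID = log2(2D / W). Parameters a and b are illustrative."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Halving the target width adds one bit of difficulty,
# so predicted movement time increases by exactly the slope b.
wide = fitts_mt(distance=16, width=4)    # ID = log2(8)  = 3 bits
narrow = fitts_mt(distance=16, width=2)  # ID = log2(16) = 4 bits
print(round(narrow - wide, 3))
```

This one-bit-per-halving property is what makes the law easy to test psychophysically: varying D and W while holding ID constant should leave the just-possible movement time unchanged.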
Asymmetries in posterior ERP components, such as the N1, are generally taken to reflect the visual processing of spatial information in absolute (fixation-based) coordinates. Yet, it is also well established that the position of an object can be coded relative to the position of other objects. To examine the ERP correlates of relative spatial coding, two experiments were conducted in which spatially neutral target stimuli were preceded, accompanied, or followed by laterally presented, task-irrelevant accessory stimuli. Targets presented simultaneously with a lateral accessory evoked, despite physical asymmetry, a bilateral, symmetric N1. Targets that followed the accessory evoked, despite physical symmetry, an asymmetric N1, with a maximum contralateral to the accessory. Thus, lateralizations in the N1 range already reflect relative spatial coding rather than just the processing of the absolute location of incoming information.
Because reaction time (RT) tasks are generally repetitive and temporally regular, participants may use timing strategies that affect response speed and accuracy. This hypothesis was tested in 3 serial choice RT experiments in which participants were presented with stimuli that sometimes arrived earlier or later than normal. RTs increased and errors decreased when stimuli came earlier than normal, and RTs decreased and errors increased when stimuli came later than normal. The results were consistent with an elaboration of R. Ratcliff's diffusion model (R. Ratcliff, 1978; R. Ratcliff & J. N. Rouder, 1998; R. Ratcliff, T. Van Zandt, & G. McKoon, 1999), supplemented by a hypothesis developed by D. Laming (1979a, 1979b), according to which participants initiate stimulus sampling before the onset of the stimulus at a time governed by an internal timekeeper. The success of this model suggests that timing is used in the service of decision making.
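The diffusion model referred to above treats a decision as noisy evidence accumulating toward one of two response boundaries. The sketch below is a minimal, generic drift-diffusion simulation, not the elaborated model of the study (it omits the internal-timekeeper mechanism and the early-sampling hypothesis); all parameter values are illustrative assumptions.

```python
import random

def diffusion_trial(drift, bound=1.0, noise=1.0, dt=0.001, max_t=5.0,
                    rng=random):
    """One trial of a simple drift-diffusion process: evidence starts at 0
    and accumulates with mean rate `drift` until it crosses +bound or
    -bound. Returns (boundary_hit, decision_time)."""
    x, t = 0.0, 0.0
    step_sd = noise * dt ** 0.5  # per-step noise for a Wiener process
    while abs(x) < bound and t < max_t:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ("upper" if x >= bound else "lower"), t

# With positive drift, most trials terminate at the upper (correct) bound.
rng = random.Random(1)
trials = [diffusion_trial(drift=1.0, rng=rng) for _ in range(500)]
accuracy = sum(resp == "upper" for resp, _ in trials) / len(trials)
print(accuracy)
```

In this framework, starting evidence sampling before stimulus onset (as the timekeeper hypothesis proposes for early-arriving stimuli) would mean accumulating pure noise for a period, which is one way to capture the speed-accuracy pattern the experiments observed.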
The online influence of movement production on motion perception was investigated. Participants were asked to move one of their hands in a certain direction while monitoring an independent stimulus motion. The stimulus motion unpredictably deviated in a direction that was either compatible or incompatible with the concurrent movement. Participants' task was to make a speeded response as soon as they detected the deviation. A reversed compatibility effect was obtained: Reaction times were slower under compatible conditions - that is, when motion deviations and movements went in the same direction. This reversal of a commonly observed facilitatory effect can be attributed to the concurrent nature of the perception-action task and to the fact that what was produced was functionally unrelated to what was perceived. Moreover, by employing an online measure, it was possible to minimize the contribution of short-term memory processes, which has potentially confounded the interpretation of related effects.
Coperformers in musical ensembles continuously adapt the timing of their actions to maintain interpersonal coordination. The current study used a dyadic finger-tapping task to investigate whether such mutual adaptive timing is predominated by assimilation (i.e., copying relative timing, akin to mimicry) or compensation (local error correction). Our task was intended to approximate the demands that arise when coperformers coordinate complementary parts with a rhythm section in an ensemble. In two experiments, paired musicians (the coperformers) were required to tap in alternation, in synchrony with an auditory pacing signal (the rhythm section). Serial dependencies between successive asynchronies produced by alternating individuals' taps relative to the pacing tones revealed greater evidence for temporal assimilation than compensation. By manipulating the availability of visual and auditory feedback across experiments, it was shown that this assimilation was strongest when coactors' taps triggered sounds, while the effects of visual information were negligible. These results suggest that interpersonal temporal assimilation was mediated by perception-action coupling in the auditory modality. Mutual temporal assimilation may facilitate coordination in musical ensembles by automatically increasing stylistic compatibility between coperformers, thereby assisting them to sound cohesive.
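Serial dependence between successive asynchronies can be indexed by their lag-1 correlation: a positive value indicates that an asynchrony tends to be copied by the next tap (assimilation), a negative value indicates local correction (compensation). The sketch below illustrates the measure on simulated data; the assimilation weight and noise level are hypothetical, not values from the experiments.

```python
import random

def lag1_correlation(series):
    """Pearson correlation between successive values of a series.
    Positive: assimilation-like copying; negative: compensation."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Simulated alternating asynchronies (ms) in which each tapper partly
# copies the partner's previous asynchrony (assimilation weight 0.6).
rng = random.Random(0)
asyncs = [0.0]
for _ in range(400):
    asyncs.append(0.6 * asyncs[-1] + rng.gauss(0, 10))
print(lag1_correlation(asyncs))
```

On data generated this way the lag-1 correlation recovers a value near the assimilation weight, which is the signature the study looked for in the paired tappers' asynchrony series.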
Recent studies have reported repulsion effects between the perception of visual motion and the concurrent production of hand movements. Two models, based on the notions of common coding and internal forward modeling, have been proposed to account for these phenomena. They predict that the size of the effects in perception and action should be monotonically related and vary with the amount of similarity between what is produced and perceived. These predictions were tested in four experiments in which participants were asked to make hand movements in certain directions while simultaneously encoding the direction of an independent stimulus motion. As expected, perceived directions were repelled by produced directions, and produced directions were repelled by perceived directions. However, contrary to the models, the size of the effects in perception and action did not covary, nor did they depend (as predicted) on the amount of perception–action similarity. We propose that such interactions are mediated by the activation of categorical representations.