Certain models of spoken-language processing, like those for many other perceptual and cognitive processes, posit continuous uptake of sensory input and dynamic competition between simultaneously active representations. Here, we provide compelling evidence for this continuity assumption by using a continuous response, hand movements, to track the temporal dynamics of lexical activations during real-time spoken-word recognition in a visual context. By recording the streaming x, y coordinates of continuous goal-directed hand movement in a spoken-language task, online accrual of acoustic-phonetic input and competition between partially active lexical representations are revealed in the shape of the movement trajectories. This hand-movement paradigm allows one to project the internal processing of spoken-word recognition onto a two-dimensional layout of continuous motor output, providing a concrete visualization of the attractor dynamics involved in language processing.

Keywords: dynamical systems | psycholinguistics | word recognition
Two eye-tracking experiments examined spoken language processing in Russian-English bilinguals. The proportion of looks to objects whose names were phonologically similar to the name of a target object in either the same language (within-language competition), the other language (between-language competition), or both languages at the same time (simultaneous competition) was compared to the proportion of looks in a control condition in which no objects overlapped phonologically with the target. Results support previous findings of parallel activation of lexical items within and between languages, but suggest that the magnitude of the between-language competition effect may vary across first and second languages and may be mediated by a number of factors such as stimuli, language background, and language mode.
Bilingualism provides a unique opportunity for exploring hypotheses about how the human brain encodes language. For example, the "input switch" theory states that bilinguals can deactivate one language module while using the other. A new measure of spoken language comprehension, headband-mounted eyetracking, allows a firm test of this theory. When given spoken instructions to pick up an object in a monolingual session, late bilinguals looked briefly at a distractor object, whose name in the irrelevant language was initially phonetically similar to the spoken word, more often than they looked at a control distractor object. This result indicates some overlap between the two languages in bilinguals, and provides support for parallel, interactive accounts of spoken word recognition in general.
It is hypothesized that eye movements are used to coordinate elements of a mental model with elements of the visual field. In two experiments, eye movements were recorded while observers imagined or recalled objects that were not present in the visual display. In both cases, observers spontaneously looked at particular blank regions of space in a systematic fashion, to manipulate and organize spatial relationships between mental and/or retinal images. These results contribute to evidence that interpreting a linguistic description of a visual scene requires a spatial (mental model) representation, and they support claims regarding the allocation of position markers in visual space for the manipulation of visual attention. More broadly, our results point to a concrete embodiment of cognition, in that a construction of a mental image is almost "acted out" by the eye movements, and a mental search of internal memory is accompanied by an oculomotor search of external space.
It has been argued that the human cognitive system is capable of using spatial indexes or oculomotor coordinates to relieve working memory load (Ballard, D.). Here we examine the use of such spatial information in memory for semantic information. Previous research has often focused on the role of task demands and the level of automaticity in the encoding of spatial location in memory tasks. We present five experiments where location is irrelevant to the task, and participants' encoding of spatial information is measured implicitly by their looking behavior during recall. In a paradigm developed from Spivey and Geng (2000, submitted for publication), participants were presented with pieces of auditory, semantic information as part of an event occurring in one of four regions of a computer screen. In front of a blank grid, they were asked a question relating to one of those facts. Under certain conditions it was found that during the question period participants made significantly more saccades to the empty region of space where the semantic information had been previously presented. Our findings are discussed in relation to previous research on memory and spatial location, the dorsal and ventral streams of the visual system, and the notion of a cognitive-perceptual system using spatial indexes to exploit the stability of the external world.
The time course of categorization was investigated in four experiments, which revealed graded competitive effects in a categorization task. Participants clicked one of two categories (e.g., mammal or fish) in response to atypical or typical exemplars (e.g., whale or cat) in the form of words (Experiments 1 and 2) or pictures (Experiments 3 and 4). Streaming x, y coordinates of mouse movement trajectories were recorded. Normalized mean trajectories revealed a graded competitive process: Atypical exemplars produced trajectories with greater curvature toward the competing category than did typical exemplars. The experiments contribute to recent examination of the time course of categorization and carry implications for theories of representation in cognitive science.
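The trajectory analysis described in the abstracts above, time-normalizing streamed x, y mouse samples and quantifying curvature toward the competing response, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' actual analysis code; the function names and the conventional 101-step resampling resolution are assumptions.

```python
import numpy as np

def time_normalize(xy, n_steps=101):
    """Resample a raw (t, 2) trajectory to n_steps equally spaced time
    slices by linear interpolation, so trials of different durations
    can be averaged point-by-point (101 steps is a common convention
    in mouse-tracking work, assumed here for illustration)."""
    xy = np.asarray(xy, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(xy))
    t_new = np.linspace(0.0, 1.0, n_steps)
    return np.column_stack(
        [np.interp(t_new, t_old, xy[:, i]) for i in range(2)]
    )

def max_deviation(xy):
    """Maximum perpendicular distance of the trajectory from the
    straight line joining its start and end points -- one common
    index of curvature toward the competing category."""
    xy = np.asarray(xy, dtype=float)
    start, end = xy[0], xy[-1]
    u = (end - start) / np.linalg.norm(end - start)  # unit direction
    dx = xy[:, 0] - start[0]
    dy = xy[:, 1] - start[1]
    dev = dx * u[1] - dy * u[0]  # signed perpendicular distance
    return float(np.max(np.abs(dev)))
```

On this measure, a trajectory that bows toward the competing response button yields a larger maximum deviation than a direct movement, which is the graded effect (atypical > typical exemplars) the experiments report.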
Performance of bilingual Russian-English speakers and monolingual English speakers during auditory processing of competing lexical items was examined using eye tracking. Results revealed that both bilinguals and monolinguals experienced competition from English lexical items overlapping phonetically with an English target item (e.g., spear and speaker). However, only bilingual speakers experienced competition from Russian competitor items overlapping crosslinguistically with an English target (e.g., spear and spichki, Russian for matches). English monolinguals treated the Russian competitors as they did any other filler items. This difference in performance between bilinguals and monolinguals tested with exactly the same sets of stimuli suggests that eye movements to a crosslinguistic competitor are due to activation of the other language and to between-language competition rather than being an artifact of stimulus selection or experimental design.