A fundamental aspect of human cognition is the ability to parse our constantly unfolding experience into meaningful representations of dynamic events and to communicate about these events with others. How do we communicate about events we have experienced? Influential theories of language production assume that the formulation and articulation of a linguistic message is preceded by preverbal apprehension that captures core aspects of the event. Yet the nature of these preverbal event representations and the way they are mapped onto language are currently not well understood. Here, we review recent evidence on the link between event conceptualization and language, focusing on two core aspects of event representation, event roles and event boundaries. Empirical evidence in both domains shows that the cognitive representation of events aligns with the way these aspects of events are encoded in language, providing support for the presence of deep homologies between linguistic and cognitive event structure.
It has long been recognized that language interacts with visual and spatial processes. However, the nature and extent of these interactions are widely debated. The goal of this article is to review empirical findings across several domains to understand whether language affects the way speakers conceptualize the world even when they are not speaking or understanding speech. A second goal of the present review is to shed light on the mechanisms through which effects of language are transmitted. Across domains, there is growing support for the idea that although language does not lead to long‐lasting changes in mental representations, it exerts powerful influences during momentary mental computations by either modulating attention or augmenting representational power.
When monitoring the origins of their memories, people tend to mistakenly attribute memories generated from internal processes (e.g., imagination, visualization) to perception. Here, we ask whether speaking a language that obligatorily encodes the source of information might help prevent such errors. We compare speakers of English to speakers of Turkish, a language that obligatorily encodes information source (direct/perceptual vs. indirect/hearsay or inference) for past events. In our experiments, participants reported having seen events that they had only inferred from post-event visual evidence. In general, error rates were higher when visual evidence that gave rise to inferences was relatively close to direct visual evidence. Furthermore, errors persisted even when participants were asked to report the specific sources of their memories. Crucially, these error patterns were equivalent across language groups, suggesting that speaking a language that obligatorily encodes source of information does not increase sensitivity to the distinction between perception and inference in event memory.
Understanding and acquiring language involve mapping language onto conceptual representations. Nevertheless, several issues remain unresolved with respect to (a) how such mappings are performed, and (b) whether conceptual representations are susceptible to cross-linguistic influences. In this article, we discuss these issues focusing on the domain of evidentiality and sources of knowledge. Empirical evidence in this domain yields growing support for the proposal that linguistic categories of evidentiality are tightly linked to, build on, and reflect conceptual representations of sources of knowledge that are shared across speakers of different languages.
Expressing left-right relations is challenging for speaking children. Yet this challenge was absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children's co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in encoding left-right relations in speech, sign, and speech-gesture combinations. The use of co-speech gestures increased the informativeness of speakers' spatial expressions compared to speech alone, and this pattern was more prominent for children than for adults. However, signing adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge this spatial domain poses in development.
Although it is widely assumed that the linguistic description of events is based on a structured representation of event components at the perceptual/conceptual level, little empirical work has tested this assumption directly. Here, we test the connection between language and perception/cognition cross-linguistically, focusing on the relative salience of causative event components in language and cognition. We draw on evidence from preschoolers speaking English or Turkish. In a picture description task, Turkish-speaking 3- to 5-year-olds mentioned Agents less than their English-speaking peers (Turkish allows subject drop); furthermore, both language groups mentioned Patients more frequently than Goals, and Instruments less frequently than either Patients or Goals. In a change blindness task, both language groups were equally accurate at detecting changes to Agents (despite surface differences in Agent mentions). The remaining components also behaved similarly: both language groups were less accurate in detecting changes to Instruments than to either Patients or Goals (even though Turkish-speaking preschoolers were less accurate overall than their English-speaking peers). To our knowledge, this is the first study offering evidence for a strong, even though not strict, homology between linguistic and conceptual event roles in young learners cross-linguistically.