According to theories of embodied cognition, visual stimuli can either facilitate or impede the retrieval of language meaning as multimodal perceptual simulations. Here, we introduce a novel experimental paradigm to test the hypothesis that moving stimuli (i.e., motion-defined objects) facilitate coordination comprehension. Participants read coordination descriptions and saw two colored lines that matched the descriptions. Two figures then selected the lines either by moving jointly along them or by standing each on a different line. Moving selections yielded high validation scores in conjunction trials and low validation scores in disjunction trials, whereas stationary selections yielded intermediate scores. The results demonstrate that jointly moving stimuli, which are effective cues to visual grouping, help retrieve and validate conjunction simulations composed of dependent stimuli, and help retrieve and invalidate disjunction simulations composed of independent stimuli. These findings challenge truth-conditional accounts, according to which stimulus properties cannot affect language comprehension and thereby reasoning.
Reasoning, solving mathematical equations, and planning written and spoken sentences must all factor in the perceptual properties of stimuli. Indeed, thinking processes are inspired by, and subsequently fitted to, concrete objects and situations. It is therefore reasonable to expect that the mental representations evoked when people solve these seemingly abstract tasks should interact with the properties of the manipulated stimuli. Here, we investigated the mental representations evoked by conjunction and disjunction expressions in language-picture matching tasks. We hypothesised that, if these representations are derived using key Gestalt principles, reasoners should use perceptual compatibility to gauge the goodness of fit between conjunction/disjunction descriptions (e.g., the purple and/or the green) and corresponding binary visual displays. Indeed, the results of three experimental studies demonstrate that reasoners associate conjunction descriptions with perceptually dependent stimuli and disjunction descriptions with perceptually independent stimuli, where visual dependency status follows the key Gestalt principles of common fate, proximity, and similarity.
Language is more than a source of information for accessing higher-order conceptual knowledge. Indeed, language may determine how people perceive and interpret visual stimuli. Visual processing in linguistic contexts, for instance, mirrors language processing and happens incrementally, rather than through variously oriented fixations over a particular scene. The consequences of this atypical visual processing are yet to be determined. Here, we investigated the integration of visual and linguistic input during a reasoning task. Participants listened to sentences containing conjunctions or disjunctions (Nancy examined an ant and/or a cloud) and looked at visual scenes containing two pictures that either matched or mismatched the nouns. The degree of match between nouns and pictures (referential anchoring) and between their expected and actual spatial positions (spatial anchoring) affected fixations as well as judgments. We conclude that language induces incremental processing of visual scenes, which in turn becomes susceptible to reasoning errors during the language-meaning verification process.