2013
DOI: 10.1016/j.neuropsychologia.2012.12.002

The contents of predictions in sentence comprehension: Activation of the shape of objects before they are referred to

Abstract: When comprehending concrete words, listeners and readers can activate specific visual information such as the shape of the words’ referents. In two experiments we examined whether such information can be activated in an anticipatory fashion. In Experiment 1, listeners’ eye movements were tracked while they were listening to sentences that were predictive of a specific critical word (e.g., “moon” in “In 1969 Neil Armstrong was the first man to set foot on the moon”). 500 ms before the acoustic onset of the crit…

Cited by 75 publications (95 citation statements)
References 47 publications (76 reference statements)
“…There is also evidence that visual properties of an object are anticipated (e.g., shape; Rommers, Meyer, Praamstra, & Huettig, 2013). Generating predictions is considered crucial to the language processing system, leading to faster and more efficient mental operations (e.g., Farmer, Brown, & Tanenhaus, 2013; Fine, Jaeger, Farmer, & Qian, 2013; Hale, 2003; Levy, 2008).…”
mentioning (confidence: 99%)
“…To explain this discrepancy, Experiment 1c suggests that the composition advantage is due to predictive processes that are outside the purview of models such as DORA and LISA. In particular, participants appear able to rapidly transform entirely novel compositional concepts into accurate predictions about visual stimuli (see also Rommers et al., 2013; Zwaan et al., 2002): When predictive strength is matched between one-word and two-word stimuli, participants perform in a manner consistent with the predictions of models such as DORA and LISA.…”
Section: Discussion; mentioning (confidence: 96%)
“…This difference could perhaps explain the composition advantage: Participants could more easily predict the correct picture for pink tree (pink trees at one of three orientations) than for tree (trees of six colours at one of three orientations). This explanation is given some prima facie plausibility by recent demonstrations that participants can rapidly translate linguistic information into predictions about the likely visual form of a referent (Rommers, Meyer, Praamstra, & Huettig, 2013; Zwaan, Stanfield, & Yaxley, 2002). However, there is also an important reason for doubting it: there was no need for participants in our one-word condition to even attend to the color of the subsequent picture, and it is known that the shape of an object can be processed separately from its color (Garner & Felfoldy, 1970).…”
Section: Experiments 1c; mentioning (confidence: 93%)
“…In the crucial displays, some objects in the display have a relationship with a specific word in the spoken utterance. Results show that people spend more time fixating related than unrelated objects, whether this relationship is semantic (e.g., Huettig & Altmann, 2005; Yee & Sedivy, 2006; Yee, Overton, & Thompson-Schill, 2009) or visual in nature (e.g., Dahan & Tanenhaus, 2005; Dunabeitia, Aviles, Afonso, Scheepers, & Carreiras, 2009; Huettig & Altmann, 2007; Huettig & Altmann, 2011; Rommers, Meyer, & Huettig, 2015; Rommers, Meyer, Praamstra, & Huettig, 2013). In de Groot, Huettig, and Olivers (2016), we directly compared visual and semantic biases in the visual search and the visual world paradigm.…”
Section: Introduction; mentioning (confidence: 99%)