2011
DOI: 10.3389/fpsyg.2011.00376
Preferential Inspection of Recent Real-World Events Over Future Events: Evidence from Eye Tracking during Spoken Sentence Comprehension

Abstract: Eye-tracking findings suggest people prefer to ground their spoken language comprehension by focusing on recently seen events more than anticipating future events: When the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that hadn't yet been acted upon. We examined whether this inspection preference generalizes to real-world events, …

Cited by 18 publications (37 citation statements)
References 52 publications
“…In this situation, listeners rapidly inspected the target of the recent action (e.g., a candelabra that had been polished) in preference to the target of a future polishing action (e.g., polishing crystal glasses, target-condition assignment was counterbalanced, Knoeferle & Crocker, 2007). The recent-event preference replicated with real-world events (Knoeferle, Carminati, Abashidze, & Essig, 2011a) and when the within-experiment frequency of future (relative to recent) actions was increased to 75 (vs. 25) percent (Abashidze, Knoeferle, & Carminati, 2013). …”
Section: Visually Situated Language Comprehension: Methodological Adv… (citation type: mentioning)
confidence: 90%
“…In their study with NBDs, Altmann and Kamide (2007) used few change-of-state verbs in their materials and, moreover, they showed pictures of two different objects, for example, a wine glass and a beer glass, so that no prototypical past picture was included. However, Knoeferle et al. (2011) found that locations of recently seen events attract more eye gazes during verb processing than potential target locations of future events. This finding seems similar to the past-picture preference observed in our study.…”
Section: Correctly Interpreted Time Reference in NBDs versus IWAs (citation type: mentioning)
confidence: 94%
“…Altmann and Kamide thus demonstrated that when healthy speakers of English interpret the time reference of a verb, they direct their gaze towards an object that is anticipated as the grammatical theme. With real-world events, too (participants watching events acted out by the experimenter), it has been found that time reference guides anticipatory eye movements to the location of the grammatical theme; however, gaze patterns differed between sentences referring to the past versus the future (Knoeferle et al., 2011). Participants listened to German sentences in either simple past with past time reference or simple present with future time reference as in (1) while inspecting a scene with two objects.…”
Section: Previous Eye-tracking Studies Manipulating Time Reference (citation type: mentioning)
confidence: 98%
“…Overall, findings show that visual context information exerts a very rapid effect on language processing and influences, for instance, the resolution of syntactically ambiguous sentences [1,2]. Findings also show that information such as an object's size, color [3], or shape [4], depicted clipart events [2], real-world action events [5], action affordances [6], and the spatial location of objects [7] is rapidly integrated during sentence comprehension and can affect a listener's visual attention within a few hundred milliseconds (for a recent review, see [8]). …”
Section: Introduction (citation type: mentioning)
confidence: 81%