2016
DOI: 10.2174/1874350101609010129

Priming Younger and Older Adults’ Sentence Comprehension: Insights from Dynamic Emotional Facial Expressions and Pupil Size Measures

Abstract: Background: Prior visual-world research has demonstrated that emotional priming of spoken sentence processing is rapidly modulated by age. Older and younger participants saw two photographs, one of a positive and one of a negative event, side by side, and listened to a spoken sentence about one of these events. Older adults’ fixations to the mentioned (positive) event were enhanced when the still photograph of a previously inspected positive-valence speaker face was (vs. wasn’t) emotionally congruent with the event/senten…

Cited by 7 publications (9 citation statements)
References: 65 publications
“…In visual inspection behavior, younger adults showed a preference to inspect negative pictures and faces more than positive ones (i.e., the so-called “negativity bias”); older adults showed a preference toward positive pictures and faces (i.e., the so-called “positivity bias”; see, e.g., Socioemotional Selectivity Theory: Carstensen et al., 2003; Isaacowitz et al., 2006). In a visual world eye-tracking study, Carminati and Knoeferle (2013, 2016) and Münster et al. (2014) asked whether this bias generalizes to language comprehension: Younger and older adults inspected a positive or negative emotional prime face of a speaker, followed by a negatively and a positively valenced event photograph (presented side by side) and a positively or negatively congruent sentence describing one of the two events (Carminati and Knoeferle, 2013, 2016; Münster et al., 2014). Older and younger participants fixated the corresponding emotionally congruent event photograph more when language and the speaker’s prime face matched (than mismatched) their emotional bias.…”
Section: Empirical Evidence
confidence: 99%
“…However, even subtle aspects of language style can imperceptibly influence communication and social outcomes such as relationship success (Niederhoffer and Pennebaker, 2002), lending some credence to postulating a link between social behavior and core comprehension processes as well. Recent empirical evidence, moreover, suggests that a socially interpreted context (as provided, e.g., by the speaker’s facial expression or voice) can modulate not just relationship success or tipping behavior but even real-time comprehension processes (e.g., Van Berkum et al., 2008; Carminati and Knoeferle, 2013, 2016).…”
Section: Introduction
confidence: 99%
“…For example, a meta-analysis by Greenwald et al. (2009) revealed that there is a considerable body of research showing an impact of old and recent experiences on both explicit and implicit measures, especially with regard to the domains of stereotyping and prejudice. Furthermore, the effects of comprehender characteristics such as age, education level, and knowledge of foreign languages were successfully detected in language comprehension tasks using more implicit measurement methods, such as eye tracking (Huettig et al., 2011; Mishra et al., 2012; Carminati and Knoeferle, 2013, 2016; Ito et al., 2018). The second limitation is that the paradigms used in the present research do not provide a strong test between a propositional network view (e.g., Bower and Rinck, 2001) and a situation model view (e.g., Radvansky et al., 1998) that may explain the nature of the mental representation used to perform the tasks.…”
Section: Discussion
confidence: 99%
“…In contrast, recording eye movements provides moment-to-moment reading time measures, which can be used to understand what influence the manipulated variable has on individuals’ reading behavior, for example whether any anticipatory processes are involved or whether readers struggle to comprehend certain words/sentences, as indicated by regressions or longer reading times [Rayner, Chace, Slattery, & Ashby, 2006]. More recently, a few studies have applied online measures, such as eye-tracking and event-related brain potentials (ERPs), to investigate how readers keep track of temporal and emotional shifts in stories, and have demonstrated that readers are sensitive to mismatches between a character’s expected and described emotional states [Carminati & Knoeferle, 2013, 2016; Komeda & Kusumi, 2006; Leuthold, Filik, Murphy, & Mackenzie, 2012; Münster, Carminati, & Knoeferle, 2014; Ralph-Nearman & Filik, 2018; Rinck & Bower, 2000; Vega, 1996; Zwaan, 1996]. Moreover, some researchers have examined the online processes underlying sarcasm comprehension using eye-tracking [e.g., Au-Yeung, Kaakinen, Liversedge, & Benson, 2015; Deliens, Antoniou, Clin, Ostashchenko, & Kissine, 2018; Filik, Howman, Ralph-Nearman, & Giora, 2018; Filik, Leuthold, Wallington, & Page, 2014; Filik & Moxey, 2010; Kaakinen, Olkoniemi, Kinnari, & Hyönä, 2014; Olkoniemi, Ranta, & Kaakinen, 2016; Olkoniemi, Johander, & Kaakinen, 2019; Olkoniemi, Strömberg, & Kaakinen, 2019; Țurcan & Filik, 2016, 2017].…”
Section: Introduction
confidence: 99%