2014
DOI: 10.1121/1.4896464
Age mitigates the correlation between cognitive processing speed and audio-visual asynchrony detection in speech

Abstract: Cognitive processing speed, hearing acuity, and audio-visual (AV) experience have been suggested to influence AV asynchrony detection. Whereas the influence of hearing acuity and AV experience have been explored to some extent, the influence of cognitive processing speed on perceived AV asynchrony has not been directly tested. Therefore, the current study investigates the relationship between cognitive processing speed and AV asynchrony detection in speech and, with hearing acuity controlled, assesses whether …

Cited by 4 publications (4 citation statements)
References 39 publications
“…Although the participants were assessed as normal-hearing and had similar speech perception in quiet, small differences in audiometric thresholds were inferred to have resulted in the age-related differences in speech reception thresholds in the other background conditions. Such age-dependent variations in the influence of noise may alter how available cognitive resources are utilized in AV speech perception, for example, by changing the relative processing of auditory and visual speech cues (e.g., Alm and Behne, 2014). Recent research also indicates that a similar mechanism may be present for vision.…”
Section: Introduction
confidence: 99%
“…For example, SJ tasks use an index calculated from the temporal distribution of the simultaneous-response rate (usually a bell-shaped Gaussian curve). Indexes include: (1) the interval between the two SOA values corresponding to a 75% simultaneous-response rate (Marsicano et al., 2022; Venskus et al., 2021; Zerr et al., 2019); (2) the interval between the SOA value corresponding to the point of subjective simultaneity and the SOA value corresponding to a 75% simultaneous-response rate (just noticeable difference, JND) (Christie et al., 2019; Li et al., 2021); (3) half of the interval between the two SOA values corresponding to a 50% simultaneous-response rate (δ) (Chen et al., 2018, 2021); (4) the standard deviation of the distribution (SD or σ) (Yarrow et al., 2016; Zampini et al., 2005); (5) the interval between the two SOA values corresponding to 50% of the maximum rate (full width at half height, FWHH) (Alm & Behne, 2014; Roseboom & Arnold, 2011; Roseboom et al., 2009); and (6) half of the interval between the two SOA values corresponding to 50% of the maximum rate (half width at half height, HWHH) (Fujisaki & Nishida, 2009). The larger these values, the larger the TBW, and thus the lower the temporal resolution of synchrony perception.…”
Section: Introduction
confidence: 99%
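Several of the indexes listed above follow directly from a Gaussian fit to simultaneity-judgment data. The sketch below is illustrative only: the SOA values and response rates are hypothetical, not data from any of the cited studies, and it computes indexes (1), (4), (5), and (6) from the fitted curve's amplitude and standard deviation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Bell-shaped synchrony-response curve; mu is the point of
    subjective simultaneity (PSS) and amp is the peak response rate."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical SJ data: SOA in ms (audio-lead negative) and the
# proportion of "simultaneous" responses at each SOA.
soa = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.45, 0.15])

(amp, pss, sigma), _ = curve_fit(gaussian, soa, p_sync,
                                 p0=[1.0, 0.0, 100.0])
sigma = abs(sigma)

# Index (4): standard deviation of the fitted distribution (SD / sigma).
sd_index = sigma

# Indexes (5) and (6): width of the curve at 50% of its maximum.
fwhh = 2.0 * sigma * np.sqrt(2.0 * np.log(2.0))  # full width at half height
hwhh = fwhh / 2.0                                # half width at half height

# Index (1): interval between the two SOAs at an absolute 75%
# simultaneous-response rate (only defined when the peak exceeds 0.75).
width_75 = (2.0 * sigma * np.sqrt(2.0 * np.log(amp / 0.75))
            if amp > 0.75 else None)
```

Larger `fwhh`, `hwhh`, or `sd_index` values correspond to a wider temporal binding window, i.e. poorer temporal resolution of synchrony perception.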
“…For this quest, various approaches using different modalities such as visual, audio, and physiological signals have been used in the past [7][8][9][10][11]. However, the two basic modalities primarily used in developing automatic emotion recognition models are audio and visuals; of these, audio is well known for conveying emotion cues [12][13][14][15], since audio signals carry rich emotion cues with lower computational requirements than visual signals [13,16]. Further, detecting emotions from visual data poses greater ethical challenges than from audio signals.…”
Section: Introduction
confidence: 99%