2013
DOI: 10.3389/fpsyg.2013.00359

Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy

Abstract: This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation point (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented in an audiovisual instead of an auditory-only manner. The results showed that noise impeded the identification of consonants and wo…

Cited by 49 publications (109 citation statements)
References 62 publications
“…However, Allen, Baddeley & Hitch (2006) concluded from five behavioural experiments that although the presence of visual cues initially demands attention comparable to unimodal stimuli, AV integration does not require additional attentional resources. Moradi, Lidestam and Rönnberg (2013) found that, for young adults with normal hearing, AV speech recognition in noise is faster, more accurate and less effortful than auditory-only speech recognition, and inferred that AV presentation taxes cognitive resources to a lesser extent by reducing working memory load. Yovel and Belin (2013) suggested that despite sensory differences, the neurocognitive mechanisms engaged by perceiving faces and voices are highly similar, facilitating the integration of visual and speech information.…”
Section: Visual Cues
mentioning
confidence: 99%
“…In order to accomplish this with standard computer equipment, Matlab (2009b, 32-bit) and Psychophysics Toolbox 3 (Brainard, 1997; Pelli, 1997; Kleiner, Brainard, & Pelli, 2007) can be used. In Moradi et al. (2013), 120 fps was used. However, the computer's processing speed determines the spatial resolution that can be used for video presentation at the desired rate.…”
Section: A Method for Presenting Video Recordings as Fast as a Screen
mentioning
confidence: 99%
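The 120 fps playback described in that statement hinges on drawing exactly one pre-loaded texture per display refresh. The following Matlab / Psychophysics Toolbox 3 sketch is an illustration only, not the cited authors' script; the frames variable, window handle, and timing variables are assumptions for the example.

% Illustrative sketch only (assumed names, not code from Moradi et al., 2013):
% frame-locked playback of pre-loaded video frames at the refresh rate of a
% 120 Hz display using Matlab and Psychophysics Toolbox 3.
AssertOpenGL;                                % require the OpenGL-based Screen()
screenId = max(Screen('Screens'));
win = Screen('OpenWindow', screenId, 0);     % full-screen window on the stimulus display
ifi = Screen('GetFlipInterval', win);        % measured frame duration (~8.33 ms at 120 Hz)

% 'frames' is assumed to be a cell array of image matrices for one gated stimulus.
nFrames = numel(frames);
tex = zeros(1, nFrames);
for i = 1:nFrames
    tex(i) = Screen('MakeTexture', win, frames{i});   % convert frames to textures before playback
end

Priority(MaxPriority(win));                  % raise priority for the timing-critical loop
vbl = Screen('Flip', win);                   % synchronize to the vertical retrace
for i = 1:nFrames
    Screen('DrawTexture', win, tex(i));
    % Request each flip half a frame after the previous retrace so that exactly
    % one new frame is shown per refresh cycle; vbl returns the actual flip time.
    vbl = Screen('Flip', win, vbl + 0.5 * ifi);
end
Priority(0);

Screen('Close', tex);                        % release textures
sca;                                         % close all Psychtoolbox windows

At 120 fps the drawing loop has under 8.33 ms per frame, which is why the cited statement notes that the computer's processing speed limits the usable spatial resolution.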
“…Methods for accomplishing this comprise recording audio and video separately with an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli at the desired frame rate on a CRT display using Matlab and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, & Rönnberg, 2013) are presented as an example of an implementation of playback at 120 fps. Keywords: psychophysics, frame rate, audiovisual, synchronization, temporal resolution…”
mentioning
confidence: 99%
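To make the synchronized-presentation step concrete, here is a hedged sketch (assumed variable names, not the published implementation) of starting the edited sound track at the predicted onset of the first video frame. It reuses win, ifi, and tex from the playback sketch above; audioData (channels x samples) and fs are assumed to come from the separately recorded, already-aligned audio.

% Illustrative sketch only: schedule audio onset to coincide with the retrace
% on which the first video frame is shown, using PsychPortAudio.
InitializePsychSound(1);                                   % request low-latency audio mode
pahandle = PsychPortAudio('Open', [], 1, 1, fs, size(audioData, 1));
PsychPortAudio('FillBuffer', pahandle, audioData);

vbl = Screen('Flip', win);                                 % reference retrace
audioOnset = vbl + ifi;                                    % predicted time of the next retrace
PsychPortAudio('Start', pahandle, 1, audioOnset, 0);       % start playback at that time, do not block

Screen('DrawTexture', win, tex(1));
Screen('Flip', win, vbl + 0.5 * ifi);                      % first video frame lands on the same retrace
PsychPortAudio('Stop', pahandle, 1);                       % wait for the sound to finish, then stop
PsychPortAudio('Close', pahandle);

Scheduling the audio start against a predicted retrace time, rather than starting it manually and flipping afterwards, keeps the audiovisual onset asynchrony on the order of one refresh interval or less.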