2013
DOI: 10.3389/fpsyg.2013.00735

Acoustic cues for the recognition of self-voice and other-voice

Abstract: Self-recognition, being indispensable for successful social communication, has become a major focus in current social neuroscience. The physical aspects of the self are most typically manifested in the face and voice. Compared with the wealth of studies on self-face recognition, self-voice recognition (SVR) has not gained much attention. Converging evidence has suggested that the fundamental frequency (F0) and formant structures serve as the key acoustic cues for other-voice recognition (OVR). However, little …

Cited by 36 publications (48 citation statements); references 44 publications.
“…We found no association between a voice's acoustic properties and the amplitudes of both the MMN and P3a to an SGV, yet earlier evidence had demonstrated that both F0 and formant frequencies are critical acoustic cues underlying successful voice identity recognition (e.g., Latinus et al., 2013; Xu et al., 2013). As expected, this lack of association was plausibly due to the fact that our ERP analysis controlled for the physical differences between the voice stimuli by using a “like from like” subtraction approach.…”
Section: Discussion (contrasting)
confidence: 53%
“…Even in these challenging situations, speakers automatically and effortlessly identify the voice they are hearing as their own (Keenan, Falk, & Gallup, 2003; Xu, Homae, Hashimoto, & Hagiwara, 2013).…”
mentioning
confidence: 99%
“…The ability to discriminate between individuals solely from their vocalizations is found in many species (e.g., mouse‐eared bats, bottlenose dolphins) (Janik, Sayigh, & Wells, ; Yovel, Melcon, Franz, Denzinger, & Schnitzler, ). In humans, the recognition of individuals from voice alone is determined by a multidimensional suite of acoustic characteristics unique to the talker, including spectral envelope and its change over time, fluctuation in fundamental frequency and amplitude, moments of periodicity and aperiodicity, and long‐term averaged spectrum (e.g., Bricker & Pruzansky, ; Fant, ; Hecker, ; Hollien & Klepper, ; Xu, Homae, Hashimoto, & Hagiwara, ). As it is clear that the speech signal carries both linguistic (e.g., phonological, lexical) information (Allopenna, Magnuson, & Tanenhaus, ; Gaskell & Marslen‐Wilson, ) as well as acoustic correlates to talker identity (e.g., Creel, Aslin, & Tanenhaus, ; Nygaard, Sommers, & Pisoni, ), a growing psycholinguistic literature seeks to understand the interaction between levels of abstraction of the speech signal and the cognitive processes involved in speech perception, word recognition, and word learning.…”
Section: Words Get In the Way: Linguistic Effects On Talker Discrimination (mentioning)
confidence: 99%
“…Although recordings of self-voice can produce a feeling of eeriness for listeners as compared to when spoken (Kimura et al., 2018), people nevertheless recognize recorded voice samples as their own (Nakamura et al., 2001; Kaplan et al., 2008; Rosa et al., 2008; Hughes & Nicholson, 2010; Xu et al., 2013; Candini et al., 2014; Pinheiro et al., 2016a, 2016b, 2019). However, in ambiguous conditions (i.e.…
Section: Variability In Self-monitoring Thresholds (mentioning)
confidence: 99%