2020
DOI: 10.1101/2020.03.03.975409
Preprint

Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar

Abstract: People are much better at understanding speech when it is spoken by a familiar talker—such as a friend or partner—than when the interlocutor is unfamiliar. This provides an opportunity to examine the substrates of intelligibility and familiarity, independent of acoustics. Is the familiarity effect evident as early as primary auditory cortex, or only at later processing stages? Here, we presented sentences spoken by naturally familiar talkers (the participant’s friend or partner) and unfamiliar talkers (the fri…


Cited by 2 publications (2 citation statements)
References 55 publications (52 reference statements)
“…4a and b, respectively). As in previous studies 67,84,85, we used subsections to test our hypotheses about representational similarity across localizer and test runs. Specifically, to evaluate how representational dissimilarity depends on prior strength, we weighted the hypothesized similarity (1) with the three graded levels of expectation strength (i.e., 1, 0.5, 0), (2) the highly expected face (i.e., 1, 1, 0), and (3) the face with the low and high probability (i.e., 0, 1, 0) for the expected faces (Fig.…”
Section: MRI Data Acquisition
Confidence: 99%
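
The quoted methods passage describes weighting hypothesized similarity models by graded expectation strength in a representational similarity analysis (RSA). The sketch below illustrates that general idea only: the weight vectors (1, 0.5, 0), (1, 1, 0), and (0, 1, 0) are taken from the quote, but the function names, the absolute-difference construction of the model matrices, the Spearman comparison, and the placeholder neural data are assumptions for illustration, not the cited authors' implementation.

```python
# Illustrative sketch of comparing weighted model RDMs with a neural RDM.
# Weight vectors come from the quoted passage; everything else is assumed.
import numpy as np
from scipy.stats import spearmanr

def model_rdm(weights):
    """Build a model representational dissimilarity matrix from per-condition weights.

    Conditions with similar weights are predicted to be similar (low dissimilarity),
    so dissimilarity is taken as the absolute difference of weights.
    """
    w = np.asarray(weights, dtype=float)
    return np.abs(w[:, None] - w[None, :])

def rdm_correlation(neural_rdm, model):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    rho, _ = spearmanr(neural_rdm[iu], model[iu])
    return rho

# Hypothesized weightings from the quoted passage:
graded   = model_rdm([1.0, 0.5, 0.0])   # graded expectation strength
expected = model_rdm([1.0, 1.0, 0.0])   # highly expected face vs. the rest
low_high = model_rdm([0.0, 1.0, 0.0])   # low- vs. high-probability face

# Placeholder neural RDM for three conditions (illustration only).
neural = np.array([[0.0, 0.4, 0.9],
                   [0.4, 0.0, 0.7],
                   [0.9, 0.7, 0.0]])

for name, m in [("graded", graded), ("expected", expected), ("low/high", low_high)]:
    print(f"{name}: rho = {rdm_correlation(neural, m):.2f}")
```

In this kind of analysis, the model whose weighting scheme correlates most strongly with the measured neural dissimilarities is taken as the best account of how expectation strength shapes the representation.
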
“…The auditory system is involved in a number of crucial sensory functions, including speech processing (Hamilton et al 2021), (Matsumoto et al 2011), (Fontolan et al 2014), (Gourévitch et al 2008), sound localization (Andéol et al 2011), (Carlile, Martin, and McAnally 2005), (Ahveninen, Kopčo, and Jääskeläinen 2014), pitch discrimination (Tramo, Shah, and Braida 2002), (Tramo et al 2005), (Dykstra et al 2012), (Hyde, Peretz, and Zatorre 2008), and voice recognition (Latinus et al 2013), (Holmes and Johnsrude 2021). Aberrations along this pathway can result in a wide variety of pathologies.…”
Section: Introduction
Confidence: 99%