Abstract: Successful communication requires attention to people and the complex signals they generate. Adapting attention from one moment to the next allows speakers and listeners to produce and perceive cues in real time and to adjust their responses in ways that facilitate an efficient exchange of information (MacDonald, 2013a, 2013b). In order to break into these dynamics of communication and learn from their caregivers, young children have to process and adapt to the rich, multidimensional information embedded in ch…
“…Future research involving pre-verbal infants and nonhuman primates could shed light on this issue. For instance, recent research suggests that toddlers already have biases to focus on similar prosodic markers in child-directed speech and that this may be associated with better learning [62].…”
The success of human cooperation crucially depends on mechanisms enabling individuals to detect unreliability in their conspecifics. Yet, how such epistemic vigilance is achieved from naturalistic sensory inputs remains unclear. Here we show that listeners’ perceptions of the certainty and honesty of other speakers from their speech are based on a common prosodic signature. Using a data-driven method, we separately decode the prosodic features driving listeners’ perceptions of a speaker’s certainty and honesty across pitch, duration and loudness. We find that these two kinds of judgments rely on a common prosodic signature that is perceived independently from individuals’ conceptual knowledge and native language. Finally, we show that listeners extract this prosodic signature automatically, and that this impacts the way they memorize spoken words. These findings shed light on a unique auditory adaptation that enables human listeners to quickly detect and react to unreliability during linguistic interactions.
“…Consistent with these studies, research on auditory emotional words shows that, in adults, shifts towards higher F0 mean and F0 variation positively affect the perceptual salience of these spoken words, engaging attention and, ultimately, favoring their intelligibility (Davis et al., 2017; Dupuis and Pichora-Fuller, 2014; Nencheva et al., 2020). Numerous studies have addressed this topic by focusing on the special speech register that human caregivers use when addressing infants (hereafter infant-directed speech, or IDS).…”
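The two summary features named in the snippet above, F0 mean and F0 variation, are simple statistics over a pitch track. As a minimal, self-contained sketch (not the pipeline used in the cited studies), the following estimates F0 per frame via autocorrelation on a synthetic tone and then summarizes it; the function name, frame sizes, and pitch range are illustrative choices, and real speech analysis would use a robust tracker such as pYIN or Praat's algorithm.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of one frame by finding
    the autocorrelation peak within a plausible pitch-period range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag bounds for fmin..fmax
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220.0 * t)  # 1 s synthetic 220 Hz "voice"

# Frame the signal (25 ms frames, 10 ms hop) and track F0 per frame.
frame_len, hop = int(0.025 * sr), int(0.010 * sr)
f0 = [estimate_f0(voice[i:i + frame_len], sr)
      for i in range(0, len(voice) - frame_len, hop)]

# The two features discussed above: central tendency and variability of F0.
f0_mean, f0_sd = np.mean(f0), np.std(f0)
```

Shifts "towards higher F0 mean and F0 variation" then correspond to larger values of `f0_mean` and `f0_sd` for one utterance relative to another.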
Section: The Human Ability To Identify and Produce Phonemes
The aim of the present work is to investigate the facilitating effect of vocal emotional intonation on the evolution of the following processes involved in language: (a) identifying and producing phonemes, (b) processing compositional rules underlying vocal utterances, and (c) associating vocal utterances with meanings. To this end, firstly, I examine research on the presence of these abilities in animals, and the biologically ancient nature of emotional vocalizations. Secondly, I review research attesting to the facilitating effect of emotional voice intonation on these abilities in humans. Thirdly, building on these studies in animals and humans, and through taking an evolutionary perspective, I provide insights for future empirical work on the facilitating effect of emotional intonation on these three processes in animals and preverbal humans. In this work, I highlight the importance of a comparative approach to investigate language evolution empirically. This review supports Darwin’s hypothesis, according to which the ability to express emotions through voice modulation was a key step in the evolution of spoken language.
“…As stated in the introduction, IDS contains melodic features that make it more similar to songs than, say, adult-directed speech (ADS). The methodological decision to use IDS was made because previous studies that used the IPLP to examine word learning in toddlers have used this speech register (Schmale et al., 2011; Newman et al., 2018), and because IDS has been found to increase attention and guide word learning in toddlers (Nencheva et al., 2021). Nevertheless, it is possible that adding a condition in which spoken sentences are produced in ADS might lead to even better accuracy during this type of learning task, and perhaps even to a difference in performance with the younger participants.…”
This article evaluates a testing procedure for collecting eye-gaze data with toddlers and preschoolers during a word-learning task. We provide feasibility and precision data by comparing performance in an in-person version of the study (conducted under controlled conditions in the lab) with performance in a virtual version in which participants completed the testing procedure from home. Our data support the feasibility of collecting remote eye-gaze data with young children and present remote testing as a viable alternative for conducting developmental language research when in-person interactions with participants cannot take place. Additionally, we use this methodological approach to examine a topic that has gained popularity in recent years—the role of music and songs in vocabulary learning. We provide evidence suggesting that while songs may help increase attention during a particular task, greater attention does not lead to greater learning. In fact, preschoolers show improved word-learning performance for items that were trained in a spoken sentence compared to items that were trained in a song. This means that while songs may be beneficial for increasing child engagement, spoken sentences may be best for supporting deep-level learning of language concepts.