This study investigated pitch and communicative intent in mothers' infant‐directed speech to their infants at birth and at 3, 6, 9, and 12 months. Audio recordings of mothers (6 with female and 6 with male infants) talking to another adult and to their infant at the 5 ages were low‐pass filtered and rated by 60 adults on 5 scales (Positive or Negative Affect, Express Affection, Encourage Attention, Comfort or Soothe, and Direct Behavior). Mean fundamental frequency (F0) and pitch range of utterances were also measured. Utterances associated with positive affect tended to peak at 6 and 12 months, whereas more directive utterances peaked at 9 months. Mean F0 followed the age trend for affective utterances, and pitch range followed the trend for directive utterances. The results suggest that mothers' speech patterns reflect, complement, and perhaps facilitate infant development.
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both the percentage of visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese- and English-language 6-year-olds, and increased with age for English-language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English-language adults and older children processed visual speech information relatively faster than auditory information, whereas no such inter-modal differences were found in the Japanese participants' reaction times.
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information.
Developmental dyslexia is a multifaceted disorder of learning primarily manifested by difficulties in reading, spelling, and phonological processing. Neural studies suggest that phonological difficulties may reflect impairments in fundamental cortical oscillatory mechanisms. Here we examine cortical mechanisms in children (6-12 years of age) with or without dyslexia (utilising both age- and reading-level-matched controls) using electroencephalography (EEG). EEG data were recorded as participants listened to an audio-story. Novel electrophysiological measures of phonemic processing were derived by quantifying how well the EEG responses tracked phonetic features of speech. Our results provide, for the first time, evidence for impaired low-frequency cortical tracking of phonetic features during natural speech perception in dyslexia. Atypical phonological tracking was focused on the right hemisphere, and correlated with traditional psychometric measures of phonological skills used in diagnostic dyslexia assessments. Accordingly, the novel indices developed here may provide objective metrics for investigating language development and language impairment across languages.