This study investigates the effect of age and gender on the internal structure, cross-category distance, and discriminability of phonemic categories for two contrasts varying in fricative place of articulation (/s/-/ʃ/) and stop voicing (/b/-/p/) in word-initial tokens spoken by adults and normally developing children aged 9–14 yr. Substantial between- and within-talker variability was observed, with 16% of speakers exhibiting some degree of overlap between phonemic categories, a possible contributor to the range of talker intelligibility reported in the literature. Females of all ages produced more distant, and thus more discriminable, categories than males, although gender-marking for fricative between-category distance did not emerge until approximately 11 yr of age. Children produced more distant yet also much more dispersed categories than adults, with discriminability increasing with age, such that by age 13, children's categories were no less discriminable than those of adults. However, children's ages did not predict category distance or dispersion, indicating that convergence on adult-like category structure must occur later in adolescence.
Recent developments in the measurement of spontaneous mental state understanding, employing eye movements instead of verbal responses, have opened new opportunities for understanding the developmental origins of “mind-reading” impairments frequently described in autism spectrum disorders (ASDs). Our main aim was to characterize the relationship between mental state understanding and the broader autism phenotype early in childhood. An eye-tracker was used to capture anticipatory looking as a measure of false belief attribution in 3-year-old children with a family history of autism (at-risk participants, n = 47) and controls (control participants, n = 39). Unlike controls, the at-risk group, independent of their clinical outcome (ASD, broader autism phenotype, or typically developing), performed at chance. Performance was not related to children’s verbal or general IQ, nor was it explained by children “missing out” on crucial information, as shown by an analysis of visual scanning during the task. We conclude that difficulties with using mental state understanding for action prediction may be an endophenotype of autism spectrum disorders.
Further developments in speech production take place during later childhood. Children use clear speech strategies to benefit an interlocutor facing intelligibility problems but may not be able to attune these strategies to the same degree as adults.
This study investigated (a) the acoustic-phonetic characteristics of spontaneous speech produced by talkers aged 9–14 years in an interactive (diapix) task with an interlocutor of the same age and gender (NB condition) and (b) the adaptations these talkers made to clarify their speech when speech intelligibility was artificially degraded for their interlocutor (VOC condition). Recordings were made for 96 child talkers (50 F, 46 M); the adult reference values came from the LUCID corpus recorded under the same conditions [Baker and Hazan, J. Acoust. Soc. Am. 130, 2139–2152 (2011)]. Articulation rate, pause frequency, fundamental frequency, vowel area, and mean intensity (1–3 kHz range) were analyzed to establish whether they had reached adult-like values and whether young talkers used clear speech strategies similar to those of adults in difficult communicative situations. In the NB condition, children (including the 13–14 year group) differed from adults in terms of their articulation rate, vowel area, median F0, and intensity. Child talkers made adaptations to their speech in the VOC condition, but adults and children differed in their use of F0 range, vowel hyperarticulation, and pause frequency as clear speech strategies. This suggests that further developments in speech production take place during later adolescence. [Work supported by ESRC.]
Listeners must cope with a great deal of variability in the speech signal; theories of speech perception must therefore account for this variability, which arises from a number of sources, including variation between accents. It is well known that there is a processing cost when listening to speech in an accent other than one's own, but recent work has suggested that this cost is reduced when listening to a familiar accent widely represented in the media, and/or when short amounts of exposure to an accent are provided. Little is known, however, about how these factors (long-term familiarity and short-term familiarization with an accent) interact. The current study tested this interaction by playing listeners difficult-to-segment sentences in noise, before and after a familiarization period during which the same sentences were heard in the clear, allowing us to manipulate short-term familiarization. Listeners were speakers of either Glasgow English or Standard Southern British English, and they listened to speech in either their own or the other accent, thereby allowing us to manipulate long-term familiarity. Results suggest that both long-term familiarity and short-term familiarization mitigate the perceptual processing costs of listening to an accent that is not one's own, but seem not to compensate for them entirely, even when the accent is widely heard in the media.