We report six experiments on the learnability of four non-adjacent phonotactic constraints that differ in their attested frequency and phonetic conditioning factors: liquid harmony, liquid disharmony, backness harmony, and backness disharmony. Our results suggest that such phonotactic constraints can be implicitly learned from brief experience and that the learnability of a phonological grammar may be independent of its attested frequency and phonetic basis.
Pattern playback systems were instrumental in speech perception research [e.g., Cooper et al. (1951)] and can be valuable for pedagogical purposes [e.g., Arai et al. (2006)]. They could be utilized further if they were integrated with other speech processing software written in a common programming language. In response, I present an open-source digital pattern playback system implemented in the Python programming language. The software allows the user to provide an image of a magnitude spectrogram as input, either by selecting an image file (e.g., PNG, JPG) or by drawing one directly on a blank canvas with a pointing device (e.g., computer mouse, stylus, fingertip). It first translates the pixel values of the image into an array of magnitude spectral coefficients and then applies the inverse short-time Fourier transform, assuming zero phase, to convert the array into a waveform. Users can readily manipulate basic conversion parameters (e.g., sampling rate, frame length) and augment the process with various signal processing methods available in Python libraries such as SciPy and librosa. The source code is available for download and will be maintained on the author's GitHub repository and personal website.
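The image-to-waveform conversion described above can be sketched as follows. This is a minimal illustration using SciPy's inverse STFT, not the released code; the function name, the pixel-to-magnitude scaling, and the parameter defaults are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import istft

def playback_from_spectrogram(pixels, sr=16000, frame_len=512):
    """Convert a grayscale spectrogram image (2-D array, 0-255,
    dark = low energy, low frequencies at the bottom) into a
    waveform via the inverse STFT with zero phase.
    Hypothetical sketch, not the author's released software."""
    # Flip vertically so row 0 corresponds to 0 Hz, and scale
    # pixel values to magnitude spectral coefficients in [0, 1].
    mags = np.flipud(np.asarray(pixels, dtype=float)) / 255.0
    # Zero phase: treat the magnitudes as real-valued complex spectra.
    spec = mags.astype(complex)
    # Inverse STFT with a Hann window and 50% overlap (overlap-add).
    _, wav = istft(spec, fs=sr, nperseg=frame_len,
                   noverlap=frame_len // 2)
    return wav
```

For a one-sided spectrum with `frame_len = 512`, the image must have `512 // 2 + 1 = 257` rows; resampling the input image to that height would be one way to meet this constraint.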
Bilingual speakers sometimes change their pitch and voice quality when they switch from one language to the other. For example, when speaking in L2 rather than L1, German learners of French pronounce vowels with less adduction of the vocal folds (Pützer et al., 2016), and Korean learners of English with a lower pitch (Cheng, 2020). Here, I present a corpus study that suggests that the extent to which L2 learners of English change their pitch and voice quality may depend on how similar their L1 is to English. I extracted 68 211 vowels—51 857 in L1 and 16 354 in L2—from 1617 speakers with 21 different L1 backgrounds (including English) in the CSLU: 22 Languages Corpus and measured F0, harmonics-to-noise ratio (HNR), and H1–H2 for each vowel. I then computed two cluster distances for each L1 and for each measure: (1) vowels from the native English speakers versus L1 vowels from the learners and (2) L1 vowels versus L2 vowels from the learners. I found strong correlations between (1) and (2): r = 0.416 for F0 (p = 0.068), r = 0.531 for HNR (p = 0.016), and r = 0.374 for H1–H2 (p = 0.105).
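The comparison between the two cluster distances can be sketched as follows. The distance metric (absolute difference of group means) and the synthetic per-L1 data are illustrative assumptions, not the study's actual method or measurements.

```python
import numpy as np
from scipy.stats import pearsonr

def cluster_distance(a, b):
    """Distance between two clusters of vowel measurements; the
    absolute difference of means is an assumed stand-in for the
    study's actual distance metric."""
    return abs(np.mean(a) - np.mean(b))

rng = np.random.default_rng(1)
d_native_vs_l1 = []  # (1) native English vs. learners' L1 vowels
d_l1_vs_l2 = []      # (2) learners' L1 vs. learners' L2 vowels

for _ in range(20):  # one entry per non-English L1 (illustrative)
    offset = rng.normal(0.0, 30.0)            # how far this L1 sits from English
    native = rng.normal(200.0, 20.0, 100)     # e.g., F0 in Hz
    l1 = rng.normal(200.0 + offset, 20.0, 100)
    l2 = rng.normal(200.0 + 0.5 * offset, 20.0, 100)  # partial shift toward English
    d_native_vs_l1.append(cluster_distance(native, l1))
    d_l1_vs_l2.append(cluster_distance(l1, l2))

# Pearson correlation between the two per-L1 distance profiles.
r, p = pearsonr(d_native_vs_l1, d_l1_vs_l2)
```

In the study, one such correlation is computed per acoustic measure (F0, HNR, H1–H2), each over the set of L1 backgrounds.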
Recent studies on the retuning of phonetic categories by lexically guided perceptual learning suggest an inverse relationship between the inherent acoustic variability and the malleability of phonetic categories—the greater the variability, the smaller the retuning effect. However, it remains to be investigated which perceptual processing mechanisms are responsible for the observed relationship. Extending our previous study (Kataoka and Koo, 2017), which compared the degree of malleability between [u] (more variable) and [i] (less variable), we not only compared the size of the retuning effect between the two vowels but also examined how listeners judge the category goodness of synthesized stimuli from an [i]–[u] continuum. Our subjects (1) showed signs of category retuning for [i] but not for [u] and (2) judged the goodness of stimuli in a more gradient manner, and took longer to do so, when asked to judge with reference to [u] than to [i]. The results suggest that the two vowels differ not only in their acoustic variability but also in their internal structure, and that the relative difficulty of resolving the input speech signal with reference to a category such as [u] might be one reason that category is less malleable than a less variable category such as [i].
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.