An unsupervised neural network model inductively acquires the ability to distinguish categorically the stop consonants of English, in a manner consistent with prenatal and early postnatal auditory experience, and without reference to any specialized knowledge of linguistic structure or the properties of speech. This argues against the common assumption that linguistic knowledge, and speech perception in particular, cannot be learned and must therefore be innately specified.

A widely held position, associated with Chomsky, is that the "core" features of human linguistic ability are innate (1, 2, 7-10). In this view, linguistic development is not a learning process, but a process of selecting the discriminations useful to the maturing infant and forgetting those that are not useful (11). Supporting this belief is the discovery that infants, unlike adults, can discriminate phonetic units of languages they have never heard (12, 13). It is believed that such complex, cognitive behaviors of infants cannot arise from prenatal, experience-dependent modification of neurons.

However, such modifications have been shown to play a critical role in the development of neuronal selectivity. In visual cortex, for example, experience-dependent development proceeds rapidly from the onset of visual function through the so-called "critical period" and is strongly dependent on the visual environment in which the animal is raised. In the following study, we show that the prenatal auditory environment, combined with a model of neuronal modification similar to that proposed for visual cortex, can account for the acquisition of some basic speech contrasts as well as categorical perception of speech sounds.

The onset of hearing in humans begins as early as the 24th week of gestation (14, 15), raising the possibility that a lengthy "critical period" for auditory development may take place during the last several months of fetal life (16).
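To make the notion of experience-dependent neuronal modification concrete, the following is a minimal sketch of an unsupervised synaptic modification rule of the kind developed for visual cortex: a BCM-style rule in which a sliding threshold separates strengthening from weakening of synapses. This is an illustration only; the single linear neuron, the parameter values, and the input patterns are assumptions for this sketch, not the paper's actual model or stimuli.

```python
import numpy as np

# Illustrative BCM-style unsupervised learning rule (assumed formulation).
# A single linear neuron sees two fixed input patterns standing in for
# two classes of sounds; with experience it typically becomes selective,
# responding strongly to one pattern and weakly to the other.

rng = np.random.default_rng(0)

n_inputs = 20        # input dimensionality (e.g. spectral channels) -- assumed
eta = 0.002          # learning rate -- assumed
tau_theta = 50.0     # time constant of the sliding threshold -- assumed

w = rng.uniform(0.0, 0.2, n_inputs)   # synaptic weights, small positive start
theta = 1.0                           # sliding modification threshold

# Two fixed input patterns standing in for two classes of sounds.
patterns = [rng.uniform(0.0, 1.0, n_inputs) for _ in range(2)]

for step in range(20000):
    d = patterns[step % 2]       # present the two patterns alternately
    c = float(w @ d)             # postsynaptic activity of a linear neuron
    # BCM-style update: dw ~ c (c - theta) d.  Activity above theta
    # strengthens the active synapses, activity below theta weakens them,
    # and theta itself tracks a running average of c^2.
    w += eta * c * (c - theta) * d
    theta += (c * c - theta) / tau_theta

# Responses to the two patterns after "experience": the neuron has
# typically become selective for one of them.
responses = sorted(float(w @ d) for d in patterns)
print(responses)
```

The key design feature is the sliding threshold: because it rises with the neuron's own recent activity, no teacher or labeled category is needed for selectivity to emerge, which is what makes such a rule a candidate mechanism for the prenatal, unsupervised acquisition discussed in the text.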
Clearly, auditory experience in immature animals can alter frequency tuning (17) and spatial mapping (18) in auditory centers of the brain, and cognitive studies of human infants have shown that both prenatal (3) and postnatal (4) experiences may alter aspects of human speech perception prior to language acquisition.

The fetus develops in an acoustically rich environment that includes the mother's voice. Low-frequency sounds dominate (19), whereas pure tones with higher frequencies (from external sources) are more attenuated. A certain amount of masking of low-frequency sounds is to be expected, though, due to the presence of low-frequency intrauterine noise, and tests of fetal hearing commonly use frequencies ranging from 500 Hz to 4 kHz. Low-frequency, broad-band noises are expected to be most efficient in producing responses in such tests (20). No adequate characterization of the transfer functions of the fetal middle ear exists (21). The auditory periphery is characterized by broad bandpass tuning and poor phase-locking abilities during early mammalian development (22), though the "circuits" passing encoded information to auditory cortex appear to develop as fu...