A long-term training paradigm in lipreading was used to test the fuzzy logical model of perception (FLMP). This model has been used successfully to describe the joint contribution of audible and visible speech in bimodal speech perception. Tests of the model were extended in the present experiment to include the prediction of confusion matrices, as well as performance at several different levels of skill. The predictions of the FLMP were contrasted with the predictions of a prelabeling integration model (PRLM). Subjects were taught to lipread 22 initial consonants in three different vowel contexts. Training involved a variety of discrimination and identification lessons with the consonant-vowel syllables. Repeated testing was given on syllables, words, and sentences. The test items were presented visually, auditorily, and bimodally, at normal rate or three times normal rate. The subjects improved in their lipreading ability across all three types of test items. Replicating previous results, the present study illustrates that substantial gains in lipreading performance are possible. Relative to the PRLM, the FLMP gave a better description of the confusion matrices at both the beginning and the end of practice. One new finding from the present study is that the FLMP can account for the gains in bimodal speech perception as subjects improve their lipreading and listening abilities.

In face-to-face communication, visible speech contributes to speech perception. As the signal-to-noise ratio of the speech signal decreases, the benefits of viewing the talker increase (Dodd, 1977; Erber, 1969; Hutton, 1959; Neely, 1956; O'Neill, 1954; Sumby & Pollack, 1954). Even when auditory speech is intelligible, visual information from the talker's face can influence speech perception (Massaro & Cohen, 1983; McGurk & MacDonald, 1976).
Bimodal speech perception can be characterized as a process in which the auditory and visual sources each provide continuous information that is combined or integrated to achieve an overall goodness of match with each possible alternative. The perceptual judgment is determined by the relative goodness of match of each of the relevant alternatives (Massaro, 1987; Summerfield, 1979). The percept that emerges from this processing reflects the contribution of both sources of information. Given an auditory /da/ and a visual /ba/, for example, the perceiver often categorizes the event as /bda/. This experience is a reasonable outcome, given the use of both sources, because an auditory /da/ is similar to auditory /bda/, and visual /ba/ is similar to visual /bda/. A relatively close match on both sources is a more optimal de...

The research reported in this paper and the writing of the paper were supported, in part, by grants from the Public Health Service (PHS RO1 NS 20314), the National Science Foundation (BNS 8812728), a James McKeen Cattell Fellowship, and the graduate division of the University of California, Santa Cruz. The authors would like to thank Lester Krueger and three anonymous reviewers, whose comments were...
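The integration-and-decision process described above can be sketched computationally. In the FLMP, each source assigns a continuous support value (a fuzzy truth value in [0, 1]) to every alternative; the sources are combined multiplicatively, and a judgment is made by the relative goodness of match across alternatives. The support values below are purely hypothetical, chosen only to illustrate how an auditory /da/ paired with a visual /ba/ can favor the cluster response /bda/; they are not estimates from the experiment.

```python
def flmp(auditory, visual):
    """Combine per-alternative auditory and visual support values
    (fuzzy truth values in [0, 1]) multiplicatively, then apply the
    relative goodness rule (normalize to response probabilities)."""
    support = {alt: auditory[alt] * visual[alt] for alt in auditory}
    total = sum(support.values())
    return {alt: s / total for alt, s in support.items()}

# Hypothetical support values for an auditory /da/ + visual /ba/ event:
# /da/ matches the auditory source well, /ba/ matches the visual source
# well, and /bda/ matches both reasonably well.
auditory = {"ba": 0.2, "da": 0.7, "bda": 0.6}
visual = {"ba": 0.7, "da": 0.1, "bda": 0.6}

probs = flmp(auditory, visual)
best = max(probs, key=probs.get)  # -> "bda"
```

Because integration is multiplicative, an alternative with a moderately good match on both sources (/bda/) can outscore an alternative with an excellent match on one source but a poor match on the other, which is the intuition behind the example in the text.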