“…One is to better understand how speech is perceived and produced by humans (e.g., Rubin, Baer, and Mermelstein, 1981), and the other is to develop articulatory-based techniques for automatic speech recognition (e.g., Blackburn and Young, 2000a) and speech synthesis (e.g., Greenwood, Goodyear, and Martin, 1992). In the service of these purposes, computational models have been developed to simulate the forward mapping from articulation to acoustics (e.g., Baer et al., 1991; Beautemps, Badin, and Laboissiere, 1995). These forward models have been based upon articulatory and acoustic dimensions that are known to convey phonetic information, and upon physical principles of the vocal tract.…”