Continuous Speech Recognition (CSR) systems require acoustic models to represent the characteristics of the acoustic signal and Language Models (LMs) to represent the syntactic constraints of the language. The acoustic and LM probability distributions are usually obtained and evaluated independently, and the respective "best" models are then selected for integration into the CSR system. However, in this paper we show that using more accurate acoustic models (for example, semicontinuous models instead of discrete ones, or a larger number of models representing a more complete set of sublexical units) does not always yield better performance of the integrated system, because the acoustic improvements are softened when the LM probabilities are applied. The experimental evaluation was carried out on a Spanish speech application task.
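The softening effect described above can be illustrated with the standard log-linear score combination used in CSR decoders, where a language model weight (grammar scale factor) rescales the LM contribution. This is a minimal sketch; the hypothesis scores and the `lm_weight` values are hypothetical, not taken from the paper:

```python
def combined_score(acoustic_logprob, lm_logprob, lm_weight):
    """Log-linear combination used when decoding:
    score(W) = log P(X|W) + lm_weight * log P(W)."""
    return acoustic_logprob + lm_weight * lm_logprob

# Two competing hypotheses: B has the better acoustic score
# (e.g. from a more accurate acoustic model), A the better LM score.
acoustic = {"A": -120.0, "B": -115.0}  # log P(X|W), hypothetical values
lm       = {"A": -4.0,   "B": -5.0}    # log P(W), hypothetical values

for lm_weight in (1.0, 10.0):
    best = max(acoustic, key=lambda h: combined_score(acoustic[h], lm[h], lm_weight))
    print(lm_weight, best)
```

With a small LM weight the acoustically better hypothesis B wins; with a larger weight the LM term dominates and A wins, so the acoustic improvement no longer changes the recognition result.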