The fundamental property of feedforward neural networks, parsimonious approximation, makes them excellent candidates for modeling static nonlinear processes from measured data. Similarly, feedback (or recurrent) neural networks have very attractive properties for the dynamic nonlinear modeling of artificial or natural processes; however, the design of such networks is more complex than that of feedforward neural networks, because the designer has additional degrees of freedom. In the present paper, we show that this complexity may be greatly reduced by (i) incorporating into the very structure of the network all the available mathematical knowledge about the process to be modeled, and (ii) transforming the resulting network into a "universal" form, termed the canonical form, which further reduces the complexity of analyzing and training dynamic neural models.
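The canonical form referred to above can be illustrated as a state-space model in which a single feedforward network computes the next state and the output from the current state and input. The following is a minimal sketch of that idea; the network sizes, weights, and input sequence are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of a recurrent network in canonical (state-space) form:
# a single feedforward network maps (x(k), u(k)) to (x(k+1), y(k)),
# and the recurrence arises solely from feeding the state back.
# All dimensions and weights below are illustrative assumptions.

rng = np.random.default_rng(0)
n_state, n_in, n_hidden = 2, 1, 4

W1 = rng.normal(scale=0.5, size=(n_hidden, n_state + n_in))  # hidden-layer weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_state + 1, n_hidden))     # next-state and output weights
b2 = np.zeros(n_state + 1)

def step(x, u):
    """One time step: feedforward pass from (x(k), u(k)) to (x(k+1), y(k))."""
    h = np.tanh(W1 @ np.concatenate([x, u]) + b1)
    z = W2 @ h + b2
    return z[:n_state], z[n_state]  # next state vector, scalar output

# Iterate the canonical form over an example input sequence.
x = np.zeros(n_state)  # initial state
ys = []
for u in np.sin(0.1 * np.arange(50)).reshape(-1, 1):
    x, y = step(x, u)
    ys.append(y)
```

Training such a model then amounts to adjusting the weights of one feedforward block, which is what makes the canonical form convenient for analysis.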