In computer-based simulations, sensor models are often used to generate realistic data to stimulate an algorithm under test. Such algorithms include, but are not limited to, guidance, navigation, control, tracking, and mission management algorithms. This paper builds on the authors' previous research to apply a systematic approach for comparing two separately developed models of the same sensor. An example application comparing two radar models demonstrates these methods.

When a simulation or project spans several individuals or contractors, it is common for multiple models of the same sensor to be created. Multiple versions of a model may result from independent development, a different simulation emphasis, or the evolution of a model over the course of a development cycle. These models may behave very differently from one another even if they are "drop-in" replacements that accept the same input format and generate the same output format. It is important for simulation developers as well as end users to understand the differences between two models, so that they may decide whether to adopt or reject a new version of a model, or adequately compare results generated using the two models. This motivates the need for a structured approach to comparing similar models using parametric and statistical techniques.

Previous work by the authors focused on applying design of experiments (DoE) techniques to generate model inputs, in an efficient and effective manner, that span the parameter space over which the model is designed to be valid and over which it will be used in the broader simulation environment. The results described here extend that approach to include parametric and statistical model comparison techniques. Monte Carlo simulations are used to generate statistical metrics for comparing the models.
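To make the DoE input-generation step concrete, the following is a minimal sketch of one common space-filling design, Latin hypercube sampling, which spreads a fixed number of simulation runs across the valid parameter space. The function name, bounds, and stratified-sampling choice are illustrative assumptions, not the specific design used in the authors' earlier work.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample (illustrative, not the paper's exact design).

    Each variable's range is split into n_samples equal strata, and exactly
    one sample falls in each stratum per variable, so the runs span the
    parameter space more evenly than independent uniform draws.
    """
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        # One uniform draw inside each stratum, then shuffle the strata so
        # the pairing across variables is randomized.
        col = [lo + (hi - lo) * (k + rng.random()) / n_samples
               for k in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # one tuple of model inputs per run

# Example: 10 runs over a hypothetical radar input space
# (target range in meters, radar cross section in square meters).
runs = latin_hypercube(10, [(100.0, 10000.0), (0.1, 10.0)])
```

Each returned tuple is one set of inputs fed identically to both sensor models, so any divergence in outputs is attributable to the models rather than the stimulus.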
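The Monte Carlo comparison step can likewise be sketched in miniature: run both models many times on the same stimulus, then compute statistical metrics on their output distributions. The two toy radar range models, their noise parameters, and the choice of mean difference plus a two-sample Kolmogorov-Smirnov statistic are all assumptions for illustration; the paper's actual models and metrics may differ.

```python
import random

def model_a(true_range, rng):
    # Hypothetical radar model A: unbiased range measurement, 5 m noise.
    return true_range + rng.gauss(0.0, 5.0)

def model_b(true_range, rng):
    # Hypothetical "drop-in" replacement B: 1 m bias, slightly larger noise.
    return true_range + 1.0 + rng.gauss(0.0, 6.0)

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples (0 = identical, 1 = disjoint)."""
    xs, ys = sorted(xs), sorted(ys)
    i = j = d = 0.0
    i, j = 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(xs) - j / len(ys)))
    return d

def compare(n_runs=2000, true_range=1000.0, seed=1):
    """Monte Carlo comparison: stimulate both models with the same truth
    and summarize how their output distributions differ."""
    rng = random.Random(seed)
    a = [model_a(true_range, rng) for _ in range(n_runs)]
    b = [model_b(true_range, rng) for _ in range(n_runs)]
    mean_diff = sum(b) / len(b) - sum(a) / len(a)
    return mean_diff, ks_statistic(a, b)
```

With enough runs, the mean difference exposes model B's bias, while the KS statistic flags any distributional mismatch (bias, spread, or shape) even when the means happen to agree.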