The development of nonparametrics originated from a concern about the approximate validity of parametric procedures based on a specific narrow model when the model is questionable. Procedures that are reasonably insensitive to the exact assumptions one makes are called robust. Such assumptions may be about a variety of things: about an underlying common density, assuming that the data are iid; about the dependence structure of the data itself; in regression problems, about the form of the regression function; and so on. For example, if we assume that our data are iid from a certain N(θ, 1) density, then we have a specific parametric model for our data. Statistical models are always, at best, an approximation; we do not believe that the normal model is the correct model. There is a trade-off: a specific parametric model buys efficiency when it is approximately correct, at the risk of misleading inference when it is not.

As a simple example, consider the t-test for the mean µ of a normal distribution. If normality holds, then under the null hypothesis H0: µ = µ0, the t-statistic √n(X̄ − µ0)/s has exactly the t-distribution with n − 1 degrees of freedom, for all n, µ0, and σ. However, if the population is not normal, neither the size nor the power of the t-test remains the same as under the normal case. If these change substantially, we have a robustness problem. However, as we will later see, by making a minimal number of assumptions (specifically, no parametric assumptions), we can develop procedures with some sort of a safety net. Such methods would qualify for being called nonparametric methods.
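The robustness problem for the size of the t-test can be illustrated by simulation. The sketch below, a hypothetical illustration rather than anything from the text, estimates the actual rejection probability of the nominal 5% two-sided t-test at n = 10 under two populations with the null mean: a standard normal (where the t-distribution is exact) and a skewed exponential (where it is not). The function name `rejection_rate` and the hard-coded critical value are assumptions of this sketch.

```python
import math
import random
import statistics

# Two-sided 5% critical value of the t-distribution with n - 1 = 9
# degrees of freedom (hard-coded here to keep the sketch stdlib-only).
T_CRIT_9DF = 2.262

def rejection_rate(draw, mu0, n=10, reps=20000, seed=1):
    """Monte Carlo estimate of the probability that the two-sided
    t-test rejects H0: mu = mu0 when samples come from `draw`."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x = [draw(rng) for _ in range(n)]
        xbar = statistics.fmean(x)
        s = statistics.stdev(x)
        t = math.sqrt(n) * (xbar - mu0) / s  # the t-statistic
        if abs(t) > T_CRIT_9DF:
            rejections += 1
    return rejections / reps

# Under the assumed normal model the size stays close to the nominal 0.05.
size_normal = rejection_rate(lambda r: r.gauss(0.0, 1.0), mu0=0.0)

# Under an exponential population with the same null mean (1.0), the
# nominal level is no longer honored: the skewness inflates the size.
size_expo = rejection_rate(lambda r: r.expovariate(1.0), mu0=1.0)
```

Changing the sampling distribution while holding the null mean fixed is exactly the perturbation the text describes: the test statistic is unchanged, but its null distribution, and hence the size, is not.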