A special case of the overlapping coefficient, the common area under two probability density curves, that has received intermittent attention in the scientific and statistical literature concerns the overlap of two normal distributions with equal variances. Here we consider the problem of constructing tests of hypotheses and interval estimation for the true overlap in this special situation. Direct and conditional tests for the true value of the overlap are discussed. A method of constructing an exact confidence interval estimator for the true overlap is presented. Several alternative methods of obtaining confidence intervals for the true overlap are compared in a Monte Carlo investigation. In an example, we use the normal theory results discussed and an invariance property of the overlapping coefficient to estimate the overlap between two log-normal distributions from sample data.

Given two univariate probability (density) functions f₁(x) and f₂(x), the general form of the index of dissimilarity or separation coefficient can be defined as the following:

    C = (1/2) ∫_{−∞}^{∞} |f₁(x) − f₂(x)| dx.   (1)

This measure of separation represents the area under which f₁(x) or f₂(x) is not common to the other distribution. It can easily be shown that the value of this index is zero when the two distributions are identical; its value is unity when the two distributions are totally disjoint. This measure of the separation between f₁(x) and f₂(x) is also a Hellinger or Matusita distance measure.

We prefer to work with the complement of this separation index, which we call the overlapping coefficient OVL:

    OVL = ∫_{−∞}^{∞} min[f₁(x), f₂(x)] dx.

This measure of the agreement between two distributions meets the usual convention for measures of association noted by Goodman and Kruskal (Reference 5, p. 8): OVL = 1 signifies the perfect identity of the two distributions, and OVL = 0 indicates their complete separation. Since OVL = 1 − C, properties determined for one of these measures apply immediately to the other.

In the context of two normal distributions with means μ₁ and μ₂ and common variance σ²,

    OVL = 2Φ(−|μ₁ − μ₂| / (2σ)) = 2Φ(−|δ|/2),   (2)

where Φ(·) represents the standard normal distribution function and δ = (μ₁ − μ₂)/σ. In this distributional setting, OVL can be viewed as a transformation applied to the Mahalanobis distance δ² or the Kullback discrimination information δ²/2. The parameter δ itself is known as the standardized mean difference and has become widely accepted as a measure of effect size in meta-analysis (for example, see Reference 7, p. 76).

A sample estimator for OVL based on independent random samples from the two normal distributions defined above can be obtained by replacing the parameters μ₁, μ₂ and σ in (2) with appropriate sample estimators. Let n₁ and n₂ indicate the sizes of the random samples from the two normal distributions; let X̄₁ and X̄₂ denote the usual sample means and S₁² and S₂² the unbiased estimators for the common variance σ² computed from the two samples.
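As a numerical check on expression (2), the sketch below (in Python; the function names are ours, not the paper's) computes OVL for two equal-variance normal distributions both from the closed form 2Φ(−|δ|/2) and by direct trapezoidal integration of min[f₁(x), f₂(x)]:

```python
import math

def phi(z):
    """Standard normal distribution function Φ(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def ovl_normal(mu1, mu2, sigma):
    """Closed form (2) for equal variances: OVL = 2Φ(−|δ|/2), δ = (μ1 − μ2)/σ."""
    delta = (mu1 - mu2) / sigma
    return 2.0 * phi(-abs(delta) / 2.0)

def ovl_numeric(mu1, mu2, sigma, lo=-20.0, hi=20.0, n=80001):
    """Trapezoidal integration of min[f1(x), f2(x)] as an independent check.
    The default limits assume both densities are negligible outside [lo, hi]."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0  # trapezoid endpoint weights
        total += w * min(norm_pdf(x, mu1, sigma), norm_pdf(x, mu2, sigma))
    return total * h
```

For μ₁ = 0, μ₂ = 1, σ = 1 (so δ = 1), both routines give OVL ≈ 0.617; identical distributions give OVL = 1, as required by the Goodman and Kruskal convention cited above.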
If we insert X̄₁, X̄₂ and a pooled estimate of σ for μ₁, μ₂ and σ in expression (2) for OVL, we obtain the maximum-likeli...
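The plug-in estimator described above can be sketched as follows; an assumption on our part is that σ is estimated by the usual pooled standard deviation built from S₁² and S₂² (the strict maximum-likelihood version would divide the pooled sum of squares by n₁ + n₂ rather than n₁ + n₂ − 2):

```python
import math

def phi(z):
    """Standard normal distribution function Φ(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ovl_hat(sample1, sample2):
    """Plug-in estimate of OVL: sample means and a pooled SD replace μ1, μ2, σ in (2)."""
    n1, n2 = len(sample1), len(sample2)
    xbar1 = sum(sample1) / n1
    xbar2 = sum(sample2) / n2
    ss1 = sum((x - xbar1) ** 2 for x in sample1)       # (n1 − 1) * S1²
    ss2 = sum((x - xbar2) ** 2 for x in sample2)       # (n2 − 1) * S2²
    s_pooled = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))  # pooled estimate of σ
    delta_hat = (xbar1 - xbar2) / s_pooled             # estimated effect size δ
    return 2.0 * phi(-abs(delta_hat) / 2.0)
```

For example, two samples whose means differ by one pooled standard deviation yield an estimate near 0.617, the population value of OVL at δ = 1.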