Regression component decompositions (RCD) are defined as a special class of component decompositions in which the pattern contains the regression weights for predicting the observed variables from the latent variables. Compared to factor analysis, RCD has a broader range of applicability, greater ease and simplicity of computation, and a more logical and straightforward theory. The usual distinction between factor analysis as a falsifiable model and component analysis as a tautology is shown to be misleading, since a special case of regression component decomposition can be defined which is not only falsifiable but empirically indistinguishable from the factor model.

SCHÖNEMANN AND STEIGER [Br. J. math. statist. Psychol.]

The problem of factor indeterminacy was brought back into focus in a number of papers (Schonemann, 1971; Schonemann & Wang, 1972; Steiger & Schonemann, 1975) which finally succeeded in reopening these outstanding questions. A very comprehensive and authoritative paper by Guttman in 1955 had been virtually ignored, together with all previous work in this area. In the empirical part of their 1972 paper, Schonemann & Wang reanalyzed the data of 13 published factor analyses, employing the recently perfected maximum likelihood algorithms for factor extraction, which make it feasible to test the model statistically. They found (1) that the factor model usually did not fit statistically for the small number of common factors which appeared in the published accounts, and (2) that some of the common factors were usually poorly defined. Frequently, the correlation of a factor with a minimally correlated equivalent factor was zero. As Guttman (1955) had pointed out, this 'raises the question what is being estimated in the first place; instead of only one primary trait there are many widely different variables associated with a given profile of factor loadings'.
Finally, they found (3) that both problems, factor indeterminacy and lack of identifiability, grew worse as the number of factors was raised in an effort to improve the statistical fit.

Understandably, this study has provoked some controversy about the merits of the factor model. One might expect some resistance against discarding factor analysis because, in spite of the theoretical difficulties of the underlying model, the method has proven flexible and useful as a data reduction technique. In this paper, we propose and discuss an alternative method for data reduction which has many of the practical virtues of factor analysis, is equally flexible, but is computationally more efficient and free from the theoretical problems of factor analysis.

Theorem 4. $\Sigma = \Lambda^* \Lambda^{*\prime} + U^2$, with $\Lambda^*$ of full column rank $m$ and $U^2$ positive definite, diagonal, iff there exists a diagonal, positive definite matrix $U$ such that $\Sigma^* = U^{-1} \Sigma U^{-1} = \Lambda \Lambda' + E$, where $\Lambda \Lambda'$ has latent roots $b_1^2 \geq b_2^2 \geq \cdots \geq b_m^2 > 1$, and $E = E' = E^2$, $E\Lambda = 0$.

Since $\Lambda^*$ has full column rank $m$, $\Sigma = \Lambda^* \Lambda^{*\prime} + U^2$ implies that $\Sigma^* = U^{-1} \Sigma U^{-1} = (U^{-1} \Lambda^*)(\Lambda^{*\prime} U^{-1}) + I$ has $p - m$ roots equal to unity, and $m$ roots $b_1^2, \ldots, b_m^2 > 1$. If the eigendecomposition of $\Sigma^*$ is $\Sigma^* = L_1 D_{b^2} L_1' + L\ldots$
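The construction in Theorem 4 can be checked numerically. The following NumPy sketch (the variable names and the particular dimensions are our own illustrative choices, not from the paper) builds a covariance matrix satisfying the factor model, rescales it by $U^{-1}$, and verifies the claimed root structure of $\Sigma^*$ and the stated properties of $E$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 6, 2  # hypothetical dimensions: 6 observed variables, 2 factors

# Construct Sigma = A* A*' + U^2 with A* (p x m) of full column rank
# and U^2 positive definite diagonal.
A_star = rng.standard_normal((p, m))
U2 = np.diag(rng.uniform(0.5, 1.5, p))
Sigma = A_star @ A_star.T + U2

# Rescale: Sigma* = U^{-1} Sigma U^{-1} = (U^{-1} A*)(A*' U^{-1}) + I.
U_inv = np.diag(1.0 / np.sqrt(np.diag(U2)))
Sigma_star = U_inv @ Sigma @ U_inv

# Sigma* should have p - m latent roots equal to unity and m roots > 1.
vals, vecs = np.linalg.eigh(Sigma_star)
vals, vecs = vals[::-1], vecs[:, ::-1]   # sort into descending order
assert np.allclose(vals[m:], 1.0)        # p - m unit roots
assert np.all(vals[:m] > 1.0)            # m roots b_1^2, ..., b_m^2 > 1

# Split the eigendecomposition: Lambda Lambda' = L1 D_{b^2} L1', and
# E = L2 L2' is the projector onto the remaining (unit-root) eigenspace.
L1, L2 = vecs[:, :m], vecs[:, m:]
LamLam = L1 @ np.diag(vals[:m]) @ L1.T
E = L2 @ L2.T
Lam = L1 @ np.diag(np.sqrt(vals[:m]))

# Verify Sigma* = Lambda Lambda' + E, E = E' = E^2, and E Lambda = 0.
assert np.allclose(Sigma_star, LamLam + E)
assert np.allclose(E, E.T) and np.allclose(E, E @ E)
assert np.allclose(E @ Lam, 0.0)
```

Because the columns of $L_2$ are orthonormal and orthogonal to $L_1$, the idempotency of $E$ and the condition $E\Lambda = 0$ fall out of the eigendecomposition directly, which is what the necessity half of the proof exploits.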