The purpose of this study is to investigate the relations between aptitude variables and school achievement using a model of ability which allows simultaneous identification of general and specific abilities. A battery of 16 aptitude tests was administered in the 6th grade, and course grades were collected in 17 different subject matter areas in the 9th grade (N = 866). For the aptitude tests, a confirmatory factor model is fitted with a general factor (G) along with nine orthogonal, residual factors. Some of the residual factors are quite broad (Crystallized intelligence, Gc(1), and General visualization, Gv(1)), but most are narrow factors identified by pairs of tests (e.g., V(1), Ms(1), Num Ach(1), Vz(1), S, and Cs(1)). A model is fitted to the 17 course grades as well. The model includes a general school achievement factor (GENACH) and domain-specific achievement factors in areas such as science-mathematics (SCIENCE), social science (SOCSCI(1)), language (LANG(1)), and spatial-practical performance (SPATPR(1)). Relating the latent criterion variables to the latent aptitude variables, it is found that some 40% of the variance in GENACH may be accounted for by G and Gc(1). However, larger proportions of variance are accounted for in the domain-specific achievement factors, and different aptitude factors are important in different domains. It is concluded that differentiation among at least a limited number of broad abilities may be worthwhile.
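The abstract reports neither loadings nor code, but the variance partitioning such a model relies on is easy to sketch. With a general factor and orthogonal residual (group) factors, each test's variance splits additively into a G share, a group-factor share, and uniqueness. The loadings below are invented for illustration (assuming numpy); they are not values from the study:

```python
import numpy as np

# Hypothetical loadings for four tests on a general factor (G) and two
# orthogonal group factors -- illustrative values, not the study's.
G  = np.array([0.7, 0.6, 0.5, 0.6])   # general-factor loadings
F1 = np.array([0.4, 0.5, 0.0, 0.0])   # group factor 1 (tests 1-2)
F2 = np.array([0.0, 0.0, 0.5, 0.4])   # group factor 2 (tests 3-4)

# With orthogonal factors, each test's common variance is the sum of
# its squared loadings; the remainder is unique variance.
common = G**2 + F1**2 + F2**2
unique = 1.0 - common

# Model-implied correlation matrix: Lambda Lambda' + diag(uniqueness)
Lam = np.column_stack([G, F1, F2])
R = Lam @ Lam.T + np.diag(unique)

# Share of each test's variance due to G alone
g_share = G**2
print(np.round(g_share, 2))   # e.g. 49% of test 1's variance from G
```

The same additivity is what lets the study report separate variance shares for G and for the residual factors when predicting the achievement factors.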
In this paper, the state of research on the assessment of competencies in higher education is reviewed. Fundamental conceptual and methodological issues are clarified by showing that current controversies rest on misleading dichotomies. By systematically sketching the conceptual controversies, competing definitions of competence (analytic/trait vs. holistic/real-world performance) are unpacked, and both commonplaces and disagreements are identified. Similarly, competing statistical approaches to assessing competencies, namely item response theory (latent trait) versus generalizability theory (sampling error variance), are unpacked. The resulting framework moves beyond dichotomies and shows how the different approaches complement each other. Competence is viewed along a continuum from traits that underlie perception, interpretation, and decision-making skills, which in turn give rise to observed behavior in real-world situations. Statistical approaches are likewise viewed along a continuum from linear to nonlinear models that serve different purposes: item response theory (IRT) models may be used for scaling item responses and modeling structural relations, while generalizability theory (GT) models pinpoint sources of measurement error variance, thereby enabling the design of reliable measurements. The proposed framework suggests multiple new research studies and may serve as a “grand” structural model.
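The GT side of the contrast can be made concrete with a minimal G-study sketch: in a crossed persons × items design, variance components for persons, items, and residual are estimated from the two-way ANOVA mean squares, and a generalizability coefficient follows. All numbers below are simulated for illustration (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persons x items score matrix with known variance components
n_p, n_i = 200, 10
sp, si, se = 1.0, 0.3, 0.5                 # true person, item, residual SDs
X = (rng.normal(0, sp, (n_p, 1))           # person effects
     + rng.normal(0, si, (1, n_i))         # item effects
     + rng.normal(0, se, (n_p, n_i)))      # residual

# Two-way ANOVA mean squares (crossed design, one observation per cell)
gm = X.mean()
pm = X.mean(axis=1, keepdims=True)
im = X.mean(axis=0, keepdims=True)
ms_p = n_i * ((pm - gm) ** 2).sum() / (n_p - 1)
ms_i = n_p * ((im - gm) ** 2).sum() / (n_i - 1)
ms_e = ((X - pm - im + gm) ** 2).sum() / ((n_p - 1) * (n_i - 1))

# Expected-mean-square equations give the variance-component estimates
var_e = ms_e
var_p = (ms_p - ms_e) / n_i
var_i = (ms_i - ms_e) / n_p

# Generalizability (relative) coefficient for a 10-item measurement
g_coef = var_p / (var_p + var_e / n_i)
print(round(g_coef, 2))
```

Pinpointing which component dominates the error variance is what lets a GT analysis guide the design of a more reliable measurement, e.g. by adding items or raters.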
Sentence repetition tasks are widely used in the diagnosis and assessment of children with language difficulties. This paper seeks to clarify the nature of sentence repetition tasks and their relationship to other language skills. We present the results from a 2-year longitudinal study of 216 children. Children were assessed on measures of sentence repetition, vocabulary knowledge and grammatical skills three times at approximately yearly intervals starting at age 4. Sentence repetition was not a unique longitudinal predictor of the growth of language skills. A unidimensional language latent factor (defined by sentence repetition, vocabulary knowledge and grammatical skills) provided an excellent fit to the data, and language abilities showed a high degree of longitudinal stability. Sentence repetition is best seen as a reflection of an underlying language ability factor rather than as a measure of a separate construct with a specific role in language processing. Sentence repetition appears to be a valuable tool for language assessment because it draws upon a wide range of language processing skills.
One of the fundamental ideas in the construction of psychological measurement instruments is that each instrument should be homogeneous and measure one attribute only. The idea of unidimensionality is a central assumption of most models within both classical test theory and modern test theory (e.g., Gulliksen, 1950; Lord, 1980; McDonald, 1999). There are good statistical reasons for favoring one-dimensional models to solve measurement problems. Reasons of interpretation also speak in favor of a focus on unidimensionality, because if multiple attributes are measured, researchers will not know which attribute to invoke to account for a particular score.

However, many observations in the literature have suggested that the unidimensionality requirement may have negative effects on the interpretability and usefulness of the resulting measure. One problem is that this requirement causes measures to focus on narrow aspects of phenomena. For example, Humphreys (1962) observed that the principle of unidimensionality caused the construct of intelligence to splinter into a large set of measures of narrowly defined cognitive abilities, causing the broad construct of intelligence to fall out of focus for a long time.

Another indication that unidimensionality need not be a necessary characteristic of psychological instruments is that many instruments that have proven to be highly useful for theoretical, diagnostic, and predictive purposes do not fulfill the unidimensionality requirement. For example, intelligence test batteries, such as the Wechsler series, are certainly not unidimensional but are considered to be extremely useful for purposes of diagnosis and prediction. In virtually any field of psychological measurement, there are numerous other examples of instruments that consist of different subtests aggregated into a composite score.

The emphasis on unidimensionality is based on the idea that a variable should be unitary and express one characteristic only.
However, there are situations in which variables are not seen as unitary. In a multiple regression analysis, for example, the independent variables are typically regarded as unitary, but the dependent variable is not. Instead, the main aim of a multiple regression analysis is to decompose the dependent variable into different components of variance.
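That decomposition can be sketched directly: after an OLS fit, the dependent variable splits into one component per independent variable plus a residual, and with (near-)uncorrelated predictors the variance splits almost additively. A toy illustration with simulated data (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two independent variables and one dependent variable
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.6 * x1 + 0.3 * x2 + rng.normal(scale=0.5, size=n)

# Ordinary least squares fit (intercept plus two slopes)
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Decompose y into per-predictor components plus a residual
comp1 = b[1] * x1
comp2 = b[2] * x2
resid = y - (b[0] + comp1 + comp2)

# With near-uncorrelated predictors, the component variances and the
# residual variance sum to (approximately) the variance of y
print(np.var(comp1) + np.var(comp2) + np.var(resid), np.var(y))
```

The dependent variable is thus treated not as unitary but as a sum of distinct components, which is exactly the asymmetry the passage points to.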
Problems and procedures in assessing and obtaining fit of data to the Rasch model are treated in the paper. The assumptions embodied in the model are made explicit, and it is concluded that statistical tests are needed which are sensitive to deviations such that more than one item parameter would be needed for each item, and such that more than one person parameter would be needed for each person. Statistical goodness-of-fit tests, based on the conditional maximum-likelihood estimates of the item parameters, which can detect these two kinds of deviation are presented. Common sources of deviation are also identified, as are the tests needed to detect them. Problems in the use of statistical tests to assess fit are discussed, and some investigations of power are presented. In relation to a distinction between use of the Rasch model as a criterion and as an instrument, the treatment of the goodness-of-fit problem in different measurement contexts is discussed. Finally, it is concluded that items which can be identified as misfitting should not be routinely excluded to obtain fit to the model; instead, other actions should often be taken, such as grouping of the items into homogeneous subsets.
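The paper's conditional-likelihood fit machinery is beyond a short sketch, but the model being tested is simple: under the Rasch model, the probability of a correct response depends only on the difference between one person parameter and one item parameter. A toy simulation (assuming numpy; a far cruder sanity check than the goodness-of-fit tests the paper develops):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rasch model: P(correct) depends only on ability minus difficulty
def rasch_p(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Simulate responses for 1000 persons on 5 items of known difficulty
theta = rng.normal(size=(1000, 1))
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
resp = (rng.random((1000, 5)) < rasch_p(theta, b)).astype(int)

# Crude check: observed item proportions should track the
# model-implied expectations when the model holds
obs = resp.mean(axis=0)
exp = rasch_p(theta, b).mean(axis=0)
print(np.round(obs - exp, 3))
```

The tests the paper describes go further: they work from conditional maximum-likelihood estimates and are designed to detect the two specific deviations named above (extra item parameters, extra person parameters), not merely marginal misfit.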
A path model of organizational creativity was presented; it conceptualized the influences of information sharing, learning culture, motivation, and networking on creative climate. A structural equation model was fitted to data from the pharmaceutical industry to test the proposed model. The model accounted for 86% of the variance in the dependent variable, creative climate. Information sharing had a positive effect on learning culture, which in turn had a positive effect on creative climate, while information sharing had negative direct effects on creative climate and on intrinsic motivation. This study suggests that information sharing and intrinsic motivation are important drivers of organizational creativity in a complex R&D environment in the pharmaceutical industry. Implications of the model are discussed.
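The path-model arithmetic behind such results can be illustrated: in a linear path model, an indirect effect is the product of the coefficients along the route, and the total effect sums the direct and indirect routes. The coefficients below are invented to mirror only the sign pattern reported (positive chain through learning culture, negative direct path); they are not the study's estimates (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardized paths: information sharing (IS) ->
# learning culture (LC) -> creative climate (CC), plus a negative
# direct path IS -> CC. Values are illustrative only.
a, b, c = 0.7, 0.6, -0.2

n = 20_000
IS = rng.normal(size=n)
LC = a * IS + rng.normal(scale=0.5, size=n)
CC = c * IS + b * LC + rng.normal(scale=0.5, size=n)

# Indirect effect = product of coefficients along the route;
# total effect = direct path plus indirect route
indirect = a * b
total = c + indirect

# The simple regression of CC on IS recovers the total effect
slope = np.cov(IS, CC)[0, 1] / np.var(IS, ddof=1)
print(round(indirect, 2), round(total, 2), round(slope, 2))
```

This is why a negative direct effect and a positive indirect effect can coexist, as in the reported pattern for information sharing.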