This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three alternatives), and tetrads (four alternatives). As expected, tetrad and triplet measures provided substantially more information than pairs, and MFC items composed of statements with high discrimination parameters were most informative. The methods and findings of this study will help practitioners construct better MFC items, make informed projections about reliability with different MFC formats, and facilitate the development of MFC triplet- and tetrad-based computerized adaptive tests.
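The finding that highly discriminating statements are most informative follows directly from the standard form of an item information function. As a minimal illustration (for a dichotomous 2PL item, not the GGUM-RANK functions derived in the paper, which are more involved), the Fisher information is I(θ) = a²·p·(1−p), so information scales with the square of the discrimination parameter a:

```python
import math

def p2pl(theta, a, b):
    """2PL probability of endorsing an item with discrimination a and location b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a dichotomous 2PL item: I(theta) = a^2 * p * (1 - p)."""
    p = p2pl(theta, a, b)
    return a * a * p * (1 - p)

# Information peaks where theta equals the item location (p = 0.5),
# and doubling the discrimination quadruples the peak information.
print(item_information(0.0, 1.0, 0.0))  # 0.25
print(item_information(0.0, 2.0, 0.0))  # 1.0
```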
Historically, multidimensional forced choice (MFC) measures have been criticized because conventional scoring methods can lead to ipsativity problems that render scores unsuitable for interindividual comparisons. However, with the recent advent of item response theory (IRT) scoring methods that yield normative information, MFC measures are surging in popularity and becoming important components in high-stakes evaluation settings. This article adds to the burgeoning methodological advances in MFC measurement by focusing on statement and person parameter recovery for the GGUM-RANK (generalized graded unfolding-RANK) IRT model. A Markov chain Monte Carlo (MCMC) algorithm was developed to estimate GGUM-RANK statement and person parameters directly from MFC rank responses. Simulation studies examined how the psychometric properties of the statements composing MFC items, test length, and sample size influenced statement and person parameter estimation, and explored the benefits of measurement using MFC triplets relative to pairs. To demonstrate the methodology, an empirical validity study was then conducted using an MFC triplet personality measure. The results and implications of these studies for future research and practice are discussed.
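The general shape of MCMC person-parameter estimation can be sketched with a random-walk Metropolis sampler. This is an illustrative stand-in only: it uses a simple Rasch likelihood with a standard normal prior in place of the GGUM-RANK likelihood, and estimates a single person parameter from dichotomous responses:

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_posterior(theta, responses, difficulties):
    """Log-posterior of theta: Rasch likelihood plus N(0, 1) prior
    (a simple stand-in for the GGUM-RANK rank-response likelihood)."""
    ll = sum(
        math.log(logistic(theta - b)) if x else math.log(1.0 - logistic(theta - b))
        for x, b in zip(responses, difficulties)
    )
    return ll - 0.5 * theta * theta

def metropolis(responses, difficulties, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for one person parameter."""
    random.seed(seed)
    theta = 0.0
    lp = log_posterior(theta, responses, difficulties)
    draws = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)      # symmetric proposal
        lp_prop = log_posterior(prop, responses, difficulties)
        if math.log(random.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws

# Posterior mean after burn-in serves as the (EAP-style) person estimate.
draws = metropolis([1, 1, 0], [-1.0, 0.0, 1.0])
theta_hat = sum(draws[1000:]) / len(draws[1000:])
```

The real GGUM-RANK algorithm samples statement parameters jointly with person parameters, but the accept/reject core is the same.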
Adaptive learning systems aim to provide learning items tailored to the behavior and needs of individual learners. However, one of the outstanding challenges in adaptive item selection is that the corresponding systems often have no information on the initial ability levels of new learners entering a learning environment, making the proficiency of those learners very difficult to predict. This heavily impairs the quality of personalized item recommendations during the initial phase of learning. To handle this issue, known as the cold-start problem, we propose a system that combines item response theory (IRT) with machine learning. Specifically, we perform ability estimation and item response prediction for new learners by integrating IRT with classification and regression trees built on learners' side information. The goal of this work is to build a learning system that incorporates IRT and machine learning into a unified framework. We compare the proposed hybrid model to alternative approaches by conducting experiments on two educational data sets. The results affirm the potential of the proposed method: in particular, IRT combined with Random Forests yields the lowest error in ability estimation and the highest accuracy in response prediction. We therefore conclude that combining machine learning with IRT can indeed alleviate the cold-start problem in an adaptive learning environment.
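The hybrid idea can be sketched in a few lines: once a learner has responses, ability is estimated from the IRT likelihood; for a brand-new learner with no responses, an initial ability is predicted from side information instead. The sketch below uses a Rasch likelihood with grid-search MLE and a k-nearest-neighbor average over side features as a simple stand-in for the Random Forest regressor used in the paper:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_ability(responses, difficulties, grid=None):
    """Grid-search MLE of ability theta given observed dichotomous responses."""
    grid = grid or [i / 10 for i in range(-40, 41)]
    def loglik(theta):
        return sum(
            math.log(rasch_p(theta, b)) if x else math.log(1.0 - rasch_p(theta, b))
            for x, b in zip(responses, difficulties)
        )
    return max(grid, key=loglik)

def cold_start_ability(new_features, known_features, known_thetas, k=3):
    """Predict a new learner's initial theta from side information alone
    (k-NN average here; the paper uses Random Forests for this step)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(new_features, f)), th)
        for f, th in zip(known_features, known_thetas)
    )
    return sum(th for _, th in dists[:k]) / k

# New learner: no responses yet, so fall back to side-information prediction.
theta0 = cold_start_ability(
    [0.5, 1.0],
    [[0.4, 0.9], [2.0, 2.0], [0.6, 1.1]],
    [0.3, -1.0, 0.5],
    k=2,
)
```

Once the learner answers a few items, `mle_ability` takes over and the cold-start estimate is discarded or used as a prior.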