Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time is a major characteristic of the task completion process. Psychologically, time on task has two different interpretations, suggesting opposing associations with the task outcome: spending more time may be positively related to the outcome if the task is completed more carefully, but the relation may be negative if working more fluently, and thus faster, reflects a higher skill level. Using a dual processing theory framework, the present study argues that the validity of each assumption depends on the relative degree of controlled versus routine cognitive processing required by a task, as well as on a person's acquired skill. A total of 1,020 persons aged 16 to 65 years participated in the German field test of the Programme for the International Assessment of Adult Competencies. Test takers completed computer-based reading and problem solving tasks. As revealed by linear mixed models, in problem solving, which required controlled processing, the time on task effect was positive and increased with task difficulty. In reading tasks, which required more routine processing, the time on task effect was negative, and the more negative, the easier a task was. In problem solving, the positive time on task effect decreased with increasing skill level; in reading, the negative time on task effect increased with increasing skill level. These heterogeneous effects suggest that time on task has no uniform interpretation but is a function of task difficulty and individual skill.

There are two fundamental observations on human performance: the result obtained on a task and the time taken (e.g., Ebel, 1953). In educational assessment, the focus is mainly on the task outcome; behavioral processes that led to the result are usually not considered.
One reason may be that traditional assessments are paper-based and, hence, are not suitable for collecting behavioral process data at the task level (cf. Scheuermann & Björnsson, 2009). However, computer-based assessment, besides other advantages such as increased construct validity (e.g., Sireci & Zenisky, 2006) or improved test design (e.g., van der Linden, 2005), can provide further insights into the task completion process. This is because in computer-based assessment, the assessment system can record log file data that allow the researcher to derive theoretically meaningful descriptors of the task completion process. The present study draws on log file data from an international computer-based large-scale assessment to address the question of how time on task is related to the task outcome. As shown in the following, by analyzing the relation of task performance to the time test takers spent on task, we were able to obtain new insights into how the interaction of task and person characteristics
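The opposing time on task effects described above can be illustrated with a minimal simulation. This is only a sketch: the logistic form and all coefficient values below are illustrative assumptions, not estimates from the study's linear mixed models.

```python
import math

def p_correct(log_time, difficulty, beta_time):
    """Logistic success probability with a time on task effect.

    beta_time > 0: more time helps (controlled processing, e.g., problem solving).
    beta_time < 0: more time signals lower fluency (routine processing, e.g., reading).
    All coefficients are hypothetical, chosen only to show the sign pattern.
    """
    logit = 1.5 - difficulty + beta_time * log_time
    return 1.0 / (1.0 + math.exp(-logit))

# Controlled processing: spending more time raises the success probability.
slow_ps = p_correct(log_time=1.0, difficulty=1.0, beta_time=0.8)
fast_ps = p_correct(log_time=-1.0, difficulty=1.0, beta_time=0.8)

# Routine processing: spending more time lowers the success probability.
slow_rd = p_correct(log_time=1.0, difficulty=0.0, beta_time=-0.8)
fast_rd = p_correct(log_time=-1.0, difficulty=0.0, beta_time=-0.8)

print(slow_ps > fast_ps)  # True: positive time on task effect
print(fast_rd > slow_rd)  # True: negative time on task effect
```

In the study itself, `beta_time` additionally varied with task difficulty and person skill; this sketch only fixes the two signs the abstract reports.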
Log data from educational assessments are attracting increasing attention, and large-scale assessment programs have started providing log data as scientific use files. Such data, generated as a by-product of computer-assisted data collection, have been known as paradata in survey research. In this paper, we integrate log data from educational assessments into a taxonomy of paradata. To provide a generic framework for the analysis of log data, finite state machines are suggested. Beyond its computational value, the specific benefit of using finite state machines is achieved by separating platform-specific log events from the definition of indicators by states. Specifically, states represent filtered log data given a theoretical process model and therefore encode the information of log files selectively. The approach is empirically illustrated using log data from the context questionnaires of the Programme for International Student Assessment (PISA). We extracted item-level response time components from questionnaire items that were administered as item batteries with multiple questions on one screen and related them to the item responses. Finally, the taxonomy and the finite state machine approach are discussed with respect to the definition of complete log data, the verification of log data, and the reproducibility of log data analyses.
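The core idea, separating platform-specific log events from indicators defined on states, can be sketched with a small finite state machine. The event names, states, and transitions below are invented for illustration; actual PISA log files use their own event vocabulary.

```python
# Minimal finite state machine over a log event stream. The transition
# table encodes a theoretical process model; any event not listed simply
# leaves the current state unchanged, i.e., the log is filtered selectively.
TRANSITIONS = {
    ("idle", "item_shown"): "reading",
    ("reading", "first_interaction"): "responding",
    ("responding", "answer_changed"): "responding",
    ("responding", "item_submitted"): "idle",
}

def time_in_states(events):
    """events: list of (timestamp_ms, event_name) sorted by time.

    Returns a dict mapping each visited state to the total milliseconds
    spent in it -- an indicator defined on states, not on raw events.
    """
    state, last_t = "idle", None
    totals = {}
    for t, name in events:
        if last_t is not None:
            totals[state] = totals.get(state, 0) + (t - last_t)
        state = TRANSITIONS.get((state, name), state)
        last_t = t
    return totals

log = [(0, "item_shown"), (4000, "first_interaction"),
       (9000, "answer_changed"), (12000, "item_submitted")]
print(time_in_states(log))  # {'reading': 4000, 'responding': 8000}
```

Because indicators such as "time spent reading before the first interaction" are computed from states rather than from raw events, the same definition can be reused across platforms whose log formats differ only in event naming.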
The main challenge of ability tests relates to the difficulty of items, whereas speed tests demand that test takers complete very easy items quickly. This article proposes a conceptual framework to represent how performance depends on both between-person differences in speed and ability and the speed-ability compromise within persons. Related measurement challenges, and psychometric models that have been proposed to deal with them, are discussed. It is argued that addressing individual differences in the speed-ability trade-off requires the control of item response times. In this way, response behavior can be captured exclusively with the response variable, remedying problems in traditional measurement approaches.

Keywords: ability, experimental control, item response modeling, response time modeling, speed, speed-ability trade-off

In their book on the measurement of intelligence, Thorndike, Bregman, Cobb, and Woodyard (1926) present a theorem which says that "other things being equal, if intellect A can do at each level the same number of tasks as intellect B, but in a less time, intellect A is better" (p. 33). This statement illustrates that in any performance measure, both the result of interacting with an item and how long it took to reach the result need to be considered, and that comparing individuals in one respect requires keeping the other aspect constant. Along these lines, Thorndike et al. (1926) proposed the concepts of level (i.e., ability) and speed, which are empirically defined by the produced products (item responses) and the time required to produce them (response times). In the measurement literature, various concepts such as ability, level, and power have been used to refer to a disposition explaining individual differences in response accuracy (Gulliksen, 1950; Thorndike et al., 1926; Thurstone, 1937); for consistency, only the term ability will be used in this paper.
A wide range of approaches have been suggested to conceptualize and model
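The within-person speed-ability compromise, and the between-person comparison that requires holding one aspect constant, can be sketched with a simple trade-off curve. The functional form and parameters are illustrative assumptions, not a model proposed in the excerpt.

```python
import math

def expected_accuracy(ability, time_spent, scale=1.0):
    """Hypothetical speed-accuracy trade-off.

    Accuracy rises with time spent on an item but saturates at a ceiling
    set by ability; both the exponential rise and the logistic ceiling
    are illustrative choices, not an established psychometric model.
    """
    ceiling = 1.0 / (1.0 + math.exp(-ability))
    return ceiling * (1.0 - math.exp(-time_spent / scale))

# Within person: the same ability yields higher accuracy with more time.
print(expected_accuracy(1.0, 2.0) > expected_accuracy(1.0, 0.5))  # True

# Between persons: holding time constant, higher ability yields higher
# accuracy -- comparing in one respect while keeping the other fixed.
print(expected_accuracy(2.0, 1.0) > expected_accuracy(0.0, 1.0))  # True
```

The sketch makes Thorndike et al.'s theorem concrete: if two test takers reach the same accuracy but one needed less time, the faster one sits on a higher trade-off curve.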