We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles interpreted from empirical research. It specifies cycles of training, assessment and learner-support activities that are complemented by intermediate and final moments of evaluation based on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Because learning and instruction are increasingly competence-based, the call for assessment methods that adequately determine competences is growing. Using one single assessment method is not sufficient to determine competence acquisition. This article argues for Competence Assessment Programmes (CAPs), consisting of a combination of different assessment methods, including both traditional and new forms of assessment. To develop and evaluate CAPs, criteria to determine their quality are needed. Just as CAPs are combinations of old and new forms of assessment, the criteria used to evaluate CAP quality should be derived from both psychometrics and edumetrics. A framework of ten quality criteria for CAPs is presented, which is then compared to Messick's framework of construct validity. Results show that the ten-criterion framework partly overlaps with Messick's, but adds some important new criteria, which receive a more prominent place in quality-control issues in competence-based education.

Keywords: evaluation criteria; quality control; assessment programmes; competence-based

Modern societies have changed dramatically due to technological developments such as information technology systems. Service industries have become knowledge oriented, production economies have become knowledge economies and production workers have become knowledge workers. Learners need to be flexible and adaptive if they are to function well in today's complex and global societies. To support the needs of these new learners, education is shifting its focus from transmitting isolated knowledge and skills to acquiring complex competences, guiding learners in developing skills for learning and in gathering information from the diverse range of sources available in modern society. In short, education is increasingly becoming learner-centred and competence-based. As part of the larger drive to change the curriculum, assessment needs to be reformed as well.
Biggs' (1996) idea of constructive alignment between instruction, learning and assessment implies that these three elements should be based on the same underlying principles, in this case competence-based education. Birenbaum et al. state in their EARLI position paper (2006) that current assessment practices in European countries fail to address learners' needs because they tend to focus on assessment of learning instead of assessment for learning, are limited in scope, drive teaching for assessment instead of teaching for learning, and ignore individual differences. Although part of this may be true, new assessment methods are not without problems either: some feel that the evidence against classical tests is not as strong as has been claimed (Hambleton & Murphy, 1992), and that the claim that newer forms of assessment are better suited to address learners' needs still needs empirical confirmation (Stokking, Van der Schaaf, Jaspers, & Erkens, 2004). Still, as a consequence of the changes towards competence-based education, a call is growing for the development of a...