Programmatic instructional design hinges on a careful description and motivation of design choices, whose effectiveness should be measured against the intended outcomes. Rather than evaluating individual assessment methods in isolation, we should provide evidence of the utility of the assessment programme as a whole.
We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles interpreted from empirical research. It specifies cycles of training, assessment and learner-support activities that are complemented by intermediate and final moments of evaluation based on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Worldwide, universities in the health sciences have transformed their curricula to include collaborative learning and to facilitate students' learning processes. Interaction has been acknowledged to be the synergistic element in this learning context. However, students spend the majority of their time outside the classroom, and interaction does not stop at the classroom door. We therefore studied how informal social interaction influences student learning. Moreover, to explore what really matters in students' learning, we tested a model of how the constructs generally known to be important, namely prior performance, motivation and social integration, relate to informal social interaction and student learning. In this cross-sectional quantitative study, 301 undergraduate medical students participated. Informal social interaction was assessed using self-reported surveys following the social network approach. Students' individual motivation, social integration and prior performance were assessed by the Academic Motivation Scale, the College Adaptation Questionnaire and students' GPA, respectively. A factual knowledge test represented student learning. All social networks were significantly positively associated with student learning: friendships (β = 0.11), providing information to other students (β = 0.16) and receiving information from other students (β = 0.25). Structural equation modelling revealed a model in which social networks showed the strongest association with student learning (r = 0.43), followed by prior performance (r = 0.31). In contrast to prior literature, students' academic motivation and social integration were not associated with students' learning. Students' informal social interaction is strongly associated with their learning. These findings underline the need to shift our focus from the formal context (the classroom) to the informal context in order to optimize student learning and train modern medical professionals.
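The abstract reports network-learning associations as standardized regression coefficients (β). For a single predictor, the standardized beta reduces to the Pearson correlation between predictor and outcome. A minimal, self-contained sketch of that computation, using entirely invented data (the variable names and values below are illustrative assumptions, not figures from the study):

```python
# Hypothetical illustration: a standardized beta for a one-predictor
# regression of a knowledge-test score on an informal-network measure.
# All data are made up; only the computation itself is standard.

from math import sqrt

def standardized_beta(x, y):
    """For simple (one-predictor) regression, the standardized beta
    equals the Pearson correlation between predictor and outcome."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical: number of peers each student receives information from
in_degree = [1, 3, 2, 5, 4, 0, 6, 2]
# Hypothetical: factual knowledge test scores for the same students
test_score = [52, 61, 55, 70, 66, 48, 74, 58]

beta = standardized_beta(in_degree, test_score)
print(round(beta, 2))
```

In the study itself, betas for multiple network measures and the SEM path coefficients would come from a multivariate model fitted with dedicated statistical software; this sketch only shows what a single standardized association means.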