Background
Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities is aggregated into a holistic high-stakes decision on the student’s performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on the perception of saturation of information during the interpretative process of high-stakes decision-making.
Methods
In this mixed-method intervention study the quality of the recorded narrative information was manipulated within multiple portfolios (i.e., feedback and reflection) to investigate its influence on 1) the perception of saturation of information and 2) the examiner’s interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data.
Results
Examiners perceived saturation of information less frequently in portfolios containing low-quality narrative feedback. Additionally, they mentioned consistency of information as a factor that influenced their perception of saturation. Although examiners generally followed their own idiosyncratic approach to assessing a portfolio, variations occurred in response to certain triggers, such as noticeable deviations in the student’s performance and in the quality of the narrative feedback.
Conclusion
The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback for making robust decisions on portfolios that are expected to be more difficult to assess. Furthermore, within these “difficult” portfolios, examiners adapted their interpretative process in reaction to the intervention and other triggers, by means of an iterative and responsive approach.
Purpose
This study aims to report the design, development and evaluation of a digital quality assurance application aimed at improving and ensuring the quality of assessment programmes in higher education.
Design/methodology/approach
The application was developed using a design-based research (DBR) methodology. The application’s design was informed by a literature search and needs assessment of quality assurance stakeholders to ensure compliance with daily practices and accreditation requirements. Stakeholders from three study programmes evaluated the application.
Findings
As part of the development of the application, module- and programme-level dashboards were created to provide an overview of the programme’s outcomes, assessment methods, assessment metrics, self-evaluated quality indicators and assessment documents. The application was evaluated by stakeholders at the module and programme levels. Overall, the results indicated that the dashboards aided them in gaining insight into the assessment programme and its alignment with underlying assessments.
Practical implications
Visualisation of the assessment programme’s structure and content identifies gaps and opportunities for improvement, which can be used to initiate a dialogue and further actions to improve assessment quality.
Originality/value
The application developed facilitates a cyclical and transparent assessment quality assurance procedure that is continuously available to various stakeholders in quality assurance.
Introduction
The shift toward an assessment for learning culture includes assessment quality criteria that emphasise the learning process, such as transparency and learning impact, in addition to the traditional validity and reliability criteria. In practice, the quality of the assessment depends on how the criteria are interpreted and applied. We explored how educators perceive and achieve assessment quality, as well as how they perceive assessment impact upon student learning.
Methods
We employed a qualitative research approach and conducted semi-structured interviews with 37 educators at one Dutch research university. The data were subsequently analysed using a template analysis.
Results
The findings indicate that educators predominantly perceive and achieve assessment quality through traditional criteria. The sampled curricular stakeholders largely perceived assessment quality at the course level, whilst few specified programme-level quality criteria. Furthermore, educators perceived the impact of assessment on student learning in two distinct ways: as a source of information to monitor and direct student learning, and as a tool to prompt student learning.
Discussion
The shift toward a culture of assessment for learning is not entirely reflected in educators’ current perceptions. The study’s findings set the stage for better assessment quality and alignment with an assessment for learning culture.