2014
DOI: 10.1186/1472-6920-14-197
The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement

Abstract: Background: Little is known about the technical adequacy of portfolios in reporting multiple complex academic and performance-based assessments. We explored, first, the factors influencing the precision of scoring within a programmatic assessment of student learning outcomes within an integrated clinical placement, and second, the degree to which validity evidence supported interpretation of student scores. Methods: Within generalisability theory, we estimated the contribution that each wanted factor (i.e. student c…

Cited by 30 publications
(34 citation statements)
References 36 publications
“…Programmatic assessment has been described in medical programs,13–17 postgraduate residency programs,18 veterinary courses,19 and nutrition and dietetics.20 Developed and implemented mostly in the Netherlands,16,17,19,21 it has also been implemented in the USA,14,15 Canada,18 Australia,20,22 and New Zealand.13 …”
Section: Models Of Programmatic Assessment
confidence: 99%
“…3 Some descriptions of programmatic assessment focus primarily on the use of a portfolio as a means of capturing the evidence and contributing to decision making.14,15,22 However, it may be timely to consider (and encourage) other models. To do this requires us to dissect out the key elements, which we suggest are:

- Create clear expectations of required learning
- Undertake purposeful selection of assessments
- Focus on those learners who need extra attention and/or extra information
- Separate data from decisions
- Aggregate by attribute, not method or timing
- Make decisions on aggregate, not on individual assessments
- Promote sharing of information and dialogue around narrative rather than numbers
- Maximize the assessments to guide learning …”
Section: Models Of Programmatic Assessment
confidence: 99%
“…It has been suggested that inter-assessor reliability (that is, the consistency in marks between assessors) for high-stakes medical education assessments should show correlations of r = 0.70–0.80 (Roberts, Shadbolt, Clark, & Simpson, 2014). Average inter-assessor reliability is in fact typically ~0.60, is affected by a wide range of factors, and will vary across submissions (e.g.…”
Section: Effect Of Assessment Format And Criteria On Assessor Reliability
confidence: 99%
“…Average inter-assessor reliability is in fact typically ~0.60, is affected by a wide range of factors, and will vary across submissions (e.g. Bloxham & Price, 2013; Elton & Johnston, 2002; Roberts et al., 2014).…”
Section: Effect Of Assessment Format And Criteria On Assessor Reliability
confidence: 99%
“…
- HPAC's programmatic assessment case study 2016
- the portfolio approach to a curriculum module described in Roberts et al (2014)
- the approach developed by the Association of American Medical Colleges (2014), which is concerned with a learner's trustworthiness to complete clinical tasks that integrate different skills and knowledge independently (ten Cate 2013).
…”
confidence: 99%