2018
DOI: 10.29060/taps.2018-3-2/oa1049

The process of developing a rubric to assess the cognitive complexity of student-generated multiple choice questions in medical education

Abstract: Cognitively complex assessments encourage students to prepare using deep learning strategies rather than surface-learning, recall-based ones. In order to prepare such assessment tasks, it is necessary to have some way of measuring cognitive complexity. In the context of a student-generated MCQ writing task, we developed a rubric for assessing the cognitive complexity of MCQs based on Bloom's taxonomy. We simplified the six-level taxonomy into a three-level rubric. Three rounds of moderation and rubric development…
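The abstract mentions collapsing Bloom's six-level taxonomy into a three-level rubric. As a purely hypothetical sketch of what such a collapse can look like in code (the paper's actual groupings are not reproduced on this page, so the mapping below is an assumption, not the published rubric):

```python
# Hypothetical illustration only: the paper's actual three-level groupings
# are not shown on this page, so this mapping is an assumption.
BLOOM_TO_RUBRIC = {
    "remember":   1,  # recall-oriented items
    "understand": 1,
    "apply":      2,  # items requiring application of knowledge
    "analyze":    3,  # items requiring higher-order reasoning
    "evaluate":   3,
    "create":     3,
}

def rubric_level(bloom_level: str) -> int:
    """Map a Bloom's taxonomy level to a three-level complexity score."""
    return BLOOM_TO_RUBRIC[bloom_level.lower()]

print(rubric_level("Analyze"))  # -> 3
```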

Cited by 6 publications (4 citation statements)
References 16 publications

“…To assess feasibility of MCQ-writing, we rated MCQs for cognitive complexity and asked students how they went about completing the MCQ task. We rated question quality using a three-level rubric based on Bloom’s taxonomy (summarized in Table 1; see [33] for development of the rating system). We also asked students to indicate how long it took them to complete the task, and asked free-text questions (see Table 2).…”
Section: Methods (mentioning)
confidence: 99%
“…Inter-rater reliability measures the consistency of student outcomes when tested by different examiners, and may be reduced in MCQs as a result of assessment creators’ varying experience in knowledge and question-writing. Test-retest refers to the reproducibility of consistent MCQ results over time [31]. MCQs have higher test-retest reliability than other methods of assessment such as OSCEs, supervised learning events or essays, because the questioning environment is controlled and the options that candidates can choose are discrete [32].…”
Section: Review (mentioning)
confidence: 99%
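The statement above turns on inter-rater reliability for a multi-level rubric. A minimal sketch of one common way to quantify such agreement, Cohen's kappa (the cited studies do not necessarily use this statistic, and the ratings below are invented for illustration):

```python
# Minimal sketch: quantifying inter-rater agreement on a three-level rubric
# with Cohen's kappa. The ratings below are invented, not real study data.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 2, 3, 2, 1, 3, 2, 2]  # complexity levels assigned by rater A
rater_b = [1, 2, 3, 3, 1, 3, 2, 1]  # complexity levels assigned by rater B

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```

For ordinal rubric levels like these, a weighted variant (passing weights="quadratic") penalizes large disagreements more heavily than adjacent-level ones.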
“…MCQs have higher test-retest reliability than other methods of assessment such as OSCEs, supervised learning events or essays, because the questioning environment is controlled and the options that candidates can choose are discrete [32]. Nevertheless, test-retest reliability can be reduced by short intervals between MCQ tests, as participants may recall information from the first test, or by overly long intervals, as participants may have changed in some way (for example, through life stressors or a change in motivation), which could also bias results [31]. Internal consistency reliability can be maintained by ensuring consistent difficulty levels between the MCQ items comprising the assessment and, again, is generally higher than for other forms of assessment [9].…”
Section: Review (mentioning)
confidence: 99%
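The statement above also appeals to internal consistency. A minimal sketch of the statistic most commonly reported for it, Cronbach's alpha, implemented directly from its formula (the score matrix is invented; rows are examinees, columns are MCQ items):

```python
# Minimal sketch: Cronbach's alpha as a measure of internal consistency
# across MCQ items. Scores are invented (rows = examinees, cols = items).
import numpy as np

scores = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
], dtype=float)

k = scores.shape[1]                         # number of items
item_var = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of examinee totals

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```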
“…The learning objectives provide a foundation to gauge conceptually planned learning through a process called course mapping (21,22,23). To facilitate student learning and to evaluate the complexity of learning, the use of rubrics offers clarity, improved transparency and consistency (25). In addition, the use of rubrics enhances the provision of meaningful feedback to the student for understanding how learning objectives are met (26).…”
Section: Assessment (mentioning)
confidence: 99%