We analyze expert review and student performance data to evaluate the validity of the Cybersecurity Concept Inventory (CCI) for assessing student knowledge of core cybersecurity concepts after a first course on the topic. A panel of 12 experts in cybersecurity reviewed the CCI, and 142 students from six different institutions took the CCI as a pilot test. The panel reviewed each item of the CCI and the overwhelming majority rated every item as measuring appropriate cybersecurity knowledge. We administered the CCI to students taking a first cybersecurity course either online or proctored by the course instructor. We applied classical test theory to evaluate the quality of the CCI. This evaluation showed that the CCI is sufficiently reliable for measuring student knowledge of cybersecurity and that the CCI may be too difficult as a whole. We describe the results of the expert review and the pilot test and provide recommendations for the continued improvement of the CCI.
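The classical test theory evaluation mentioned above reduces to a few standard statistics. Below is a minimal sketch, assuming a 0/1 scored response matrix (rows = students, columns = items), of per-item difficulty and Cronbach's alpha; the item count and the random data are illustrative placeholders, not the actual CCI pilot data.

```python
# Illustrative classical test theory statistics for a scored response matrix.
import numpy as np

def item_difficulty(scores: np.ndarray) -> np.ndarray:
    """Proportion of students answering each item correctly (lower = harder)."""
    return scores.mean(axis=0)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability of the total test score."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example usage with random data standing in for real pilot responses.
rng = np.random.default_rng(0)
scores = (rng.random((142, 25)) > 0.6).astype(int)  # 142 students, 25 hypothetical items
print(item_difficulty(scores).round(2))
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```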
We present and analyze results from a pilot study that explores how crowdsourcing can be used in the process of generating distractors (incorrect answer choices) for multiple-choice concept inventories (conceptual tests of understanding). To our knowledge, we are the first to propose and study this approach. Using Amazon Mechanical Turk, we collected approximately 180 open-ended responses to several question stems from the Cybersecurity Concept Inventory of the Cybersecurity Assessment Tools Project and from the Digital Logic Concept Inventory. We generated preliminary distractors by filtering responses, grouping similar responses, selecting the four most frequent groups, and refining a representative distractor for each of these groups. We analyzed our data in two ways. First, we compared the responses and resulting distractors with those from the aforementioned inventories. Second, we obtained feedback from additional Amazon Mechanical Turk subjects on the resulting new draft test items (including distractors). Challenges in using crowdsourcing include controlling the selection of subjects and filtering out responses that do not reflect genuine effort. Despite these challenges, our results suggest that crowdsourcing can be a very useful tool in generating effective distractors (attractive to subjects who do not understand the targeted concept). Our results also suggest that this method is faster, easier, and cheaper than the traditional method of having one or more experts draft distractors, building on talk-aloud interviews with subjects to uncover their misconceptions. Our results are significant because generating effective distractors is one of the most difficult steps in creating multiple-choice assessments.
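The pipeline described above (filter low-effort responses, group similar answers, keep the four most frequent groups) can be outlined in a short sketch. This is a minimal illustration under the assumption that grouping is done by exact match after normalization; grouping semantically similar but differently worded responses would still require manual review, and the filtering rule, function name, and sample responses are hypothetical.

```python
# Illustrative distractor-selection pipeline for crowdsourced open-ended responses.
from collections import Counter

def candidate_distractors(responses: list[str], k: int = 4) -> list[tuple[str, int]]:
    # Drop responses that do not reflect genuine effort (very short or empty).
    cleaned = [r.strip().lower() for r in responses]
    cleaned = [r for r in cleaned if len(r.split()) >= 3]
    # Group identical (normalized) responses and count their frequency.
    groups = Counter(cleaned)
    # Keep the k most frequent groups as draft distractors to be refined by hand.
    return groups.most_common(k)

# Example usage with made-up responses to a hypothetical question stem.
responses = [
    "The attacker can read the password",
    "the attacker can read the password",
    "idk",
    "Nothing happens because the channel is encrypted",
    "The server rejects the request",
    "The attacker can read the password ",
]
print(candidate_distractors(responses))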
For two days in February 2018, 17 cybersecurity educators and professionals from government and industry met in a "hackathon" to refine existing draft multiple-choice test items, and to create new ones, for a Cybersecurity Concept Inventory (CCI) and Cybersecurity Curriculum Assessment (CCA) being developed as part of the Cybersecurity Assessment Tools (CATS) Project. We report on the results of the CATS Hackathon, discussing the methods we used to create and refine these test items.