Computer-supported learning technologies are essential for conducting hands-on cybersecurity training. These technologies create environments that emulate a realistic IT infrastructure for the training. Within the environment, training participants use various software tools to perform offensive or defensive actions. Using these tools generates data that can be employed to support learning. This paper investigates innovative methods for leveraging the trainee data to provide automated feedback about the performed actions. We proposed and implemented feedback software with four modules that are based on analyzing command-line data captured during the training. The modules feature progress graphs, conformance analysis, an activity timeline, and error analysis. We then performed field studies with 58 trainees who completed cybersecurity training, used the feedback modules, and rated them in a survey. Quantitative evaluation of responses from 45 trainees showed that the feedback is valuable and supports the training process, even though some features are not yet fine-tuned. The graph visualizations were perceived as the most understandable and useful. Qualitative evaluation of trainees' comments revealed specific aspects of the feedback that can be improved. We publish the software as an open-source component of the KYPO Cyber Range Platform. Moreover, the principles of the automated feedback generalize to different learning contexts, such as operating systems, networking, databases, and other areas of computing. Our results contribute to applied research, the development of learning technologies, and current teaching practice.
The computing education research community now has at least 40 years of published research on teaching ethics in higher education. To examine the state of our field, we present a systematic literature review of papers in the Association for Computing Machinery (ACM) computing education venues that describe teaching ethics in higher-education computing courses. Our review spans all papers published in the SIGCSE, ICER, ITiCSE, CompEd, Koli Calling, and TOCE venues through 2022, with 100 papers fulfilling our inclusion criteria. Overall, we found wide variety in content, teaching strategies, challenges, and recommendations. The majority of the papers did not articulate a conception of "ethics," and those that did used many different conceptions, from broadly applicable ethical theories, to social impact, to specific computing application areas (e.g., data privacy, hacking). Instructors used many different pedagogical strategies (e.g., discussions, lectures, assignments) and formats (e.g., standalone courses, incorporation within a technical course). Many papers identified measuring student knowledge as a particular challenge, and 59% of papers mentioned assessments or grading. Of the 69% of papers that evaluated their ethics instruction, most used student self-report surveys, course evaluations, and instructor reflections. While many papers included calls for more ethics content in computing, specific recommendations were rarely broadly applicable, preventing a synthesis of guidelines. To continue building on the last 40 years of research and move toward a set of best practices for teaching ethics in computing, our community should delineate our varied conceptions of ethics, examine which teaching strategies are best suited for each, and explore how to measure student learning.