Based on three one-day workshops with teachers, we identify drivers of and barriers to introducing cybersecurity into secondary school education. We found that students, although more knowledgeable about cybersecurity than their teachers, lacked understanding of career pathways and online safety. Teachers, meanwhile, lacked adequate knowledge and resources.
Existing research has shown that developers use Stack Overflow to answer programming questions: but what draws them to one particular answer over another? The answer they choose can mean the difference between a secure application and an insecure one, as the quality of supposedly secure answers varies. Prior work has studied people posting on Stack Overflow: a two-way communication between the original poster and the Stack Overflow community. Instead, we study one-way communication, where people only read a Stack Overflow thread without being actively involved in it, sometimes long after the thread has closed. We report on a mixed-methods study comprising a controlled between-groups experiment and a qualitative analysis of participants' rationales (N=1188), investigating whether explanation detail, answer score, accepted-answer marks, and the security of the code snippet itself affect which answers participants accept. Our findings indicate that explanation detail affects which answers participants reading a thread select (p<0.01), while answer score and acceptance do not (p>0.05): the inverse of what research has shown for those asking and answering questions. The qualitative analysis of participants' rationales further explains how several cognitive biases underpin these findings. Correspondence bias, in particular, plays an important role in instilling in readers a false sense of confidence in an answer through the way it looks, regardless of whether it works, is secure, or the community agrees with it. As a result, we argue that Stack Overflow's use as a knowledge base by people not actively involved in threads, where communication is only one-way, may inadvertently contribute to the spread of insecure code, as the community's voting mechanisms hold little power to steer such readers away from poor answers.
Cyber security decision making is inherently complicated, with nearly every decision having knock-on consequences for an organisation's vulnerability and exposure. This is further compounded by the fact that decision-making actors are rarely security experts and may have an incomplete understanding of the security measures the organisation already has in place. They must contend with a multitude of possible security options that they may only partially understand. This challenge is met by decision makers' risk thinking: their strategies for identifying risks, assessing their severity, and prioritising responses. We study the risk-thinking strategies employed by teams of participants in an existing dataset derived from a tabletop cyber-physical systems security game. Our analysis identifies four structural patterns of risk thinking and two reasoning strategies: risk-first and opportunity-first. Our work highlights that risk-first approaches (as prescribed by standards such as NIST SP 800-53 and ISO 27001) are followed neither consistently nor exclusively in decision making. Instead, our analysis finds that decision making is shaped by the plasticity of teams, that is, their ability to readily switch between ideas and to practise both risk-first and opportunity-first reasoning.
Cyber security requirements are influenced by the priorities and decisions of a range of stakeholders. Board members and CISOs determine strategic priorities. Managers are responsible for resource allocation and project management. Legal professionals concern themselves with regulatory compliance. Little is understood about how the security decision-making approaches of these different stakeholders compare, or whether particular groups of stakeholders have a better appreciation of security requirements during decision making. Are risk analysts better decision makers than CISOs? Do security experts exhibit more effective strategies than board members? This paper explores the effect that different experience and diversity of expertise have on the quality of a team's cyber security decision making, and whether teams with members from more varied backgrounds perform better than those with more focused, homogeneous skill sets. Using data from 208 sessions and 948 players of a tabletop game run in the wild by a major national organization over 16 months, we explore how choices are affected by player background (e.g., cyber security experts versus risk analysts, board-level decision makers versus technical experts) and by different team make-ups (homogeneous teams of security experts versus various mixes). We find that no group of experts makes significantly better game decisions than any other, and that their biases lead them to not fully comprehend what they are defending or how the defenses work.

1. www.decisions-disruptions.org
2. 13 additional recordings were untranscribable due to noise.