“…making regular backups ensure that your photos will always be there for you." FA3 Exact details related to feasible recommended actions to be taken to reduce or remove the threat (how to assuage the fear (Leventhal 1970; Witte 1992; Lawson et al 2016)) e.g. "register for a cloud backup service and set up auto-backup on your devices, as follows..." FA4 A statement related to their ability to take the action: self-efficacy (the individual's belief in being able to take the action (Dillard et al 2017; Dabbs and Leventhal 1966; Hamilton et al 2000; Emery et al 2014; Bandura 2001; Hartmann et al 2014; Peters et al 2013; Bandura 1977)) e.g.…”
Section: Fear Appeals (FAi)
Citation type: mentioning
confidence: 99%
“…The recommended action gives the participant a reasonable cybersecurity action that can be taken, thereby making it possible for the recipient to engage in danger control (Lawson et al 2016; Leventhal 1970; Witte 1992). Ensuring that a feasible action is available addresses potential detriment PD6.…”
Section: E4: Provide Feasible Recommended Cybersecurity Action
Citation type: mentioning
Fear appeals are used in many domains. Cybersecurity researchers are also starting to experiment with fear appeals, many reporting positive outcomes. Yet there are ethical concerns related to the use of fear to motivate action. In this paper, we explore this aspect from the perspectives of cybersecurity fear appeal deployers and recipients. We commenced our investigation by considering fear appeals from three foundational ethical perspectives. We then consulted the two stakeholder groups to gain insights into the ethical concerns they consider to be pertinent. We first consulted deployers: (a) fear appeal researchers and (b) Chief Information Security Officers (CISOs), and then potential cybersecurity fear appeal recipients: members of a crowdsourcing platform. We used their responses to develop an effects-reasoning matrix, identifying the potential benefits and detriments of cybersecurity fear appeals for all stakeholders. Using these insights, we derived six ethical principles to guide cybersecurity fear appeal deployment. We then evaluated a snapshot of cybersecurity studies using the ethical principle lens. Our contribution is, first, a list of potential detriments that could result from the deployment of cybersecurity fear appeals and second, the set of six ethical principles to inform the deployment of such appeals. Both of these are intended to inform cybersecurity fear appeal design and deployment.
“…This outcome is frequently attributed to processes of "hyping", whereby reporting tends to concentrate on dramatic fear-arousing accounts of survivors (Ali, 2013; Houston et al, 2012; Kasperson et al, 1988; Slovic, 2000; Vasterman, 2005; Vasterman et al, 2005; Wenger and Friedman, 1986), as well as the hunt for information concerning the materialisation of similar events in the future (Coppola, 2005: 43-44; Kitzinger and Reilly, 1997). Numerous studies have therefore highlighted cases in which coverage of disaster risks has contributed to amplified public fears, including reporting around terrorism (Izard and Perkins, 2011; Jha and Izard, 2011), cyber threats (Debrix, 2001; Lawson et al, 2016), and pandemic disease (Espinola et al, 2016).…”
Section: A 'Culture Of Fear' and Disaster Risk Concern In The United …
Citation type: mentioning
“…A second example could be sought in the increasing scale of cyberattacks. The term "Cyber Pearl Harbor" is used for an attack of such a scale as to serve as a wake-up call due to the damage done [23,24,38]. Like exemplary singularity claims, it also represents a clear example of a doomsday scenario [23].…”
Section: Examples
Citation type: mentioning
confidence: 99%
“…The term "Cyber Pearl Harbor" is used for an attack of such a scale as to serve as a wake-up call due to the damage done [23,24,38]. Like exemplary singularity claims, it also represents a clear example of a doomsday scenario [23]. However, apart from being the apotheosis of a gradual increase in the size of attacks, it could be questioned whether this example has the characteristics of contagion or acceleration through system dynamics.…”
In future studies involving artificial intelligence, the so-called technological singularity is a key theme. It refers to a hypothetical point in the future where technological progress becomes automated through the creation of a new form of intelligence. Under the assumption of adversarial behaviour, this could pose an existential threat to humanity. More modestly, singularities and tipping points refer to thresholds beyond which the behaviour of a system changes in a qualitative way. The nonlinearity of the behaviour causes existing control mechanisms to become obsolete, guiding the system towards a new balance, if one exists. In this paper, we ask the question to what extent the notions of singularity and tipping point can contribute to an analysis of security in 2038. Can we expect to have seen such phenomena in twenty years' time, and will they have changed our perception of what security entails? Or are they useless forms of speculation diverting our attention away from the day-to-day best practices that are needed to keep our basic security up-to-date? We discuss examples of singularity-style developments, characterise them in terms of acceleration mechanisms and discontinuities, and discuss whether and how these characteristics should be used to prepare ourselves. We conclude that a broad discussion on potential security singularities and associated general adaptation strategies is more useful than focusing on one big singularity.
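The tipping-point idea in the abstract above, a threshold beyond which a system's behaviour changes qualitatively and old control mechanisms stop working, can be illustrated with a minimal sketch. The model below is not from the paper; it is a standard textbook example (logistic growth with a constant harvest rate `h`), and the function name `final_state` and all parameter values are illustrative assumptions.

```python
# Illustrative tipping-point sketch (not from the cited paper):
# harvested logistic growth, x_{t+1} = x_t + r*x_t*(1 - x_t) - h.
# Below a critical harvest rate h, the state settles at a positive
# equilibrium; above it, that equilibrium vanishes and the state
# collapses to zero -- a qualitative change from a small parameter shift.

def final_state(h, r=1.0, x0=0.8, steps=500):
    """Iterate the harvested logistic map and return the final state,
    clamped at 0 (a population cannot go negative)."""
    x = x0
    for _ in range(steps):
        x = x + r * x * (1.0 - x) - h
        if x <= 0.0:
            return 0.0
    return x

# A positive equilibrium solves r*x*(1 - x) = h; with r = 1 it exists
# only while h <= r/4 = 0.25, so the tipping point sits at h = 0.25.
print(final_state(0.20))  # ~0.7236: persists at a positive equilibrium
print(final_state(0.30))  # 0.0: past the tipping point, collapse
```

The point of the sketch is the discontinuity: moving `h` from 0.20 to 0.30 is a modest quantitative change, yet the outcome flips from persistence to collapse, which is why control strategies calibrated to the old regime become obsolete once the threshold is crossed.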
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.