Individuals with autism spectrum disorder (ASD) display an interest in and affinity for technology; however, this population’s wide range of special needs is not often taken into account in technology design or usability testing. To assist human factors professionals in understanding this user population, we describe three domains of special needs for those with ASD: information processing, communication, and behavior. We then propose a model that human factors professionals can employ to understand the unique characteristics of individuals with ASD. We also synthesize research on design considerations for these users and present a composite list of recommendations for usability testing. Thus, this paper is intended to inform the human factors community of the unique characteristics of this user group and to provide guidelines for both design and usability testing.
Purpose This study aims to examine how social engineers use persuasion principles during vishing attacks. Design/methodology/approach In total, 86 examples of real-world vishing attacks were found in articles and videos. Each example was coded to determine which persuasion principles were present in that attack and how they were implemented, i.e. what specific elements of the attack contributed to the presence of each persuasion principle. Findings Authority, social proof and distraction were the most widely used persuasion principles in vishing attacks, followed by liking, similarity and deception. These four persuasion principles occurred in a majority of vishing attacks, while commitment, reciprocation and consistency did not. Further, certain sets of persuasion principles (i.e. authority, distraction, liking/similarity/deception and social proof; authority, commitment/reciprocation/consistency, distraction, liking/similarity/deception and social proof; and authority, distraction and social proof) were used more than others. It was noteworthy that, despite their similarities, those sets of persuasion principles were implemented in different ways, and certain specific ways of implementing certain persuasion principles (e.g. vishers claiming to have authority over the victim) were quite rare. Originality/value To the best of the authors’ knowledge, this study is the first to investigate how social engineers use persuasion principles during vishing attacks. As such, it provides important insight into how social engineers implement vishing attacks and lays a critical foundation for future research investigating the psychological aspects of vishing attacks. The present results have important implications for vishing countermeasures and education.
Phishing emails have certain characteristics, including wording related to urgency and unrealistic promises (i.e., “too good to be true”), that attempt to lure victims. To test whether these characteristics affected users’ suspiciousness of emails, users participated in a phishing judgment task in which we manipulated 1) email type (legitimate, phishing), 2) consequence amount (small, medium, large), 3) consequence type (gain, loss), and 4) urgency (present, absent). We predicted users would be most suspicious of phishing emails that were urgent and offered large gains. Results partially supported these hypotheses: users were more suspicious of phishing emails with a gain consequence type or a large consequence amount. However, urgency was not a significant predictor of suspiciousness for phishing emails, though it was for legitimate emails. These results have important cybersecurity-related implications for penetration testing and user training.
Interrater reliability (IRR) assesses the stability of a coding protocol over time and across coders. For practical reasons, it is often difficult to assess IRR for an entire dataset, so researchers sometimes calculate the IRR for a subset of the total data sample. The purpose of this study is to investigate the accuracy of such subset IRRs. Using bootstrapping, we determined the effects of sample size (10%, 25%, & 40% of the total dataset) and IRR measure type (percent agreement, Krippendorff’s alpha, & the G Index) on the bias and percent error of subset IRRs. Results support the use of calculating IRR from subsets of the total data sample, though we discuss how the accuracy of subset IRR values may depend on aspects of the dataset such as total sample size and coding methodology.
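The subset-IRR procedure the abstract describes can be illustrated with a short sketch. This is a hypothetical reconstruction, not the study's actual code: it uses percent agreement as the IRR measure and `random.sample` to draw item subsets, and the function name, sampling scheme, and parameters are illustrative assumptions.

```python
import random


def percent_agreement(coder_a, coder_b):
    """Fraction of items on which the two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)


def bootstrap_subset_irr(coder_a, coder_b, subset_frac=0.25,
                         n_boot=1000, seed=0):
    """Repeatedly draw random item subsets of the given fraction, compute the
    IRR on each, and compare the mean subset IRR to the full-sample IRR.

    Returns (full_irr, mean_subset_irr, bias), where bias is the mean
    subset IRR minus the full-sample IRR.
    """
    rng = random.Random(seed)
    n = len(coder_a)
    k = max(1, round(n * subset_frac))  # subset size, e.g. 25% of the data
    full_irr = percent_agreement(coder_a, coder_b)

    subset_irrs = []
    for _ in range(n_boot):
        idx = rng.sample(range(n), k)  # sample items without replacement
        subset_irrs.append(percent_agreement([coder_a[i] for i in idx],
                                             [coder_b[i] for i in idx]))

    mean_subset_irr = sum(subset_irrs) / n_boot
    return full_irr, mean_subset_irr, mean_subset_irr - full_irr
```

The same loop generalizes to Krippendorff’s alpha or the G Index by swapping the agreement function; the bias and spread of the subset estimates are what indicate whether a given subset percentage yields an accurate IRR.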
Phishing attack countermeasures have previously relied on technical solutions or user training. As phishing attacks continue to impact users, resulting in adverse consequences, mitigation efforts may be strengthened through an understanding of how user characteristics predict phishing susceptibility. Several studies have identified factors of interest that may contribute to susceptibility. Others have begun to build predictive models to better understand the relationships among factors in addition to their predictive power, although these studies have used only a handful of predictors. As a step toward creating a holistic model to predict phishing susceptibility, it was first necessary to catalog all known predictors that have been identified in the literature. We identified 32 predictors related to personality traits, demographics, educational background, cybersecurity experience and beliefs, platform experience, email behaviors, and work commitment style.
While labor issues and quality assurance in crowdwork are increasingly studied, how annotators make sense of texts and how they are personally impacted by doing so are not. We study these questions via a narrative-sorting annotation task, where carefully selected (by sequentiality, topic, emotional content, and length) collections of tweets serve as examples of everyday storytelling. As readers process these narratives, we measure their facial expressions, galvanic skin response, and self-reported reactions. From the perspective of annotator well-being, a reassuring outcome was that the sorting task did not cause a measurable stress response; however, readers did react to humor. In terms of sensemaking, readers were more confident when sorting sequential, target-topical, and highly emotional tweets. As crowdsourcing becomes more common, this research sheds light on the perceptive capabilities and emotional impact of human readers.