Abstract: Compound Remote Associate (CRA) problems have been used to investigate insight problem solving with both behavioral and neuroimaging techniques. However, it is unclear to what extent CRA problems exhibit characteristics of insight such as impasses and restructuring. CRA problem-solving characteristics were examined in a study in which participants solved CRA problems while providing concurrent verbal protocols. The results show that solutions subjectively judged as insight by participants do exhibit some characteristics of insight. However, they also show that there are at least two distinct ways in which people experience insight when solving CRA problems. Sometimes a problem is solved and judged as insight when the solution is the first thing considered, but such solutions exhibit no characteristics of insight aside from the "Aha!" experience. In other cases, the solution is derived after a longer period of problem solving, and the solution process more closely resembles insight as it has traditionally been defined in the literature. These findings suggest that separating the two types of solution processes may provide a better understanding of the behavioral and neuroanatomical correlates of insight solutions.
Recent research in cybersecurity has begun to develop active defense strategies that combine game-theoretic optimization of the allocation of limited defenses with deceptive signaling. These algorithms assume rational human behavior. However, in an online game designed to simulate an insider-attack scenario, humans playing the role of attackers attack far more often than predicted under perfect rationality. We describe an instance-based learning cognitive model, built in ACT-R, that accurately predicts human performance and biases in the game. To improve defenses, we propose an adaptive signaling method that uses the cognitive model to trace an individual's experience in real time. We discuss the results and implications of this adaptive signaling method for personalized defense.
This work is an initial step toward developing a cognitive theory of cyber deception. While widely studied, the psychology of deception has largely focused on physical cues of deception. Given that present-day communication among humans is largely electronic, we focus on the cyber domain, where physical cues are unavailable and for which there is less psychological research. To improve cyber defense, researchers have used signaling theory to extend algorithms, developed for the optimal allocation of limited defense resources, with deceptive signals designed to trick the human mind. However, these algorithms are designed to protect against adversaries that make perfectly rational decisions. In behavioral experiments using an abstract cybersecurity game (i.e., the Insider Attack Game), we examined human decision-making when paired against the defense algorithm. We developed an instance-based learning (IBL) model of an attacker using the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture to investigate how humans make decisions under deception in cyber-attack scenarios. Our results show that the defense algorithm is more effective at reducing the probability of attack and protecting assets when using deceptive signaling, compared to no signaling, but is less effective than predicted against a perfectly rational adversary. Moreover, the IBL model replicates human attack decisions accurately. The IBL model shows how human decisions arise from experience, and how memory retrieval dynamics can give rise to cognitive biases, such as confirmation bias. The implications of these findings are discussed from the perspective of informing theories of deception and designing more effective signaling schemes that account for human bounded rationality.
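The IBL mechanism sketched in the abstract can be illustrated with a minimal, hypothetical implementation. This is our own simplification of ACT-R-style instance-based learning (class and parameter names are ours, not the authors'): each instance stores a decision, its observed outcome, and the times it occurred; activation follows ACT-R's base-level learning equation plus Gaussian noise; choice maximizes a blended (activation-weighted) outcome value, with an optimistic default utility for unexperienced options driving exploration.

```python
import math
import random

class IBLAgent:
    """Minimal instance-based learning sketch (ACT-R-style activation + blending).

    A simplified illustration, not the authors' model: parameter values
    and structure are assumptions for demonstration purposes.
    """

    def __init__(self, decay=0.5, noise=0.25, default_utility=10.0):
        self.d = decay                   # base-level decay rate
        self.s = noise                   # activation noise scale
        self.default = default_utility   # optimistic prior for unseen options
        self.instances = []              # list of (decision, outcome, [timestamps])

    def _activation(self, timestamps, t):
        # ACT-R base-level learning: log of summed power-law decayed presentations,
        # plus transient Gaussian noise.
        base = math.log(sum((t - ti) ** -self.d for ti in timestamps))
        return base + random.gauss(0, self.s)

    def blended_value(self, decision, t):
        # Blending: outcomes weighted by softmax over instance activations.
        hits = [inst for inst in self.instances if inst[0] == decision]
        if not hits:
            return self.default
        acts = [self._activation(inst[2], t) for inst in hits]
        tau = self.s * math.sqrt(2)      # Boltzmann temperature tied to noise
        weights = [math.exp(a / tau) for a in acts]
        z = sum(weights)
        return sum(w / z * inst[1] for w, inst in zip(weights, hits))

    def choose(self, options, t):
        # Pick the option with the highest blended value.
        return max(options, key=lambda o: self.blended_value(o, t))

    def record(self, decision, outcome, t):
        # Store (or reinforce) the instance for this decision-outcome pair.
        for inst in self.instances:
            if inst[0] == decision and inst[1] == outcome:
                inst[2].append(t)
                return
        self.instances.append((decision, outcome, [t]))
```

Because choice is driven by retrieved past experiences rather than full expected-value computation, an agent like this can over-attack relative to a rational baseline: early successful attacks are reinforced and dominate retrieval, a dynamic consistent with the confirmation-bias effect the abstract describes.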
This paper improves upon recent game-theoretic deceptive signaling schemes for cyber defense using insights from a cognitive model of the human attacker. One defense allocation algorithm that uses a deceptive signaling scheme is the peSSE (Xu et al., 2015). However, this static signaling scheme optimizes the rate of deception for perfectly rational adversaries and is not personalized to individuals. Here we advance this research by developing a dynamic and personalized signaling scheme using cognitive modeling. A cognitive model based on a theory of experiential choice (Instance-Based Learning Theory; IBLT), implemented in a cognitive architecture (Adaptive Control of Thought-Rational; ACT-R), and validated through human experiments with deceptive signals informs the development of a cognitive signaling scheme. The predictions of the cognitive model show that the proposed solution increases compliance with deceptive signals beyond the peSSE. These predictions were verified in human experiments, and the results shed additional light on human reactions toward adaptive deceptive signals.
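For intuition, the kind of optimization a rational-adversary signaling scheme performs can be sketched as follows. This is our own single-target simplification, not the peSSE's exact formulation: the target is truly monitored with probability m, the defender always signals "monitored" when it is, and bluffs the same signal on unmonitored rounds with probability x. A perfectly rational attacker attacks on the signal only if the signal-conditioned expected utility is positive, so the defender raises x to the largest value keeping that expectation non-positive.

```python
def optimal_bluff_rate(m, gain, loss):
    """Largest bluff rate x that keeps a rational attacker's expected
    utility of attacking on a 'monitored' signal non-positive.

    An illustrative simplification (our notation, not Xu et al.'s):
      m    -- probability the target is truly monitored (0 < m < 1)
      gain -- attacker payoff for a successful, unmonitored attack (> 0)
      loss -- attacker payoff when caught at a monitored target (< 0)

    Attacking on the signal has expected utility proportional to
        m * loss + (1 - m) * x * gain,
    so the break-even bluff rate solves this for zero, capped at 1.
    """
    x = -m * loss / ((1 - m) * gain)
    return min(1.0, x)
```

The point of the paper's cognitive signaling scheme is that this break-even rate is tuned to a perfectly rational adversary; a human attacker whose decisions arise from memory of past rounds can profitably be signaled at a different, individually adapted rate, which the IBL model traces in real time.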