Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts among a sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. Although the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also predicted human performance well in generating probability distributions across categories, assigning resources based on those distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.
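To make the probability-matching bias named above concrete, here is a minimal sketch (my own illustration, not the paper's model) contrasting a matching strategy, which spreads resources in proportion to the estimated category probabilities, with the normative maximizing strategy, which puts all resources on the most likely category; the probabilities and payoff rule are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_payoff(allocation, probs, trials=10_000):
    """Monte Carlo estimate of payoff: on each trial the true category is
    drawn from `probs`, and the payoff is the share allocated to it."""
    outcomes = rng.choice(len(probs), size=trials, p=probs)
    return allocation[outcomes].mean()

probs = np.array([0.6, 0.2, 0.1, 0.1])   # estimated category probabilities (assumed)

matching = probs.copy()                   # allocate in proportion to beliefs
maximizing = np.zeros_like(probs)
maximizing[probs.argmax()] = 1.0          # put everything on the modal category

print(f"matching:   {expected_payoff(matching, probs):.3f}")   # ~ sum(p_i^2) = 0.42
print(f"maximizing: {expected_payoff(maximizing, probs):.3f}") # ~ max(p_i)   = 0.60
```

Under this payoff rule, matching earns the sum of squared probabilities while maximizing earns the largest probability, so matching is suboptimal whenever the distribution is not uniform; this is the gap the bias refers to.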
Drawing on theory and an empirical study, we explored distributed and massed training schedules, as well as hybrids of the two, with respect to three types of knowledge. The results suggest that industrial and operator training for complex tasks need not, and probably should not, follow a distributed training schedule.
We report a way to build a series of GOMS-like cognitive user models representing a range of performance at different stages of learning. As an example task we use a spreadsheet task performed across multiple sessions; it takes about 20–30 min to perform. The models were created in ACT-R using a compiler. The novice model has 29 rules and 1,152 declarative memory task elements (chunks); it learns by compiling this declarative knowledge into procedural knowledge as it performs the task. The expert model has 617 rules, 614 task chunks (which it does not use), and 538 command-string chunks; it gets slightly faster through limited declarative learning of the command strings and some further production compilation. A range of intermediate models lies between these two. The models were tested against aggregate and individual human learning data, which confirmed their predictions. This work suggests that user models can be created that learn like users while doing the task.
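Learning curves like the ones these models are tested against are commonly summarized by the power law of practice, T(n) = a · n^(−b). The sketch below uses illustrative constants I chose to land in the 20–30 min range mentioned above; they are not the fitted values from this work.

```python
import numpy as np

def power_law_of_practice(n, a=25.0, b=0.4):
    """Power law of practice: time on session n is T(n) = a * n**(-b).
    The constants a and b here are illustrative, not fitted values."""
    return a * n ** (-b)

sessions = np.arange(1, 9)
for n, t in zip(sessions, power_law_of_practice(sessions)):
    print(f"session {n}: ~{t:.1f} min")
```

Plotting log time against log session number turns this curve into a straight line with slope −b, which is the usual way aggregate speed-up data of this kind are compared to model predictions.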
Animals routinely adapt to changes in their environment in order to survive. Though reinforcement learning may play a role in such adaptation, it is not clear that it is the only mechanism involved, as it is not well suited to producing rapid, relatively immediate changes in strategy in response to environmental changes. We explored the possible adaptive mechanisms underlying human behavior in a change detection experiment using a cognitive model. Besides reinforcement learning, the model incorporates counterfactual reasoning to help learn the utility of different task strategies under different environmental conditions. The results show that the model accurately reproduces the human data and that counterfactual reasoning is key to reproducing the various effects observed in this change detection paradigm.
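A standard reinforcement-learning account would update only the utility of the strategy actually used, via a delta rule of the form U(n) = U(n−1) + α[R(n) − U(n−1)]. One simple way to add counterfactual reasoning, sketched below under my own assumptions rather than as the paper's exact mechanism, is to also nudge the utilities of unchosen strategies toward the reward they would have received, which lets a better strategy overtake the current one quickly after an environmental change.

```python
def update_utilities(utilities, rewards, chosen, alpha=0.2, alpha_cf=0.05):
    """One learning step over competing task strategies.

    utilities : dict mapping strategy name -> current utility U(n-1)
    rewards   : dict mapping strategy name -> reward R(n); the chosen
                strategy's reward is experienced, the others are the
                counterfactual "would have received" rewards
    chosen    : name of the strategy actually used on this trial
    """
    for strategy, u in utilities.items():
        # Standard delta rule for the chosen strategy; a smaller
        # counterfactual learning rate for the strategies not taken.
        rate = alpha if strategy == chosen else alpha_cf
        utilities[strategy] = u + rate * (rewards[strategy] - u)
    return utilities

# One trial in which the 'conservative' strategy was chosen, but the
# 'aggressive' strategy would have paid off more after the change.
u = {"conservative": 0.5, "aggressive": 0.5}
r = {"conservative": 0.2, "aggressive": 0.9}
print(update_utilities(u, r, chosen="conservative"))
```

With only the first (experienced) update, the unchosen strategy's utility never moves until it happens to be tried; the counterfactual term is what produces the rapid strategy shifts the abstract describes.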