Self-efficacy, one's belief in one's capacity to perform, has been found to influence effort and performance both positively and negatively. The reasons for these divergent effects have been a major topic of debate between social-cognitive and perceptual control theorists. In particular, the various self-efficacy effects have emerged largely in research motivated by a perceptual control theory view of self-regulation that social-cognitive theorists question. To clarify the theoretical arguments, a computational model of the multiple processes presumed to create the positive, negative, and null effects of self-efficacy is presented. Building on an existing computational model of goal choice that produces a positive self-efficacy effect, the current article adds a symbolic processing structure used during goal striving that explains the negative self-efficacy effect observed in recent studies. Moreover, the multiple processes, operating together, allow the model to recreate the various effects found in a published study of feedback ambiguity's moderating role in the self-efficacy-to-performance relationship (Schmidt & DeShon, 2010). Discussion focuses on the implications of the model for the self-efficacy debate, alternative computational models, the overlap between control theory and social-cognitive theory explanations, the value of computational models for resolving theoretical disputes, and future research directions the model inspires.
Human behavioral factors have been insufficiently represented in structured models (e.g., ontology frameworks) of insider threat risk. This paper describes the design and development of a structured model that emphasizes individual and organizational sociotechnical factors while incorporating technical indicators from previous work. We compare this model with previous research and describe a use case to demonstrate how the model can be applied as an ontology. We also summarize the results of an expert knowledge elicitation study that reveals relationships among indicators and examines several quantitative models for assessing the threat posed by cases comprising multiple indicators.
The typical assumption that performance is distributed normally has come under question in recent years (e.g., O'Boyle & Aguinis, 2012). This paper uses a dynamic, computational model of performance-as-results to examine possible sources of such distributions. That is, building on the classic model of job performance (Campbell & Pritchard, 1976), components of a dynamic model are examined in 4 separate experiments using Monte Carlo simulations. The experiments indicate that positively skewed distributions can arise from pure luck, multiplicative combinations of factors where 1 of those factors has a zero origin, Matthew effects associated with learning, and feedback effects of performance on resource allocation policies by external agents. The results are discussed in terms of explanations for positively skewed performance distributions and the use and expansion of the computational model for examining dynamic performance more generally.

"I can only recognize the occurrence of the normal curve (the Laplacian curve of errors) as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations." Pearson (1901, p. 111)
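The multiplicative mechanism described above (a positively skewed distribution arising when factors combine multiplicatively and one factor has a zero origin) can be illustrated with a minimal Monte Carlo sketch. This is an assumed, simplified stand-in for the paper's actual model: the factor names (`ability`, `motivation`) and distributional parameters here are hypothetical choices for illustration only.

```python
import random
import statistics

def simulate_performance(n=10_000, seed=42):
    """Monte Carlo sketch: multiply two independent factors, one of
    which is truncated at zero (a zero-origin factor), to produce a
    positively skewed 'performance' distribution. Hypothetical
    factor names and parameters, not the published model."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        ability = max(rng.gauss(1.0, 0.3), 0.0)  # zero-origin factor
        motivation = rng.gauss(1.0, 0.3)
        scores.append(ability * motivation)
    return scores

def skewness(xs):
    """Population skewness (third standardized moment)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

scores = simulate_performance()
print(f"skewness = {skewness(scores):.2f}")  # positive, unlike a normal's 0
```

Even though each input factor is roughly symmetric, the product's right tail stretches out (large values of both factors compound multiplicatively), yielding positive skew rather than normality.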