The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions, both qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full-scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis.
Several serious incidents are unforeseen by organizations, companies and actors when they occur. Organizations as well as individuals are challenged by continuous threats, accidents and unforeseen events. Unforeseen events have different characteristics from events that can easily be predicted on the basis of historical data and experience. This paper describes the data collection concept Methodology for handling the unforeseen (UN-METH), developed within the Strategic Institute Initiative at IFE (Institute for Energy Technology), IO-EPO (Integrated Operations-Emergency Preparedness Organization), and draws on the insight into the nature of the unforeseen developed through the Norwegian basic research and book project "Pedagogy for the unforeseen". UN-METH consists of two different approaches: UN-CAF (Unforeseen Competence Assurance Framework), in which an organization's preparedness plans are analyzed to determine to what extent they consider the unforeseen, and UN-ORG (UNforeseen Organization questionnaire), a questionnaire that can be distributed to personnel in an organization, in which individuals evaluate their organization's preparedness and ability to handle the unforeseen. The main purpose of this article is to document the development and evaluation process of UN-ORG. This process was conducted to investigate the applicability, usefulness and relevance of the questionnaire directly with professionals with relevant experience in the area. The development and evaluation approach is based on methodological principles proposed by Stufflebeam. Interviews, a survey and a case study were used during the evaluation. The results indicated that the questionnaire is highly applicable, that it focuses on the unforeseen, and that it covers an important area. The interviews further identified specific recommendations for items to improve and to add. Publishing the findings from this development and evaluation process is a first step in making the method known to different organizations. By using UN-ORG, separately or in combination with UN-CAF, organizations can gain valuable insight into their own preparedness for the unforeseen, and the researchers can obtain useful input and gradually improve the methodology itself.
There is increasing interest in the use of artificial intelligence (AI) to improve organizational decision-making. However, research indicates that people’s trust in and choice to rely on “AI decision aids” can be tenuous. In the present paper, we connect research on trust in AI with Mayer, Davis, and Schoorman’s (1995) model of organizational trust to elaborate a conceptual model of trust, perceived risk, and reliance on AI decision aids at work. Drawing from the trust in technology, trust in automation, and decision support systems literatures, we redefine central concepts in Mayer et al.’s (1995) model, expand the model to include new, relevant constructs (like perceived control over an AI decision aid), and refine propositions about the relationships expected in this context. The conceptual model put forward presents a framework that can help researchers studying trust in and reliance on AI decision aids develop their research models, build systematically on each other’s research, and contribute to a more cohesive understanding of the phenomenon. Our paper concludes with five next steps to take research on the topic forward.