Recent major accidents in complex industrial systems, such as oil & gas platforms and the aviation industry, were deeply connected to human factors, leading to catastrophic consequences. A striking example is the investigation report from the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling (2011) on the April 2010 blowout, in which eleven men died and almost five million barrels of oil were spilled into the Gulf of Mexico. The investigators unarguably emphasized the role of human factors: features such as a failure to interpret a pressure test and a delay in reacting to signals were found to have interacted with poor communication, lack of training and management problems to produce this terrible disaster. Other contemporary investigation reports, such as those on the Rio-Paris Flight 447 (Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile, 2011) and Fukushima (Kurokawa, 2012), share the same characteristics regarding the significance of human-related features to the undesirable outcome. Thus, understanding the interactions between human factors, technological aspects and the organisational context seems vital in order to ensure the safety of engineering systems and minimise the possibility of major accidents. A suitable Human Reliability Analysis (HRA) technique is usually applied to assess the human contribution to undesirable events.

*National Agency for Petroleum, Natural Gas and Biofuels (ANP), Brazil.
Many industries are subject to major hazards, which are of great concern to stakeholder groups. Accordingly, efforts to control these hazards and manage risks are increasingly made, supported by improved computational capabilities and the application of sophisticated safety and reliability models. Recent events, however, have revealed that apparently rare or seemingly unforeseen scenarios, involving complex interactions between human factors, technologies and organisations, are capable of triggering major catastrophes. The purpose of this work is to enhance stakeholders' trust in risk management by developing a framework to verify whether tendencies and patterns observed in major accidents were appropriately contemplated by risk studies. This paper first discusses the main accident theories underpinning major catastrophes. Then, an accident dataset containing contributing factors from major events that occurred in high-technology industrial domains serves as the basis for the application of a clustering and data mining technique (self-organising maps, SOM), allowing the exploration of accident information gathered from in-depth investigations. The results enabled the disclosure of common patterns in major accidents, leading to the development of an attribute list to validate risk assessment studies, ensuring that the influence of human factors, technological issues and organisational aspects was properly taken into account.
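As an illustration of the clustering step, a minimal self-organising map can be trained on accident records encoded as binary contributing-factor vectors. Everything below is a hypothetical sketch (synthetic data, arbitrary grid size and factor names), not the paper's actual SOM configuration:

```python
import numpy as np

# Hypothetical encoding: each accident is a binary vector of contributing
# factors, e.g. [human error, technology failure, organisational issue,
# training gap]. Synthetic data for illustration only.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(40, 4)).astype(float)

grid_h, grid_w, dim = 3, 3, data.shape[1]          # small 3x3 map
weights = rng.random((grid_h, grid_w, dim))

def bmu(x, w):
    """Best-matching unit: grid cell whose weight vector is closest to x."""
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Online training with shrinking learning rate and neighbourhood radius.
epochs = 50
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)
    radius = max(1.0, 2.0 * (1 - epoch / epochs))
    for x in data:
        bi, bj = bmu(x, weights)
        for i in range(grid_h):
            for j in range(grid_w):
                dist2 = (i - bi) ** 2 + (j - bj) ** 2
                influence = np.exp(-dist2 / (2 * radius ** 2))
                weights[i, j] += lr * influence * (x - weights[i, j])

# Project each accident onto the 2-D map: accidents with similar
# factor profiles land on the same or neighbouring cells.
cells = [bmu(x, weights) for x in data]
```

The 2-D projection is what makes the shared patterns visually inspectable: cells that attract many accidents correspond to recurring combinations of contributing factors.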
Major accidents are complex, multi-attribute events, originating from the interactions between intricate systems, cutting-edge technologies and human factors. Usually, these interactions trigger very particular accident sequences, which are hard to predict but capable of producing exacerbated societal reactions and impairing communication channels among stakeholders. Thus, the purpose of this work is to convert high-dimensional accident data into a convenient graphical alternative, in order to overcome barriers to communicating risk and enable stakeholders to fully understand and learn from major accidents. This paper first discusses contemporary views and biases related to human errors in major accidents. The second part applies an artificial neural network approach to a major accident dataset, to disclose common patterns and significant features. The complex data will then be translated into 2-D maps, generating graphical interfaces which will produce further insight into the conditions leading to accidents and support a novel and comprehensive "learning from accidents" experience.
Risk analyses require proper consideration and quantification of the interaction between humans, organization, and technology in high-hazard industries. Quantitative human reliability analysis approaches require the estimation of human error probabilities (HEPs), often obtained from human performance data on different tasks in specific contexts (characterized by performance shaping factors (PSFs)). Data on human errors are often collected from simulated scenarios, near-miss reporting systems, and experts with operational knowledge. However, these techniques usually miss the realistic context where human errors occur. The present research proposes a realistic and innovative approach for estimating HEPs using data from major accident investigation reports. The approach is based on Bayesian Networks, used to model the relationship between performance shaping factors and human errors. The proposed methodology minimizes the reliance on expert judgment in HEP estimation, by using a strategy that can accommodate the possibility of having no information to represent some conditional dependencies among variables. Therefore, the approach increases transparency about the uncertainties of the human error probability estimations. The approach also allows identifying the most influential performance shaping factors, supporting assessors in recommending improvements or extra controls in risk assessments. Formal verification and validation processes are also presented.
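The PSF-to-error relationship described above can be sketched as a tiny Bayesian network fragment. The two PSFs, their priors, and the conditional probability table below are all hypothetical illustrative numbers, not values from the paper's accident dataset:

```python
# Hypothetical fragment: two PSFs, "inadequate training" (T) and
# "poor communication" (C), influence the "human error" event (E).
p_T = {True: 0.3, False: 0.7}   # prior P(inadequate training)
p_C = {True: 0.2, False: 0.8}   # prior P(poor communication)

# Conditional probability table P(E = error | T, C), illustrative values.
cpt_E = {(True, True): 0.30, (True, False): 0.12,
         (False, True): 0.10, (False, False): 0.02}

def p_error():
    """Marginal HEP: sum the CPT over all PSF state combinations."""
    return sum(p_T[t] * p_C[c] * cpt_E[(t, c)]
               for t in (True, False) for c in (True, False))

def p_psf_given_error(psf):
    """Posterior that a PSF was degraded, given that an error occurred.

    Comparing these posteriors across PSFs is one simple way to rank
    their influence on human performance.
    """
    num = sum(p_T[t] * p_C[c] * cpt_E[(t, c)]
              for t in (True, False) for c in (True, False)
              if (t if psf == "T" else c))
    return num / p_error()

print(p_error())                 # marginal HEP -> 0.072
print(p_psf_given_error("T"))    # 0.65: training dominates here
print(p_psf_given_error("C"))    # ~0.44
```

Here "inadequate training" comes out as the more influential PSF, which is the kind of ranking the abstract says assessors can use to prioritize extra controls.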
Airplanes, ships, nuclear power plants and chemical production plants (including oil & gas facilities) are examples of industries that depend upon the interaction between operators and machines. Consequently, to assess the risks of those systems, not only the reliability of the technological components has to be accounted for, but also the 'human model'. For this reason, engineers have been working together with psychologists and sociologists to understand cognitive functions and how the organisational context influences individual actions. Human Reliability Analysis (HRA) identifies and analyses the causes, consequences and contributions of human performance (including failures) in complex sociotechnical systems. Generally, HRA research concentrates on modelling workers' performance at the "sharp end", assessing those directly involved in handling the system, especially operators. However, in theory, a reliability analysis can be applied to any kind of human action, including those of designers and managers. This research will evaluate a way of conducting HRA in the design process, as previous research has demonstrated that design failure is the predominant contributor to human errors (Moura et al., 2016). A Bayesian Network (BN), a systematic way of learning from experience and incorporating new evidence (deterministic or probabilistic), is proposed to model the complex relationships among cognitive functions, organisational factors and technological factors. Conditional probability tables have been obtained from a dataset of major accidents from different industry sectors (Moura et al., 2017), using a classification scheme developed by Hollnagel (1998) for the HRA method CREAM (Cognitive Reliability and Error Analysis Method). The model makes it possible to infer which factors most influence human performance in different scenarios.
We will also discuss whether the model can be applied to any human action throughout the project life cycle, from the design phase to the operational phase, including their management. The expected results of such a study can be either qualitative or quantitative, depending on the industry sector's best practice, data availability and regulatory requirements. Quantitative results in HRA mean assigning human performance a number, a probability of occurrence: the so-called Human Error Probability (HEP). This gives decision-makers the