“…Another possibility is the provision of what has been referred to as likelihood alarm systems (Sorkin, Kantowitz and Kantowitz 1988). These sorts of systems provide more distinct information about the relative likelihood of critical events and have been shown to improve decision-making significantly if other information sources are not available (Wiczorek and Manzey 2014).…”
Section: General Discussion and Conclusion
Responding to alarm systems, which usually commit a number of false alarms and/or misses, involves decision-making under uncertainty. Four laboratory experiments including a total of 256 participants were conducted to gain comprehensive insight into how humans deal with this uncertainty. Specifically, it was investigated how responses to alarms/non-alarms are affected by the predictive validities of these events, and to what extent response strategies depend on whether or not the validity of alarms/non-alarms can be cross-checked against other data. Among other findings, the results suggest that, without a cross-check possibility (experiment 1), low levels of predictive validity of alarms (≤ 0.5) led most participants to use one of two different strategies, both of which involved not responding to a significant number of alarms (cry-wolf effect). Yet providing access to alarm validity information reduced this effect dramatically (experiment 2). This latter result emerged independent of the effort needed for cross-checking alarms (experiment 3), but was affected by the workload imposed by concurrent tasks (experiment 4). Theoretical and practical consequences of these results for decision-making and response selection in interaction with alarm systems, as well as the design of effective alarm systems, are discussed.
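The "predictive validity" manipulated in these experiments is the probability that a critical event is actually present given that an alarm was issued. A minimal sketch of how it follows, via Bayes' rule, from the event base rate and the system's hit and false-alarm rates (the function name and the numbers below are illustrative, not the experiments' actual parameters):

```python
def alarm_predictive_validity(base_rate, hit_rate, false_alarm_rate):
    """P(critical event | alarm), via Bayes' rule.

    base_rate:        P(critical event)
    hit_rate:         P(alarm | critical event)
    false_alarm_rate: P(alarm | no critical event)
    """
    p_alarm = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_alarm

# A fairly rare event (base rate 10%) monitored by a sensitive but
# loose alarm still yields low predictive validity:
pv = alarm_predictive_validity(0.10, 0.90, 0.30)
print(round(pv, 2))  # → 0.25
```

This illustrates why low predictive validities such as those studied here (≤ 0.5) arise so easily in practice: even a sensitive alarm applied to a rare event produces mostly false alarms.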
“…Additionally, we imagine that there are task- and context-specific characteristics that affect how people use and react to such systems. For instance, future research could manipulate time pressure and/or investigate reactions to decision support systems in multitasking environments (see, for instance, Karpinsky, Chancey, Palmer, and Yamani (2018) and Wiczorek and Manzey (2014) for examples of such studies). Participants could, for instance, receive a bonus for completing as many tasks as possible, which could lead them to follow recommendations from automated systems more blindly.…”
Section: Limitations and Future Research
To enhance the quality and efficiency of information processing and decision-making, automation based on artificial intelligence and machine learning has increasingly been used to support managerial tasks and duties. In contrast to classical applications of automation (e.g., within production or aviation), little is known about how the implementation of automation for management changes managerial work. In a work design frame, this study investigates how different versions of automated decision support systems for personnel selection as a specific management task affect decision task performance, time to reach a decision, reactions to the task (e.g., enjoyment), and self-efficacy in personnel selection. In a laboratory experiment, participants (N = 122) were randomly assigned to three groups and performed five rounds of a personnel selection task. The first group received a ranking of the applicants by an automated support system before participants processed applicant information (support-before-processing group), the second group received a ranking after they processed applicant information (support-after-processing group), and the third group received no ranking (no-support group). Results showed that satisfaction with the decision was higher for the support-after-processing group. Furthermore, participants in this group showed a steeper increase in self-efficacy in personnel selection compared to the other groups. This study combines human factors, management, and industrial/organizational psychology literature and goes beyond discussions concerning effectiveness and efficiency in the emerging area of automation in management in an attempt to stimulate research on potential effects of automation on managers’ job satisfaction and well-being at work.
“…Binary alarm systems are notable for high sensitivity but lower specificity. A postalarm cross-check activity has been shown to improve specificity, but cross-checking can be time-consuming,12 which has implications for provider adoption.…”
Objective: To examine the diagnostic accuracy of a two-stage clinical decision support system for early recognition and stratification of patients with sepsis.
Design: Observational cohort study employing a two-stage sepsis clinical decision support system to recognise and stratify patients with sepsis. The first-stage component was a cloud-based clinical decision support system with 24/7 surveillance to detect patients at risk of sepsis; it delivered notifications to the patient's designated nurse, who then electronically contacted a provider. The second-stage component was a sepsis screening and stratification form integrated into the patient's electronic health record, essentially an evidence-based decision aid, used by providers to assess patients at the bedside.
Setting: Urban, 284-acute-bed community hospital in the USA; 16,000 hospitalisations annually.
Participants: Data on 2620 adult patients were collected retrospectively in 2014, after the clinical decision support system was implemented.
Main outcome measure: ‘Suspected infection’ was the established gold standard for assessing the clinical decision support system's clinimetric performance.
Results: A sepsis alert activated on 417 (16%) of the 2620 adult patients hospitalised. Applying ‘suspected infection’ as the standard, the alert achieved 72% sensitivity and a 73% positive predictive value. A post-alert screening conducted by providers at the bedside of the 417 patients achieved 81% sensitivity and a 94% positive predictive value. Providers documented against 89% of patients with an alert activated by the clinical decision support system and completed 75% of bedside screening and stratification of patients with sepsis within one hour of notification.
Conclusion: A clinical decision support binary alarm system with cross-checking functionality improves early recognition and facilitates stratification of patients with sepsis.
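The sensitivity and positive predictive value figures reported in this abstract are simple ratios of classification counts. A minimal sketch of both metrics (the counts below are illustrative examples, not the study's raw data):

```python
def sensitivity(tp, fn):
    """Fraction of true cases the alarm detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Fraction of alarms that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

# Illustrative counts for a post-alert bedside screen: of 100 true
# sepsis cases, 81 are caught (19 missed), with only 5 false alarms.
print(round(sensitivity(81, 19), 2))                # → 0.81
print(round(positive_predictive_value(81, 5), 2))   # → 0.94
```

Note the trade-off the two-stage design exploits: the second-stage cross-check raises positive predictive value (fewer false alarms reach the provider) without the first stage having to sacrifice its broad surveillance sensitivity.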