With all of the research and investment dedicated to artificial intelligence and other automation technologies, there is a paucity of evaluation methods for how these technologies integrate into effective joint human-machine teams. Current evaluation methods, which were largely designed to measure performance on discrete representative tasks, provide little information about how a system will perform when operating outside the bounds of the evaluation. We are exploring a method of generating Extensibility Plots, which predict the ability of the human-machine system to respond to classes of challenges at intensities both within and outside of what was tested. In this paper we test and explore the method, using performance data collected from a healthcare setting in which a machine and nurse jointly detect signs of patient decompensation. We explore the validity and usefulness of these curves for predicting the graceful extensibility of the system.
Even as vaccination for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) expands in the United States, cases will linger among unvaccinated individuals for at least the next year, allowing the spread of the coronavirus to continue in communities across the country. Detecting these infections, particularly asymptomatic ones, is critical to stemming further transmission of the virus in the months ahead. This will require active surveillance efforts in which these undetected cases are proactively sought out rather than waiting for individuals to present to testing sites for diagnosis. However, finding these pockets of asymptomatic cases (i.e., hotspots) is akin to searching for needles in a haystack, as choosing where and when to test within communities is hampered by a lack of epidemiological information to guide decision makers' allocation of these resources. Making sequential decisions with partial information is a classic problem in decision science: the explore vs. exploit dilemma. Using methods—bandit algorithms—similar to those used to search for other kinds of lost or hidden objects, from downed aircraft to underground oil deposits, we can address the explore vs. exploit tradeoff facing active surveillance efforts and optimize the deployment of mobile testing resources to maximize the yield of new SARS-CoV-2 diagnoses. These bandit algorithms can be implemented easily as a guide to active case finding for SARS-CoV-2. A simple Thompson sampling algorithm and an extension of it that integrates spatial correlation in the data are now embedded in a fully functional prototype of a web app that allows policymakers to use either algorithm to target SARS-CoV-2 testing. In this instance, potential testing locations were identified by using mobility data from UberMedia to target high-frequency venues in Columbus, Ohio, as part of a planned feasibility study of the algorithms in the field.
However, the app is easily adaptable to other jurisdictions, requiring only a set of candidate test locations with point-to-point distances between all locations, whether or not mobility data are integrated into the decision of where to test.
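As a rough illustration of the "simple Thompson sampling algorithm" described above, the sketch below allocates each testing session to a candidate site by sampling from a Beta posterior over that site's positivity rate. The Beta-Bernoulli model, uniform priors, and update rule are illustrative assumptions on our part, not the authors' deployed implementation.

```python
import random

class ThompsonSiteSampler:
    """Beta-Bernoulli Thompson sampling over candidate testing sites.

    Each site keeps a Beta(alpha, beta) posterior over its test-positivity
    rate; the next pop-up session goes to the site whose sampled rate is
    highest. (Illustrative sketch, not the deployed FAAST code.)
    """

    def __init__(self, sites):
        # Start every site with an uninformative Beta(1, 1) prior.
        self.posteriors = {s: [1, 1] for s in sites}

    def choose_site(self):
        # Draw one sample from each site's posterior and pick the argmax.
        draws = {s: random.betavariate(a, b)
                 for s, (a, b) in self.posteriors.items()}
        return max(draws, key=draws.get)

    def update(self, site, positives, negatives):
        # Conjugate update: positives increment alpha, negatives beta.
        self.posteriors[site][0] += positives
        self.posteriors[site][1] += negatives
```

After each session, the counts of positive and negative tests feed back into `update`, so high-yield sites are chosen more often while low-yield sites are still occasionally explored rather than abandoned.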
Despite the promise of a proactive approach to safety, a lack of resources and tangible measures has limited its implementation in organizations. We are exploring Joint Activity Monitoring (JAM) as one key component of a proactive safety program within the domain of infection prevention. However, despite a conceptual alignment with the requirements of a proactive monitoring capability, our experiences instrumenting daily work tools with the capabilities to support continuous, unobtrusive, real-time monitoring have revealed additional organizational and technological requirements. In this paper, we describe our strategies and challenges in developing this capability and discuss implications for supporting successful proactive safety implementations.
This panel discussion will examine the societal awareness of cognitive engineering (CE) today. Cognitive engineering celebrated its 30th anniversary in 2018 at the HFES annual meeting. Still, some would say that CE is not as well known as it should be, and that it is applied in an ad hoc manner in the many high-stakes, high-risk technology modernization efforts where it would be useful. As technological advances proliferate for decision makers at the sharp end of the spear, we risk catastrophic results if CE remains in the shadows; such results are arguably emerging on a daily basis. Each panelist will describe, from their vantage point, the state of the art of CE today, their thoughts on barriers to its acceptance and application, and how they envision we might act toward a future in 2028 in which cognitive engineers engage systematically in the development of complex systems.
We introduce the concept of machine fitness assessment, which is the process of correctly determining the degree of fit between a machine’s inferences on a specific world and the world itself. We describe its importance in complex, high-stakes worlds, including healthcare, and how it will be critically important to realize the potential of consumer health technologies that promise institutional-quality health diagnosis and planning in decidedly non-institutional settings (e.g., our homes, offices, or anywhere else).
Any clinical decision support (CDS) design project integrating computational technologies with clinician workflows will require the merging of multiple perspectives and fields of expertise in multidisciplinary teams. Much like the tools these teams aim to create, the team itself will need to continuously build, monitor, and repair a mutually beneficial relationship among its members. From our experience during the early development stages of an AI-enabled CDS tool for hospital-acquired infection (HAI) prevention, we abstract three central tenets of a symbiotic design process we have found to be vital for aligning goals, priorities, mental models, and techniques among a multidisciplinary team: (1) recurrent bottom-up feedback, (2) continual model (re-)alignment, and (3) openness to co-direction. With regard to these tenets, we discuss the successes and challenges our team has faced during the symbiotic design process through a series of vignettes, and how these experiences coalescing diverse human design teams can influence the design of human-machine teams.
Quantitative evaluations of human-machine teams (HMTs) are desperately needed to ensure technological implementations are helpful rather than harmful to overall system performance; however, as machines increasingly behave like active cognitive teammates, traditional evaluation strategies risk overestimating HMT capabilities. A reliable HMT evaluation method should include multiple high-resolution, continuous measures for both system performance and system challenges that can be implemented unobtrusively in real-time operations. In our prior work, we proposed joint activity testing (JAT) as a candidate evaluation framework to satisfy these requirements. Preliminary efforts with a single dimension of performance and challenge have indicated that the method can identify the additive benefits of joint activity with a specific technology. In this paper, we explore the operationalization of multi-dimensional JAT by synthesizing our work in two intelligence and two healthcare domains. The patterns observed between domains will guide future JAT, reveal paths towards real-time implementation, and spark future research evaluating resilience.
BACKGROUND The Flexible Adaptive Algorithmic Surveillance Testing (FAAST) program represents an innovative approach for detecting cases of infectious disease, deployed here to diagnose SARS-CoV-2. OBJECTIVE This study evaluated a Bayesian search algorithm that targets hotspots of viral transmission in the community, with the aim of detecting the most cases over time across multiple locations in Columbus, Ohio, from August to October 2021. METHODS The algorithm used to direct pop-up SARS-CoV-2 testing for this project is based on Thompson sampling, in which the aim is to maximize the expected value of success in finding new cases of SARS-CoV-2 by sampling from prior probability distributions for each testing site. An academic-governmental partnership between Yale University, The Ohio State University (OSU), Wake Forest University, the Ohio Department of Health (ODH), the Ohio National Guard (ONG), and the Columbus Metropolitan Libraries (CML) conducted a study of bandit algorithms to maximize the detection of new cases of SARS-CoV-2 in this Ohio city in 2021. The initiative established pop-up COVID-19 testing sites at 13 Columbus locations, including library branches, recreational and community centers, movie theaters, homeless shelters, family services centers, and community events. Our team conducted between 0 and 56 tests at the 16 testing sessions, with an overall average of 25.3 tests conducted per session and a moving average that increased over time. Small incentives, including gift cards and take-home rapid antigen tests, were offered to those who approached the pop-up sites to encourage their participation. RESULTS Over time, as expected, the Bayesian search algorithm directed testing efforts to locations with higher yields of new diagnoses.
Surprisingly, the use of the algorithm also maximized the identification of cases among minority residents of under-served communities, particularly African Americans, with the pool of participants over-representing these groups relative to the demographic profile of the local ZIP code in which testing sites were located. CONCLUSIONS This study demonstrated that a pop-up testing strategy using a bandit algorithm can be feasibly deployed in an urban setting during a pandemic. It is the first real-world use of these kinds of algorithms for disease surveillance and represents a key step in evaluating the effectiveness of their use in maximizing the detection of undiagnosed cases of SARS-CoV-2 and other infections such as HIV.