Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with increasing amounts of data. There are challenges in adoption, however, as outputs of such systems may be difficult to trust, for a variety of reasons. We conducted a naturalistic study using the Critical Incident Technique (CIT) to identify which factors were present in incidents where trust in an AI technology used in intelligence work (i.e., the collection, processing, analysis, and dissemination of intelligence) was gained or lost. We found that explainability and performance of the AI were the most prominent factors in responses; however, several other factors affected the development of trust. Further, most incidents involved two or more trust factors, demonstrating that trust is a multifaceted phenomenon. We also conducted a broader thematic analysis to identify other trends in the data. We found that trust in AI is often affected by the interaction of other people with the AI (i.e., people who develop it or use its outputs), and that involving end users in the development of the AI also affects trust. We provide an overview of key findings, practical implications for design, and possible future areas for research.
There are inherent difficulties in designing an effective Human–Machine Interface (HMI) for a first-of-its-kind system. Many leading cognitive research methods rely upon experts with prior experience using the system and/or some type of existing mockups or working prototype of the HMI, and neither of these resources is available for such a new system. Further, these methods are time-consuming and incompatible with more rapid and iterative systems development models (e.g., Agile/Scrum). To address these challenges, we developed a Wargame-Augmented Knowledge Elicitation (WAKE) method to identify information requirements and underlying assumptions in operator decision making concurrently with operational concepts. The developed WAKE method incorporates naturalistic observations of operator decision making in a wargaming scenario with freeze-probe queries and structured analytic techniques to identify and prioritize information requirements for a novel HMI. An overview of the method, required apparatus, and associated analytical techniques is provided. Outcomes, lessons learned, and topics for future research resulting from two different applications of the WAKE method are also discussed.
There is a considerable body of research on trust in Artificial Intelligence (AI). Trust has been viewed almost exclusively as a dyadic construct, where it is a function of various factors between the user and the agent, mediated by the context of the environment. A recent study has found several cases of supradyadic trust interactions, where a user’s trust in the AI is affected by how other people interact with the agent, above and beyond endorsements or reputation. An analysis of these supradyadic interactions is presented, along with a discussion of practical considerations for AI developers, and an argument for more complex representations of trust in AI.
Mental models describe an internal representation of knowledge of an individual or group, which can be used to interpret interactions with their environment and provide insight into decision-making strategies and prediction of performance. There are several ways to elicit mental models and analyze them; however, there is little guidance for selecting an appropriate elicitation method. Depending on different constraints of research and desired outcomes, different elicitation methods are more appropriate than others. Three criteria were identified as useful for selecting an appropriate elicitation method. These were the interaction level with participants, the number of participants being evaluated, and the resulting level of analytical detail that is required. A process for selecting the most appropriate mental model elicitation method is herein presented. Additionally, an overview of the factors that affect the selection of the mental models, and the different types of mental models are also presented.
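The three-criteria selection process described above can be sketched as a simple decision function. The criteria (interaction level, number of participants, and required analytical detail) come from the abstract; the candidate elicitation methods and the mapping below are illustrative placeholders, not the mapping presented in the paper.

```python
def recommend_elicitation_method(interaction: str, n_participants: int,
                                 detail: str) -> str:
    """Illustrative selector for a mental model elicitation method.

    interaction: "high" (direct access to participants) or "low"
    detail: "rich" (deep qualitative insight) or "structured"
            (quantitative, comparable representations)
    The returned method names are examples only, not the paper's mapping.
    """
    if interaction == "high":
        if n_participants <= 10:
            # Few participants with direct access: depth-oriented techniques
            return "think-aloud interviews" if detail == "rich" else "card sorting"
        # Direct access but many participants: group-oriented techniques
        return "focus-group concept mapping"
    # Low interaction: instruments that scale without researcher contact
    if detail == "structured":
        return "pairwise-similarity ratings"
    return "open-ended questionnaires"
```

For example, a small study with direct participant access and a need for rich qualitative detail would be routed to an interview-based technique, whereas a large remote sample needing comparable quantitative representations would be routed to similarity ratings.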
This study used queuing networks and discrete event simulation (DES) to investigate the effects of baggage volume and alarm rate at the security screening checkpoint (SSCP) of a small origin and destination airport. A queuing network was applied for theoretical modeling of the SSCP performance, and a DES model using Arena Version 12 was used for an empirical approach. Data were collected both from the literature and manually during the peak operating time of the modeled airport. The simulation model was verified and validated qualitatively and quantitatively by statistical testing before experimentation. After validation, a sensitivity analysis was performed on passenger (PAX) baggage volume and the alarm rate of baggage screening devices, with SSCP throughput and PAX cycle time as the dependent measures. The theoretical queuing network proved to be an accurate method of predicting cycle time for the system in steady state but was subject to various assumptions. The empirical model and sensitivity analysis showed that SSCP throughput and cycle time are both highly sensitive to alarm rate. Additionally, the sensitivity analysis showed that SSCP throughput was insensitive to baggage volume, while cycle time was moderately sensitive to baggage volume. Practical implications and future research are also discussed.
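The primary/secondary screening flow studied above can be sketched as a minimal discrete event simulation. This is not the Arena model from the study: the sketch assumes a single primary screening station feeding a single alarm-resolution station, FIFO queues, and exponential interarrival and service times, with all parameter values chosen for illustration only.

```python
import random

def simulate_sscp(n_pax=5000, mean_interarrival=30.0, mean_primary=20.0,
                  mean_secondary=60.0, alarm_prob=0.10, seed=42):
    """Minimal DES of a screening checkpoint (all times in seconds).

    Passengers arrive with exponential interarrival times, pass one
    primary screening station, and with probability `alarm_prob` are
    diverted to one secondary (alarm-resolution) station. Returns the
    mean PAX cycle time and throughput (PAX per second).
    """
    rng = random.Random(seed)
    t = 0.0              # arrival clock
    free_primary = 0.0   # time the primary station next becomes free
    free_secondary = 0.0
    cycle_times = []
    last_exit = 0.0
    for _ in range(n_pax):
        t += rng.expovariate(1.0 / mean_interarrival)
        start = max(t, free_primary)              # wait for primary station
        free_primary = start + rng.expovariate(1.0 / mean_primary)
        exit_time = free_primary
        if rng.random() < alarm_prob:             # bag alarms -> secondary
            start2 = max(exit_time, free_secondary)
            free_secondary = start2 + rng.expovariate(1.0 / mean_secondary)
            exit_time = free_secondary
        cycle_times.append(exit_time - t)
        last_exit = max(last_exit, exit_time)
    return sum(cycle_times) / n_pax, n_pax / last_exit

# Sensitivity sweep over alarm rate, mirroring the study's design:
for p in (0.0, 0.1, 0.2, 0.3):
    ct, tp = simulate_sscp(alarm_prob=p)
    print(f"alarm={p:.1f}  mean cycle time={ct:6.1f}s  throughput={tp*3600:5.1f} PAX/h")
```

With these illustrative parameters the sweep reproduces the qualitative finding that cycle time grows quickly with alarm rate while throughput, which is driven by the arrival rate as long as the stations remain stable, changes little.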
Wargaming is used to facilitate Knowledge Elicitation (KE) during design thinking events for the development of advanced concepts. These wargaming sessions follow brainstorming and consensus building exercises where diverse teams of end users and technical personnel enumerate and vote on innovative features to develop into new systems, or for innovative means to leverage and exploit existing technologies. A tabletop, turn-based board game was used to conduct these wargaming sessions for vetting concepts; however, the time required to execute and evaluate (process) each turn led to the development of a digital version of the game in which the mechanics of moving certain game pieces were automated. Although increasing the technology level of tools and processes is generally viewed as an upgrade, unintended consequences of introducing technologies into systems can and do occur. An assessment was performed to empirically evaluate the effectiveness of the digital version of the simulator. User perceptions were captured with a questionnaire, and user behaviors with the tool were captured through observational methods. The digital wargaming platform succeeded in reducing the amount of time needed to process each turn of gameplay; however, there was no observed gain in perceived utility of the new digital tool, nor any observed increase in the quality or quantity of KE. Future research efforts will aim to empirically measure the quantity and quality of discussion during gameplay.
There are several different technical disciplines focused on improving the systems that humans use, creating an ‘alphabet soup’ of acronyms to stay abreast of. While they all build upon a common emphasis on developing systems around their users, there are differences (both perceived and real) across disciplines such as Human Factors Engineering (HFE), Human Systems Integration (HSI), Human Computer Interaction (HCI), User Experience (UX), and Design Thinking (DT). A panel discussed what each of these disciplines is (and what it is not), when and how they get involved in system development, their philosophies and methods for system development, and where they share common interests. Panelists were asked philosophical, practical, and scenario-based questions, before opening the floor to the audience.