The framework presented in this article provides a tool for organizing and informing past, present, and future research and development efforts in adaptive systems.
In this paper, we identify the requirements for effective function allocation within teams of human and automated agents. These functions include all the activities in the team's environment required to meet collective work goals, that is, taskwork functions. In addition, the allocation of taskwork functions then creates the need for additional teamwork functions to coordinate between agents. Key requirements include that each agent must be capable of each individual function it is allocated and must be capable of its collective set of functions, including teamwork. Of note, many important attributes may be observed only within the detailed dynamics of simulation or actual operations, particularly when a function allocation requires tightly coupled interactions and when teamwork (including human-automation interaction) may support or detract from effective performance. Finally, we note that function allocation is a key design decision that should be made deliberately. By addressing function allocation early in design, before technologies and interfaces are created, key trade-offs can be considered and fundamental human factors concerns addressed.
Function allocation is the design decision in which work functions are assigned to all agents in a team, both human and automated. Building on the preceding companion papers' review of the requirements of effective function allocation and discussion of a computational framework for modeling function allocation, in this paper we develop specific metrics of function allocation that can be derived from such models as well as from observations in high-fidelity human-in-the-loop simulations or real operations. These metrics span eight issues with function allocation: (a) workload, (b) stability of the work environment, (c) mismatches between responsibility and authority, (d) incoherency in function allocations, (e) interruptive automation, (f) automation's boundary conditions, (g) function allocations limiting human adaptation to context, and (h) mission performance. Some of the metrics measure distinct issues whereas others assess different causes of issues that can manifest in similar ways; collectively, they are intended to be comprehensive in their ability to discriminate among a range of issues. Because trade-offs and conflicts may exist between these metrics, they need to be examined collectively. This paper continues the example given in the preceding companion paper, demonstrating how these metrics of function allocation can be assessed from computational simulations of an air transport flight deck through the descent phase of flight.
The collective taskwork of a team spans the functions required to achieve work goals. Within this context, function allocation is the design decision in which taskwork functions are assigned to all agents in a team, both human and automated. In addition, the allocation of taskwork functions then creates the need for additional teamwork functions to coordinate between agents. In this paper, we identify important requirements for function allocation within teams of human and automated agents. Of note, many important attributes may be observed only within the detailed dynamics of simulation or actual operations, particularly when a function allocation requires tightly coupled interactions. Building on the preceding companion paper's conceptual review of the requirements of effective function allocation, in this paper we develop a modeling framework that increases the number of aspects of function allocation that can be examined simultaneously through both static analysis and dynamic computational simulations. The taskwork and teamwork of a modern air transport flight deck with a range of function allocations is used as an example throughout, highlighting the range of phenomena these models can describe. A follow-on companion paper discusses specific metrics of function allocation that can be derived both from such models and from observations in high-fidelity human-in-the-loop simulations or real operations.
The design and adoption of decision support systems within complex work domains is a challenge for cognitive systems engineering (CSE) practitioners, particularly at the onset of project development. This article presents an example of applying CSE techniques to derive design requirements compatible with traditional systems engineering to guide decision support system development. Specifically, it demonstrates the requirements derivation process based on cognitive work analysis for a subset of human spaceflight operations known as extravehicular activity. The results are presented in two phases. First, a work domain analysis revealed a comprehensive set of work functions and constraints that exist in the extravehicular activity work domain. Second, a control task analysis was performed on a subset of the work functions identified by the work domain analysis to articulate the translation of subject matter states of knowledge to high-level decision support system requirements. This work emphasizes an incremental requirements specification process as a critical component of CSE analyses to better situate CSE perspectives within the early phases of traditional systems engineering design.
Breakdowns in complex systems often occur as a result of system elements interacting in ways unanticipated by analysts or designers. The use of task behavior as part of a larger, formal system model is potentially useful for analyzing such problems because it allows the ramifications of different human behaviors to be verified in relation to other aspects of the system. A component of task behavior largely overlooked to date is the role of human-human interaction, particularly human-human communication in complex human-computer systems. We are developing a multi-method approach based on extending the Enhanced Operator Function Model language to address human agent communications (EOFMC). This approach includes analyses via theorem proving and future support for model checking linked through the EOFMC top-level XML description. Herein, we consider an aviation scenario in which an air traffic controller needs a flight crew to change the heading for spacing. Although this example, at first glance, seems to be one simple task, on closer inspection we find that it involves local human-human communication, remote human-human communication, multi-party communications, communication protocols, and human-automation interaction. We show how all these varied communications can be handled within the context of EOFMC.
A goal of interactive machine learning (IML) is to enable people with no specialized training to intuitively teach intelligent agents how to perform tasks. Toward achieving that goal, we are studying how the design of the interaction method for a Bayesian Q-Learning algorithm impacts aspects of the human's experience of teaching the agent using human-centric metrics such as frustration in addition to traditional ML performance metrics. This study investigated two methods of natural language instruction: critique and action advice. We conducted a human-in-the-loop experiment in which people trained two agents with different teaching methods but, unknown to each participant, the same underlying reinforcement learning algorithm. The results show an agent that learns from action advice creates a better user experience compared to an agent that learns from binary critique in terms of frustration, perceived performance, transparency, immediacy, and perceived intelligence. We identified nine main characteristics of an IML algorithm's design that impact the human's experience with the agent, including using human instructions about the future, compliance with input, empowerment, transparency, immediacy, a deterministic interaction, the complexity of the instructions, accuracy of the speech recognition software, and the robust and flexible nature of the interaction algorithm.
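The two teaching signals contrasted in this study can be illustrated with a toy sketch. The paper used a Bayesian Q-Learning algorithm with natural language instruction; the sketch below is not that implementation but a plain tabular Q-learner on a hypothetical 5-cell corridor, with a simulated teacher providing either action advice (naming the action to take) or binary critique (a +1/-1 signal folded into the reward). All names and the environment are illustrative assumptions.

```python
import random

# Illustrative sketch only (not the authors' Bayesian Q-Learning agent):
# a tabular Q-learner on a tiny 1-D corridor, trained with two simulated
# human-teaching signals -- action advice and binary critique.

ACTIONS = ("left", "right")
GOAL = 4  # rightmost cell of a hypothetical 5-cell corridor

def step(state, action):
    """Move one cell; reward 1.0 only upon reaching the goal."""
    nxt = min(max(state + (1 if action == "right" else -1), 0), GOAL)
    return nxt, 1.0 if nxt == GOAL else 0.0

def teach(episodes, mode, alpha=0.5, gamma=0.9):
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            if mode == "advice":
                # Teacher directly names the action to take.
                action, critique = "right", 0.0
            else:
                # Learner acts greedily; teacher critiques the choice.
                action = max(ACTIONS, key=lambda a: q[(state, a)])
                if q[(state, "left")] == q[(state, "right")]:
                    action = random.choice(ACTIONS)  # break ties
                critique = 1.0 if action == "right" else -1.0
            nxt, env_reward = step(state, action)
            target = env_reward + critique + gamma * max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

random.seed(0)
q_advice = teach(50, "advice")
greedy = [max(ACTIONS, key=lambda a: q_advice[(s, a)]) for s in range(GOAL)]
print(greedy)  # the advised policy: move right toward the goal
```

The sketch makes the structural difference concrete: advice constrains which action is executed, while critique only reshapes the reward after the learner has already chosen, which is one mechanistic reading of why advice felt more immediate and transparent to participants.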