There are inherent difficulties in designing an effective Human–Machine Interface (HMI) for a first-of-its-kind system. Many leading cognitive research methods rely on experts with prior experience using the system and/or some form of existing mockup or working prototype of the HMI, and neither of these resources is available for such a new system. Further, these methods are time-consuming and incompatible with rapid, iterative systems development models (e.g., Agile/Scrum). To address these challenges, we developed a Wargame-Augmented Knowledge Elicitation (WAKE) method to identify information requirements and underlying assumptions in operator decision making concurrently with operational concepts. The WAKE method combines naturalistic observation of operator decision making in a wargaming scenario with freeze-probe queries and structured analytic techniques to identify and prioritize information requirements for a novel HMI. An overview of the method, required apparatus, and associated analytical techniques is provided. Outcomes, lessons learned, and topics for future research resulting from two applications of the WAKE method are also discussed.
Wargaming is used to facilitate Knowledge Elicitation (KE) during design thinking events for the development of advanced concepts. These wargaming sessions follow brainstorming and consensus-building exercises in which diverse teams of end users and technical personnel enumerate and vote on innovative features to develop into new systems, or on innovative means to leverage and exploit existing technologies. A tabletop, turn-based board game was initially used to conduct these wargaming sessions for vetting concepts; however, the time required to execute and evaluate (process) each turn led to the development of a digital version of the game in which the mechanics of moving certain game pieces were automated. Although increasing the technology level of tools and processes is generally viewed as an upgrade, introducing technologies into systems can and does produce unintended consequences. We therefore empirically assessed the effectiveness of the digital version of the simulator. User perceptions were captured with a questionnaire, and user behaviors with the tool were captured through observational methods. The digital wargaming platform succeeded in reducing the time required to process each turn of gameplay; however, there was no observed gain in the perceived utility of the new digital tool, nor any observed increase in the quality or quantity of KE. Future research will aim to empirically measure the quantity and quality of discussion during gameplay.
The prevalence of unique, disparate satellite command and control (SATC2) systems in current satellite operations is problematic. As such, the United States Air Force aims to consolidate SATC2 systems into an enterprise solution that utilizes a common Human–Machine Interface (HMI). We employed a User-Centered Design (UCD) approach, incorporating a variety of methods from design thinking and human factors engineering, to develop a solution that is effective, efficient, and meets operator needs. During a summative test event, we found that users had significantly higher situation awareness, lower workload, and higher subjective usability while using the HMI developed via UCD than while using the existing, or legacy, HMI. This case study provides evidence that involving users early and often has positive and tangible effects on the development of aerospace systems.
Inefficiencies naturally form as organizations grow in size and complexity. The knowledge required to address these inefficiencies is often stove-piped across different organizational silos, geographic locations, and professional disciplines. Crowdsourcing provides a way to tap into the knowledge and experiences of diverse groups of people to rapidly identify and more effectively solve inefficiencies. We developed a prototype crowdsourcing system based on design thinking practices that allows employees to build a shared mental model and work collaboratively to identify, characterize, and rank inefficiencies, as well as to develop possible solutions. We then conducted a study to assess how presenting crowdsourced knowledge (votes/preferences, supporting argumentation, etc.) from employees affected organizational Decision Makers (DMs). Despite predictions that crowdsourced knowledge would influence their decisions, presenting this knowledge to DMs had no significant effect on their voting for the various solutions. We also found significant differences between the mental models of employees and those of DMs. We offer explanations for this behavior based on rhetorical analysis and other survey responses from DMs and contributors, and we discuss theoretical explanations, including the effects of various biases and decision inertia, as well as potential issues with the types of knowledge elicited and presented to DMs.