ABSTRACT
The Airborne Warning and Control System (AWACS) is a core command and control (C2) function in which sensors, shooters, and refuelers are managed by Weapons Directors (WDs) in an airborne radar and communications command post. Improving the quality of WD training can have profound effects on mission outcome. A basic technology capable of this is "intelligent-agent" technology, which allows more frequent practice via simulated players and embedded decision aids that display reasonable task options online. We report initial empirical work with an embedded-agent simulation based on the AWACS, namely, the 21st Century Systems, Inc. WD Intelligent-Agent-Assist platform. Using this platform, we observed how 38 WDs performed during two high-workload missions. One mission was played with a decision aid that recommended target pairings and refuelings, while the other was played without it. Our sample benefited from the decision aid, but the more experienced WDs benefited the most (counter to our expectations). We discuss the results in terms of interface challenges that decision aids will face in high-workload environments. This extends findings in Elliott, Chaiken, Dalrymple, Petrov, & Stoyen (2000), Simulation-based agent in a synthetic team