We describe an experiment designed to understand the X-ray security screener task by investigating how training environment and content influence perceptual learning. We manipulated both the difficulty of the perceptual discrimination and the presence or absence of clutter in training images, and examined how these factors affected performance. Overall, the data show that performance was generally better when clutter items were present in the training images. We also examined the diagnosticity of a measure of cognitive efficiency, a composite metric that simultaneously considers test performance and workload. In terms of cognitive efficiency, participants who trained on the difficult discrimination with clutter present experienced lower workload during the test relative to their actual performance. The discussion centers on how improved analytical techniques can better diagnose the relative effectiveness of training interventions.
The value of measuring human performance objectively is hard to overstate, especially in the context of the instructor-student relationship within the learning process. In this work, we investigate the automated classification of cognitive load, leveraging the aviation domain as a surrogate for complex-task workload induction. We use a mixed virtual and physical flight environment instrumented with a suite of biometric sensors, including the HTC Vive Pro Eye and the Empatica E4. We create and evaluate multiple models, taking advantage of advances in deep learning such as generative learning, multi-modal learning, multi-task learning, and x-vector architectures to classify multiple tasks across 40 subjects spanning three subject types: pilots, operators, and novices. Our cognitive load model can automate the evaluation of cognitive load agnostic to subject, subject type, and flight maneuver (task) with an accuracy of over 80%. Further, this approach is validated with real-flight data from five test pilots collected over two test and evaluation flights on a C-17 aircraft.
This paper illustrates the utility of mental model assessment in discriminating between high and low performers in terms of cognitive and metacognitive processes. Distinct computer-based knowledge elicitation methods were used to assess the acquisition of different knowledge types, as well as the development of participants' mental models, during training for a complex task. Participants' metacognitive accuracy was also measured. Results suggest that mental model assessment is diagnostic of knowledge acquisition for a complex task and that mental model accuracy is related to accuracy in metacognitive processes.