There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright vs. stooped) of the walker who generated the acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies listeners employ when making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.
This paper presents a novel approach to diagnosing and measuring teamwork in complex sociotechnical systems. First, the underlying theoretical constructs that have inspired the development and use of a multilevel model to study team phenomena from a general systems perspective are presented. Next, in an attempt to theoretically ground the construct, "flow state" is presented as an isomorphic variable in the multilevel model, meaning it is represented similarly at the system, team, and individual levels. Approaching processes embedded in organizations from this perspective allows diagnosis of the systemic influences that contribute most to the variance in performance, identification of pervasive latent systemic failures, and development of a tailored taxonomy of behavioral teamwork dimensions, which can then be translated into metrics to measure teamwork within any observable complex process.
Recent investigations of loudness change within stimuli have identified differences as a function of direction of change and power range (e.g., Canévet, Acustica, 62, 2136-2142, 1986; Neuhoff, Nature, 395, 123-124, 1998), with claims of differences between dynamic and static stimuli. Experiment 1 provides the needed direct empirical evaluation of loudness change across static, dynamic, and hybrid stimuli. Consistent with recent findings for dynamic stimuli, quantitative and qualitative differences in the pattern of loudness change were found as a function of the direction of power change. With identical patterns of loudness change, only quantitative differences were found across stimulus types. In Experiment 2, points of subjective loudness equality (PSEs) provided additional information about loudness judgments for the static and dynamic stimuli. Because the quantitative differences across stimulus types exceed the magnitude that could be expected based upon temporal integration by the auditory system, other factors need to be, and are, considered.
Visual search is a complex task that involves many neural pathways to identify relevant areas of interest within a scene. Humans remain a critical component in visual search tasks, as they can effectively perceive anomalies within complex scenes. However, this task can be challenging, particularly under time pressure. In order to improve visual search training and performance, an objective, process-based measure is needed. Eye tracking technology can be used to drive real-time parsing of EEG recordings, providing an indication of the analysis process. In the current study, eye fixations were used to generate fixation-locked ERPs during a visual search task. Clear differences in these ERPs were observed as a function of performance, suggesting that neurophysiological signatures could be developed to prevent errors in visual search tasks.
Abstract. Transportation Security Officers (TSOs) are at the forefront of our nation's security and are tasked with screening every bag boarding a commercial aircraft within the United States. TSOs undergo extensive classroom and simulation-based visual search training to learn how to identify threats within X-ray imagery. Integrating eye tracking technology into simulation-based training could further enhance training by providing in-process measures of traditionally "unobservable" visual search performance. This paper outlines the research and development approach taken to create an innovative training solution for X-ray image threat detection and resolution that utilizes advances in eye tracking measurement and training science to provide individualized performance feedback, optimizing training effectiveness and efficiency.