It has long been known that aircraft flying in formation achieve greater overall efficiency than is possible for a single craft flying alone. This paper investigates the control problems associated with autonomous vehicles in formation flight. An inviscid flow model of formation flight explains the increase in efficiency and describes the effects that the craft in the formation have on each other. Decentralized controllers are investigated for a formation of five aircraft. The formation consists of a single line, with each plane flying one wingspan behind the plane to its left, its left wingtip aligned with the right wingtip of the leading plane. The controllers are derived using a linear model of the system dynamics and evaluated both in a linear simulation and in a simulation incorporating a vortex-lattice aerodynamics routine.
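As background for the control architecture described above (a sketch, not the paper's actual design): a decentralized controller means each trailing aircraft acts only on locally measured state, with no communication across the formation. A minimal illustration in Python, assuming a hypothetical double-integrator model of each aircraft's spacing error and illustrative PD gains, and omitting the aerodynamic coupling that the paper's linear model captures:

    # Hypothetical sketch: decentralized station-keeping in a line formation.
    # Each trailing aircraft regulates its own spacing error with purely
    # local PD feedback; the double integrator stands in for linearized
    # longitudinal dynamics. Gains and initial states are illustrative only.
    import numpy as np

    N = 4                 # trailing aircraft (the leader is the reference)
    dt, T = 0.01, 20.0    # integration step and horizon [s]
    kp, kd = 1.0, 1.8     # local PD gains (assumed, not from the paper)

    x = np.zeros((N, 2))                        # [spacing error, error rate]
    x[:, 0] = [0.5, -0.3, 0.2, 0.4]             # initial errors (wingspans)

    for _ in range(int(T / dt)):
        for i in range(N):
            e, edot = x[i]
            u = -kp * e - kd * edot             # feedback on local state only
            x[i] += dt * np.array([edot, u])    # explicit Euler update

    print("final spacing errors:", np.round(x[:, 0], 4))

The point the sketch illustrates is structural: each loop closes on one aircraft's own measurements, which is what makes the architecture decentralized; the paper's evaluation then tests whether such loops remain well behaved once the vortex-induced coupling between aircraft is restored.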
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at the 220-ms viewing duration than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited-viewing trials was better when preceded by a 15-second preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.
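For context on why this cue can be read out so quickly (standard geometry, not a result of the study): for a target resting on a flat ground plane, a single angle suffices to specify egocentric distance,

    \[ d = \frac{h}{\tan \alpha} \]

where h is the observer's eye height and \alpha is the angular declination of the target below eye level. Because the readout requires only one angular measurement plus a stable internal estimate of eye height, it is geometrically plausible that the cue is available on the very first glimpse.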
Visual perception of absolute distance (between an observer and an object) is based on multiple sources of information that must be extracted during scene viewing. The viewing duration needed to fully extract distance information, particularly in navigable real-world environments, is unknown. In a visually directed walking task, a sensitive response to distance was observed with 9-ms glimpses when floor- and eye-level targets were employed. However, response compression occurred with eye-level targets when angular size was rendered uninformative. Performance at brief durations was characterized by underestimation unless preceded by a block of extended-viewing trials. The results indicate a role for experience in the extraction of information during brief glimpses. Even without prior experience, the extraction of useful information is virtually immediate when the cues of angular size or angular declination are informative.
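The angular-size cue mentioned above has an equally simple geometry (again background, not a finding of the study): a target of known physical size S subtending visual angle \theta lies at

    \[ d = \frac{S}{2\tan(\theta/2)} \approx \frac{S}{\theta} \quad (\theta\ \text{small, in radians}) \]

so rendering angular size uninformative removes this route to distance for eye-level targets, consistent with the response compression reported above.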
Humans are typically able to keep track of brief changes in their head and body orientation, even when visual and auditory cues are temporarily unavailable. Determining the magnitude of one's displacement from a known location is one form of self-motion updating. Most research on self-motion updating during body rotations has focused on the role of a restricted set of sensory signals (primarily vestibular) available during self-motion. However, humans can and do internally represent spatial aspects of the environment, and little is known about how remembered spatial frameworks may impact angular self-motion updating. Here, we describe an experiment addressing this issue. Participants estimated the magnitude of passive, non-visual body rotations (40°–130°) using non-visual manual pointing. Prior to each rotation, participants either were allowed full vision of the testing environment or remained blindfolded. Within-subject response precision was dramatically enhanced when the body rotations were preceded by a visual preview of the surrounding environment; constant (signed) and absolute (unsigned) errors were much less affected. These results are informative for future perceptual, cognitive, and neuropsychological studies, and demonstrate the powerful role of stored spatial representations for improving the precision of angular self-motion updating.