It has long been known that aircraft flying in formation achieve greater overall efficiency than is possible for a single craft flying alone. This paper investigates the control problems associated with autonomous vehicles in formation flight. An inviscid flow model of formation flight explains the increase in efficiency and describes the effects that the craft in the formation have on each other. Decentralized controllers are investigated for a formation of five aircraft. The formation consists of a single line, with each plane flying one wingspan behind the plane to its left, with its left wingtip aligned with the right wingtip of the leading plane. The controllers are derived using a linear model of the system dynamics, and evaluated in a linear simulation and in a simulation incorporating a vortex-lattice aerodynamics routine.
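The inviscid-flow benefit described above can be illustrated with a minimal far-wake sketch: if the leader's wake is modeled as a pair of 2D tip vortices, a wingman positioned outboard of the leader's tip flies in upwash, which reduces its induced drag. The unit span, unit circulation, and function name below are illustrative assumptions, not the paper's vortex-lattice routine.

```python
import numpy as np

def upwash_2d(y, y_tips=(-0.5, 0.5), gamma=1.0):
    """Far-wake (2D) vertical velocity induced by a leader's tip-vortex pair.

    For a lifting wing, the tip vortices rotate so that the flow between
    the tips moves down (downwash) and the flow outboard of either tip
    moves up (upwash). Span and circulation are normalized to 1.
    """
    yl, yr = y_tips
    # Superpose the two 2D point vortices (Biot-Savart in the far wake)
    return gamma / (2 * np.pi) * (1.0 / (y - yr) - 1.0 / (y - yl))
```

Evaluating this at the midspan of a trailing plane whose left wingtip touches the leader's right wingtip (i.e., at one full span offset) gives a positive upwash, consistent with the efficiency gain the abstract describes.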
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited-viewing trials was better when preceded by a 15-second preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.
Visual perception of absolute distance (between an observer and an object) is based upon multiple sources of information that must be extracted during scene viewing. The viewing duration needed to fully extract distance information, particularly in navigable real-world environments, is unknown. In a visually-directed walking task, a sensitive response to distance was observed with 9-ms glimpses when floor- and eye-level targets were employed. However, response compression occurred with eye-level targets when angular size was rendered uninformative. Performance at brief durations was characterized by underestimation, unless preceded by a block of extended-viewing trials. The results indicate a role for experience in the extraction of information during brief glimpses. Even without prior experience, the extraction of useful information is virtually immediate when the cues of angular size or angular declination are informative.
Humans are typically able to keep track of brief changes in their head and body orientation, even when visual and auditory cues are temporarily unavailable. Determining the magnitude of one's displacement from a known location is one form of self-motion updating. Most research on self-motion updating during body rotations has focused on the role of a restricted set of sensory signals (primarily vestibular) available during self-motion. However, humans can and do internally represent spatial aspects of the environment, and little is known about how remembered spatial frameworks may impact angular self-motion updating. Here, we describe an experiment addressing this issue. Participants estimated the magnitude of passive, non-visual body rotations (40°–130°), using non-visual manual pointing. Prior to each rotation, participants were either allowed full vision of the testing environment, or remained blindfolded. Within-subject response precision was dramatically enhanced when the body rotations were preceded by a visual preview of the surrounding environment; constant (signed) and absolute (unsigned) error were much less affected. These results are informative for future perceptual, cognitive, and neuropsychological studies, and demonstrate the powerful role of stored spatial representations for improving the precision of angular self-motion updating.
We present a study of the transfer of satellites between elliptic Keplerian orbits using Lyapunov stability theory specific to this problem. The construction of Lyapunov functions is based on the fact that a non-degenerate Keplerian orbit is uniquely described by its angular momentum and Laplace (-Runge-Lenz) vectors. We suggest a Lyapunov function, which gives a feedback controller such that the target elliptic orbit becomes a locally asymptotically stable periodic orbit in the closed-loop dynamics. We show how to perform a global transfer between two arbitrary elliptic orbits based on the local transfer result. Finally, a second Lyapunov function is presented that works only for circular target orbits.
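The key fact above, that a non-degenerate Keplerian orbit is uniquely described by its angular-momentum and Laplace (Runge–Lenz) vectors, suggests Lyapunov candidates that penalize the distance of these two invariants from their target values. The sketch below is a minimal illustration of that construction; the quadratic form, function names, and parameter values are assumptions for illustration, not the paper's actual Lyapunov function.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter [km^3/s^2]

def orbit_invariants(r, v, mu=MU):
    """Angular-momentum and Laplace (Runge-Lenz) vectors of a Keplerian state."""
    h = np.cross(r, v)                                # specific angular momentum
    e = np.cross(v, h) / mu - r / np.linalg.norm(r)   # eccentricity (Laplace) vector
    return h, e

def lyapunov_candidate(r, v, h_target, e_target, mu=MU):
    """Illustrative candidate V >= 0 that vanishes only on the target orbit,
    since (h, e) uniquely determine a non-degenerate Keplerian orbit."""
    h, e = orbit_invariants(r, v, mu)
    dh, de = h - h_target, e - e_target
    return np.dot(dh, dh) + np.dot(de, de)
```

A feedback law would then choose the thrust direction so that V decreases along trajectories; the controller design itself is beyond this sketch.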
In this paper, the possibility of performing severe collision avoidance maneuvers using trajectory optimization is investigated. A two-degree-of-freedom model was used to represent the vehicle dynamics. First, a linear tire model was used to calculate the steering angle required to perform the desired evasive maneuver, and a neighboring optimal controller was designed. Second, a direct trajectory optimization algorithm was used to find the optimal trajectory with a nonlinear tire model. To evaluate the results, the calculated steering angles were fed to a full vehicle dynamics model. It was shown that the neighboring optimal controller was able to accommodate the introduced disturbances. Comparison with other candidate trajectories showed that the optimized trajectory yields a lower lateral acceleration profile and a smaller maximum lateral acceleration; thus the time to perform an obstacle avoidance maneuver can be reduced using this method. A simulation case study with limited lateral acceleration and constrained direct trajectory optimization shows that the proposed technique requires less time for a lane change maneuver than a trapezoidal acceleration profile.
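The two-degree-of-freedom vehicle model with linear tires mentioned above is commonly written as a "bicycle" model in lateral velocity and yaw rate, with tire lateral force proportional to slip angle. The sketch below shows one Euler step of such a model; all parameter values and the forward speed are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
m, Iz = 1500.0, 2500.0     # mass [kg], yaw inertia [kg m^2]
a, b = 1.2, 1.6            # CG-to-front / CG-to-rear axle distances [m]
Caf, Car = 80e3, 80e3      # front / rear cornering stiffnesses [N/rad]
vx = 25.0                  # constant forward speed [m/s]

def step(state, delta, dt=0.01):
    """One Euler step of the linear 2-DOF (bicycle) model.

    state = [vy, r]: lateral velocity [m/s] and yaw rate [rad/s];
    delta: front steering angle [rad].
    """
    vy, r = state
    alpha_f = delta - (vy + a * r) / vx   # front slip angle
    alpha_r = -(vy - b * r) / vx          # rear slip angle
    Fyf, Fyr = Caf * alpha_f, Car * alpha_r  # linear tire forces
    vy_dot = (Fyf + Fyr) / m - vx * r
    r_dot = (a * Fyf - b * Fyr) / Iz
    return np.array([vy + vy_dot * dt, r + r_dot * dt])
```

Feeding an optimized steering-angle sequence through such a model (or a full nonlinear one, as in the paper) is how the resulting lateral-acceleration profile of a lane change can be evaluated.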
In order to gain insight into the nature of human spatial representations, the current study examined how those representations are affected by blind rotation. Evidence was sought on the possibility that whereas certain environmental aspects may be updated independently of one another, other aspects may be grouped (or chunked) together and updated as a unit. Participants learned the locations of an array of objects around them in a room, then were blindfolded and underwent a succession of passive, whole-body rotations. After each rotation, participants pointed to remembered target locations. Targets were located more precisely relative to each other if they were (a) separated by smaller angular distances, (b) contained within the same regularly configured arrangement, or (c) corresponded to parts of a common object. A hypothesis is presented describing the roles played by egocentric and allocentric information within the spatial updating system. Results are interpreted in terms of an existing neural systems model, elaborating the model's conceptualization of how parietal (egocentric) and medial temporal (allocentric) representations interact.

Keywords: spatial memory; spatial updating; egocentric; allocentric; chunking

On the basis of perceptual experience with the immediate environment, humans and other animals construct internal representations of the landmarks, boundaries, and objects that make up that environment. Evidence of these persisting internal representations is provided by the ability to locate objects and landmarks in the absence of ongoing perceptual support (e.g., Kosslyn, Ball, & Reiser, 1978; McNamara, 1986) and by neurophysiological data (e.g., Burgess & O'Keefe, 1996; Cressant, Muller, & Poucet, 1997; Ekstrom et al., 2003).
As an organism navigates through the environment, these internal representations are updated to reflect the changing relationship between the organism and its surroundings (e.g., Müller & Wehner, 1988; Philbeck, Loomis, & Beall, 1997; Rieser, 1989; Waller, Montello, Richardson, & Hegarty, 2002). In the current study, we examined errors that accrue over this spatial updating process for evidence that a representation of a room-sized environment may be composed of "chunks," each of which contains location information for a different part of that environment.

Previous work provides a precedent for the possibility that this type of chunking might occur in spatial memory. Brockmole (2003a, 2003b) provided evidence that as humans navigate through larger environments (e.g., college campuses), they only actively update spatial aspects of the subenvironment (e.g., room) currently inhabited. Thus, it appears that spatial memory for larger environm...