One of the major problems in sensor fusion is that sensors frequently provide spurious observations which are difficult to predict and model. Spurious measurements must be identified and eliminated, since their incorporation in the fusion pool may lead to inaccurate estimation. This paper presents a unified sensor fusion strategy based on a modified Bayesian approach that can automatically identify inconsistency in sensor measurements so that spurious measurements can be eliminated from the data fusion process. The proposed method adds a term to the commonly used Bayesian formulation. This term is an estimate of the probability that the data is not spurious, based upon the measured data and the unknown value of the true state. In fusing two measurements, it has the effect of increasing the variance of the posterior distribution when the measurement from one sensor is inconsistent with that from the other. The increase or decrease in variance can be estimated using the information-theoretic measure of entropy. The proposed strategy was verified with the help of extensive computations performed on simulated data from three sensors. A comparison was made between two different fusion schemes: centralized fusion, in which data obtained from all sensors were fused simultaneously, and a decentralized or sequential Bayesian scheme that proved useful for identifying and eliminating spurious data from the fusion process. The simulations verified that the proposed strategy was able to identify spurious sensor measurements and eliminate them from the fusion process, thus leading to a better overall estimate of the true state. The proposed strategy was also validated with the help of experiments performed using stereo vision cameras, one infrared proximity sensor, and one laser proximity sensor.
The information from these three sensing sources was fused to obtain an occupancy profile of the robotic workspace.

Index Terms: Bayesian approach, decentralized fusion, sensor fusion, sequential fusion, spurious data.
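The two-measurement case described above can be illustrated with a minimal sketch. The abstract does not give the paper's exact formula, so the consistency term here is a hypothetical choice: an exponential weight that decays with the normalized disagreement between the two measurements (the decay rate `kappa` is an assumed parameter). Inflating the fused variance by the inverse of this weight reproduces the qualitative behavior the abstract describes, and the Gaussian differential entropy makes the increase in uncertainty measurable:

```python
import numpy as np

def gaussian_entropy(var):
    # Differential entropy of a 1-D Gaussian: 0.5 * ln(2*pi*e*var)
    return 0.5 * np.log(2 * np.pi * np.e * var)

def fuse_two(z1, var1, z2, var2, kappa=1.0):
    # Standard Bayesian fusion of two Gaussian measurements:
    # information-weighted mean, reduced variance.
    var_f = 1.0 / (1.0 / var1 + 1.0 / var2)
    z_f = var_f * (z1 / var1 + z2 / var2)
    # Hypothetical stand-in for the paper's extra term: the probability
    # that the pair is not spurious, decaying with their normalized
    # squared disagreement.
    d2 = (z1 - z2) ** 2 / (var1 + var2)
    w = np.exp(-kappa * d2)
    # Inflate the posterior variance as consistency drops, so an
    # inconsistent pair yields a wider (higher-entropy) posterior.
    var_post = var_f / max(w, 1e-12)
    return z_f, var_post, w
```

Fusing two nearby measurements leaves the posterior sharper than either input, while fusing two conflicting ones inflates it; comparing `gaussian_entropy` before and after flags the inconsistency.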
This paper presents a sensor fusion strategy based on a Bayesian method that can identify inconsistency in sensor data so that spurious data can be eliminated from the sensor fusion process. The proposed method adds a term to the commonly used Bayesian technique that represents the probabilistic estimate of the event that the data is not spurious, conditioned upon the data and the true state. This term has the effect of increasing the variance of the posterior distribution when data from one of the sensors is inconsistent with the other. The proposed strategy was verified with the help of extensive simulations. The simulations showed that the proposed method was able to identify inconsistency in sensor data and also confirmed that the identification of inconsistency led to a better estimate of the desired state variable.
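The sequential (decentralized) scheme mentioned above can be sketched as fusing measurements one at a time and rejecting any that are inconsistent with the running estimate. The acceptance rule below is an assumption, not the paper's exact criterion: an exponential consistency weight stands in for the probability that the data is not spurious, and the threshold `thresh` is a made-up parameter. Because the entropy of a Gaussian grows monotonically with its variance, rejecting inconsistent data is equivalent to refusing updates that would let spurious readings corrupt the posterior:

```python
import numpy as np

def fuse(m1, v1, m2, v2):
    # Product of two Gaussians: information-weighted mean, reduced variance
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    return v * (m1 / v1 + m2 / v2), v

def sequential_fuse(measurements, variances, prior_mean, prior_var, thresh=0.5):
    """Fuse measurements sequentially; accept a measurement only if its
    consistency weight (a stand-in for the probability that the data is
    not spurious) exceeds `thresh`."""
    mean, var = prior_mean, prior_var
    kept = []
    for z, vz in zip(measurements, variances):
        # Consistency of the new measurement with the current estimate
        w = np.exp(-((z - mean) ** 2) / (var + vz))
        if w > thresh:
            mean, var = fuse(mean, var, z, vz)
            kept.append(z)
    return mean, var, kept
```

With two agreeing sensors and one outlier, the outlier is skipped and the final estimate tracks the consistent pair.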
Techniques originally developed for robot motion planning are applied to compute ingress paths for autonomous air vehicles, such as cruise missiles or Uninhabited Aerial Vehicles (UAVs). This approach is particularly useful in multiobjective optimization problems such as intercepting a target while also maneuvering to minimize observability to ground-based tracking stations. In this case, paths prescribing both position and orientation in three-dimensional space are chosen based on empirical measurements of the airframe's Radar Cross-Section (RCS) as well as target state information. This six-degree-of-freedom motion planning formulation is an alternative to the traditional separation of guidance and autopilot functions and results in an unprecedented degree of guidance and control subsystem integration. This paper presents preliminary results and lays the groundwork for the development of future highly integrated guidance and control systems.
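The multiobjective trade-off described above can be sketched as a scalarized path cost. Everything here is an illustrative assumption rather than the paper's formulation: `path_cost`, the heading-indexed `rcs_lookup` table, and the weight `w_obs` are hypothetical names, and the path is simplified to waypoints carrying a single aspect angle toward a tracking station:

```python
import math

def path_cost(path, target, rcs_lookup, w_obs=1.0):
    """Hypothetical scalarized cost for a candidate ingress path.

    path       -- list of (x, y, z, heading) waypoints
    target     -- (x, y, z) of the intercept point
    rcs_lookup -- callable mapping the aspect angle presented to a
                  tracking station to an empirical RCS value
    w_obs      -- assumed weight trading intercept error vs. observability
    """
    # Terminal miss distance: how far the path's endpoint is from the target
    x, y, z, _ = path[-1]
    miss = math.dist((x, y, z), target)
    # Accumulated observability: RCS presented at each waypoint
    obs = sum(rcs_lookup(heading) for (_, _, _, heading) in path)
    return miss + w_obs * obs
```

A planner would evaluate this cost over many candidate six-degree-of-freedom paths and keep the minimizer, which is how the position-and-orientation coupling enters the search.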
The major thrust of this paper is to develop a sensor model based on a probabilistic approach that can accurately provide information about an individual sensor's uncertainties and limitations. The sensor model aims to provide a maximally informative likelihood function that can be used to obtain a statistical and probabilistic estimate of uncertainties and errors due to environmental parameters or parameters of any feature extraction algorithm used in estimation based on the sensor's outputs. This paper makes use of a neural network that has been trained with the help of a novel technique that obtains its training signal from a maximum likelihood estimator. The proposed technique was applied to model stereo-vision sensors and an Infra-Red (IR) proximity sensor, and information from these sensors was fused in a Bayesian framework to obtain a three-dimensional occupancy profile of objects in the robotic workspace. The capability of the proposed technique to accurately obtain a three-dimensional occupancy profile and efficiently remove individual sensor uncertainties was demonstrated and validated via experiments carried out in the Robotics and Manufacturing Automation (RAMA) Laboratory at Duke University.
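The Bayesian fusion step above can be sketched for the occupancy-profile case. The abstract does not specify the update, so this is a generic log-odds occupancy fusion under a cell-independence assumption; the per-cell probabilities stand in for the outputs of each sensor's learned likelihood model, and `fuse_occupancy` is a hypothetical name:

```python
import numpy as np

def fuse_occupancy(likelihood_maps, prior=0.5):
    """Combine per-sensor occupancy estimates (each an array of
    P(occupied | sensor reading) per cell) in log-odds form, where the
    Bayesian update for independent sensors reduces to a sum."""
    logit = lambda p: np.log(p / (1.0 - p))
    # Start from the prior log-odds for every cell
    l = logit(prior) * np.ones_like(likelihood_maps[0])
    for m in likelihood_maps:
        # Each sensor contributes its evidence relative to the prior
        l += logit(np.clip(m, 1e-6, 1.0 - 1e-6)) - logit(prior)
    # Convert log-odds back to occupancy probability
    return 1.0 / (1.0 + np.exp(-l))
```

Two sensors that each report a cell occupied with probability 0.9 yield a fused probability near 0.99, while uninformative readings (0.5) leave the prior unchanged; this is how agreement between the stereo-vision and IR likelihoods sharpens the occupancy profile.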