Performance measures are derived for data-adaptive hypothesis testing by systems trained on stochastic data. The measures consist of the averaged performance of the system over the ensemble of training sets. The training set-based measures are contrasted with maximum a posteriori probability (MAP) test measures. It is shown that the training set-based and MAP test probabilities are equal if the training set is proportioned according to the prior probabilities of the hypotheses. Applications of training set-based measures are suggested for neural net and training set design.

Hypothesis testing by a data-adaptive system, such as a neural net, is fundamentally different from classical hypothesis testing. In the former, a representative data set, corresponding to known hypotheses, is used to train the system. System parameters are varied until the system's training set-to-hypothesis space mapping best approximates the known map. The assumptions of a sufficiently representative training set and the ability of the system to associate are required to extend the map to arbitrary data [1]. In contrast, classical hypothesis testing derives from an assumed model for the data, often a signal in Gaussian noise, from which optimum tests are defined [2].

In this paper, performance measures are derived based only on the procedure by which an adaptive system is trained. It is assumed that, if a system is perfectly trained on a representative data set for each hypothesis, an appropriate performance estimate is the averaged performance over the ensemble of training sets. This averaged performance, which is computed in terms of training set size and data distributions, reflects an uncertainty inherent in learning from a finite representation of the data. Of course, an exact measure of system performance is obtained by testing the system on an ensemble of independent performance sets. However, in order to predict this performance, an exact model of the system mapping must be known.
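The ensemble-averaged measure described above can be illustrated with a minimal Monte Carlo sketch: a simple training set-based classifier (here a 1-nearest-neighbor rule on a two-hypothesis Gaussian problem) is trained on many random training sets proportioned according to the priors, and its accuracy is averaged over the ensemble. The priors, class means, and set sizes below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

PRIORS = {0: 0.5, 1: 0.5}   # assumed hypothesis priors (illustrative)
MEANS = {0: -1.0, 1: 1.0}   # assumed class-conditional Gaussian means, unit variance

def draw_set(n):
    """Draw a data set proportioned according to the prior probabilities."""
    data = []
    for h, p in PRIORS.items():
        data += [(random.gauss(MEANS[h], 1.0), h) for _ in range(int(n * p))]
    return data

def nn_classify(train, x):
    """1-nearest-neighbor decision: label of the closest training point."""
    return min(train, key=lambda s: abs(s[0] - x))[1]

def averaged_performance(n_train=20, n_sets=200, n_test=200):
    """Correct-classification rate averaged over an ensemble of training sets."""
    total = 0.0
    for _ in range(n_sets):
        train = draw_set(n_train)
        test = draw_set(n_test)
        total += sum(nn_classify(train, x) == h for x, h in test) / len(test)
    return total / n_sets

print(round(averaged_performance(), 3))
```

The spread of per-set accuracies around this average is exactly the finite-training-set uncertainty the paper attributes to learning from a finite representation of the data.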
This is difficult for model-based systems in general, but even more so for adaptive systems, in which the exact mapping is training set-dependent.

In the following, training set-based performance measures are derived for a data-adaptive system on an arbitrary data-based N-hypothesis test. The nearest neighbor classifier is a typical realization of a training set-based system, and one that has been implemented in artificial neural networks [3, 4]. A maximum a posteriori probability (MAP) test is also formulated and represented for a decisioning system with output in [0, 1]^N. A possible neural net representation of the MAP test contains N output neurons. For a net input x, the ith deepest-layer neuron literally outputs p(H_i | x) ∈ [0, 1], the conditional probability of hypothesis H_i, i = 1, . . . , N. This rather stringent condition was obtained in Ref. [5] using a Boltzmann/perceptron net combination to implement the MAP test. It has also been proven that, assuming the training set is s...

This work was sponsored by the Department of the Air Force under contract F19628-90-C-0...
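The MAP test that the N output neurons are meant to realize can be sketched directly: each output computes the posterior p(H_i | x) from an assumed data model, and the decision is the hypothesis with the largest posterior. The two-hypothesis Gaussian densities and priors below are illustrative assumptions standing in for whatever a trained net would approximate.

```python
import math

# Assumed two-hypothesis model: unit-variance Gaussians (illustrative only).
MEANS = {1: -1.0, 2: 1.0}
PRIORS = {1: 0.5, 2: 0.5}

def likelihood(x, h):
    """Class-conditional density p(x | H_h)."""
    return math.exp(-0.5 * (x - MEANS[h]) ** 2) / math.sqrt(2 * math.pi)

def posteriors(x):
    """Posteriors p(H_h | x) -- the values the N output neurons would emit."""
    joint = {h: likelihood(x, h) * PRIORS[h] for h in MEANS}
    z = sum(joint.values())
    return {h: j / z for h, j in joint.items()}

def map_decision(x):
    """MAP test: choose the hypothesis with the largest posterior."""
    post = posteriors(x)
    return max(post, key=post.get)
```

When the training set is proportioned according to the priors, sampling frequencies in the training set converge to these same posteriors, which is the mechanism behind the equality of training set-based and MAP test probabilities claimed in the abstract.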
An architecture for neural net multi-sensor data fusion is introduced and analyzed. This architecture consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained (from a representative data set of the particular sensor) to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. To begin the processing, the 3D point cloud LIDAR data is classified, via multi-dimensional mean-shift segmentation and classification, into clustered objects. Similarly, the multi-band HSI data is spectrally classified by Stochastic Expectation-Maximization (SEM) into a classification map containing pixel classes. For sensor fusion, the spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network consisting of two levels of neural nets. The first level is the sensor level, consisting of two neural nets: a spatial neural net and a spectral neural net. The second level consists of a single neural net, the fusion neural net. The success of the system in exploiting sensor synergism for enhanced classification is clearly demonstrated by applying this architecture to a November 2010 airborne data collection of LIDAR and HSI over the Gulfport, MS, area.
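The cascaded two-level structure described above can be sketched as follows: each sensor-level net maps its own feature to a soft decision in [0, 1], and the fusion net maps the pair of sensor decisions to one overall decision. The logistic units and fixed weights are illustrative placeholders for the trained spatial (LIDAR) and spectral (HSI) nets, not the paper's actual networks.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class SensorNet:
    """Stand-in for a trained single-sensor net: maps one feature to a
    soft hypothesis score in [0, 1]. Weights are illustrative, not trained."""
    def __init__(self, w, b):
        self.w, self.b = w, b

    def decide(self, x):
        return sigmoid(self.w * x + self.b)

class FusionNet:
    """Second-level net: maps the sensor-level decisions to one overall score."""
    def __init__(self, weights, b):
        self.weights, self.b = weights, b

    def fuse(self, decisions):
        z = sum(w * d for w, d in zip(self.weights, decisions)) + self.b
        return sigmoid(z)

# Sensor level: one net per sensor (spatial/LIDAR and spectral/HSI).
spatial = SensorNet(w=2.0, b=-0.5)
spectral = SensorNet(w=1.5, b=0.0)
# Fusion level: a single net over the two sensor decisions.
fusion = FusionNet(weights=[1.0, 1.0], b=-1.0)

def classify(lidar_feature, hsi_feature, threshold=0.5):
    """Cascade: sensor-level soft decisions, then a fused final detection."""
    decisions = [spatial.decide(lidar_feature), spectral.decide(hsi_feature)]
    return fusion.fuse(decisions) > threshold
```

Keeping the sensor nets independent, as in the architecture above, lets each be trained on its own representative data set before the fusion net is trained on their decision outputs.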