2015
DOI: 10.1364/josaa.32.000549

Method for optimizing channelized quadratic observers for binary classification of large-dimensional image datasets

Abstract: We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. …
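The abstract is truncated above, but the optimization setup it describes can be illustrated. Below is a minimal sketch, assuming equal-mean (zero-mean) Gaussian classes, of numerically maximizing one information FOM, the Jeffreys divergence J, over a channel matrix T. This is an illustration only, not the paper's analytic-gradient algorithm; the dimensions and covariances (M, L, K1, K2) are made up, and the log-det terms of J cancel in the equal-mean case.

```python
# Hypothetical sketch: maximize the Jeffreys divergence J between the
# channelized distributions N(0, T K1 T^T) and N(0, T K2 T^T) over T.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, L = 16, 3                                   # image dimension M, channels L
A1 = rng.normal(size=(M, M))
A2 = rng.normal(size=(M, M))
K1 = A1 @ A1.T + M * np.eye(M)                 # illustrative SPD class covariances
K2 = A2 @ A2.T + M * np.eye(M)

def neg_J(t_flat):
    # For zero-mean Gaussians, J = 0.5*tr(S2^{-1} S1 + S1^{-1} S2) - L.
    T = t_flat.reshape(L, M)
    S1 = T @ K1 @ T.T                          # channelized covariances
    S2 = T @ K2 @ T.T
    return -(0.5 * np.trace(np.linalg.solve(S2, S1) + np.linalg.solve(S1, S2)) - L)

res = minimize(neg_J, rng.normal(size=L * M), method="L-BFGS-B")
print("optimized J:", -res.fun)
```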

Cited by 7 publications (5 citation statements) · References 43 publications
“…[2] In this case, an optimal channel matrix can be constructed by using L eigenvectors of $K_2^{-1} K_1$ for the channels, with corresponding eigenvalues $\kappa_l$. We showed in [2] that for J, the Bhattacharyya distance, and the AUC, the eigenvectors are chosen to have the L largest values of $\kappa_l + \kappa_l^{-1}$.…”
Section: Methods
confidence: 99%
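As a concrete reading of this statement, here is a minimal sketch of that channel construction, assuming symmetric positive-definite class covariances K1 and K2. It solves the generalized eigenproblem $K_1 v = \kappa K_2 v$, which shares eigenvectors with $K_2^{-1} K_1$ while avoiding the explicit inverse; the function name is illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def cqo_channels(K1, K2, L):
    # Generalized symmetric-definite eigenproblem K1 v = kappa K2 v.
    kappa, V = eigh(K1, K2)
    # Rank eigenvectors by kappa + 1/kappa and keep the L largest.
    order = np.argsort(kappa + 1.0 / kappa)[::-1]
    return V[:, order[:L]].T        # L x M channel matrix, channels as rows
```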
“…The Bhattacharyya distance has also been used to optimize a single PSG/PSA measurement without a closed-form gradient [5]. The mathematical and empirical relationship between J and AUC is described in [23].…”
Section: PSG/PSA Optimization
confidence: 99%
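For reference, the Bhattacharyya distance named here has a standard closed form between two Gaussians $N(\mu_1, \Sigma_1)$ and $N(\mu_2, \Sigma_2)$; a minimal sketch (function and variable names are illustrative, not from either cited paper):

```python
import numpy as np

def bhattacharyya(mu1, S1, mu2, S2):
    # B = (1/8) d^T S^{-1} d + (1/2) ln(det S / sqrt(det S1 det S2)),
    # with S = (S1 + S2)/2 and d = mu2 - mu1.
    S = 0.5 * (S1 + S2)
    d = mu2 - mu1
    maha = 0.125 * d @ np.linalg.solve(S, d)
    # slogdet for numerical stability on large covariances
    logdet = 0.5 * (np.linalg.slogdet(S)[1]
                    - 0.5 * (np.linalg.slogdet(S1)[1] + np.linalg.slogdet(S2)[1]))
    return maha + logdet
```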
“…Fukunaga and Koontz were the first to suggest covariance matrix eigendecomposition for detection and classification tasks [14]. With certain eigenspectrum assumptions, the Fukunaga-Koontz transform (FKT) is the low-rank approximation to the optimal classifier for zero-mean, heteroscedastic, and normally distributed data [15,16]. An adaptation of the FKT widely used in pattern recognition is called a tuned basis function (TBF) [17].…”
Section: Mathematical Background
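A minimal sketch of the classic FKT construction referenced here, under the assumption of symmetric positive-definite class covariances: whiten by the summed covariance, then eigendecompose one whitened class covariance. The same eigenvectors then diagonalize both whitened classes, with eigenvalue pairs summing to 1.

```python
import numpy as np

def fkt_basis(K1, K2):
    # Whitening transform W such that W (K1 + K2) W = I.
    evals, E = np.linalg.eigh(K1 + K2)
    W = E @ np.diag(evals ** -0.5) @ E.T
    # Eigenvalues of the whitened K1 lie in [0, 1]; values near 1 favor
    # class 1, values near 0 favor class 2.
    lam, U = np.linalg.eigh(W @ K1 @ W)
    return U.T @ W, lam              # FKT channels as rows, plus the spectrum
```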
“…The FKT matrix is populated by L eigenvectors of $K_2^{-1} K_1$ with corresponding eigenvalues $\kappa_l$. The IO AUC can be maximized when these eigenvectors are chosen to have the L largest values of $\kappa_l + \kappa_l^{-1}$ [16]. Then the compressed data is $t = Tg$,…”
Section: Channelized Images: Linear Data Transformation
confidence: 99%
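A short usage sketch of the compression step $t = Tg$ together with a quadratic statistic on the channel outputs (the zero-mean Gaussian log-likelihood ratio). This is an assumption about the surrounding pipeline rather than the cited paper's code; T could come from either construction sketched above, and $S_j = T K_j T^T$ are the channelized covariances.

```python
import numpy as np

def channelize(T, g):
    return T @ g                     # t = T g, the length-L channel outputs

def quadratic_statistic(t, S1, S2):
    # Log-likelihood ratio ln p1(t)/p2(t) for zero-mean Gaussian channels:
    # 0.5 * [t^T (S2^{-1} - S1^{-1}) t + ln det S2 - ln det S1].
    q = t @ (np.linalg.solve(S2, t) - np.linalg.solve(S1, t))
    q += np.linalg.slogdet(S2)[1] - np.linalg.slogdet(S1)[1]
    return 0.5 * q
```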