In the supervised context, we introduce a system composed of a series of novel and efficient algorithms that realizes a non-parametric Bayesian classifier in high dimension. The proposed system searches for the best discriminant subspace in the sense of the minimum probability of classification error, computed using a modified kernel estimate of the class-conditional probability density functions; the Bayesian classification rule is then applied in the reduced subspace. The heuristic consists of four tasks. First, we maximize a novel estimate of the quadratic measure of probabilistic dependence in order to obtain multivariate extractors resulting from a number of different initializations of a given numerical optimization procedure. Second, an estimate of the misclassification error is computed for each solution by the kernel estimate of the conditional probability density functions, with the bandwidth parameter optimal in the sense of the Mean Integrated Squared Error (MISE) as obtained with the plug-in algorithm. Third, the subspace presenting the minimum misclassification value is chosen. The Bayesian classification rule is then operated in the reduced subspace with the MISE-optimal modified kernel estimate. Finally, the different algorithms are applied to a database of grayscale images representing classes of faces, showing their interest in the case of real data.
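The four tasks above can be sketched on toy data. This is a minimal illustration, not the paper's algorithm: random projection directions stand in for the multivariate extractors produced by the numerical optimizer of the dependence measure, and a Silverman-type normal-reference bandwidth stands in for the plug-in MISE-optimal bandwidth; all data sizes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two 10-dimensional Gaussian classes that differ mainly
# along the first coordinate (sizes are illustrative).
d, n = 10, 200
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(0.0, 1.0, size=(n, d))
X1[:, 0] += 3.0
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

def kde_logpdf(train, query, h):
    """1-D Gaussian kernel density estimate evaluated at `query`."""
    diff = (query[:, None] - train[None, :]) / h
    k = np.exp(-0.5 * diff**2) / np.sqrt(2.0 * np.pi)
    return np.log(k.mean(axis=1) / h + 1e-300)

def bayes_error(z, labels):
    """Resubstitution estimate of the misclassification rate of the
    Bayes rule in a 1-D projected subspace, using KDE class-conditional
    densities with a Silverman-type bandwidth (a MISE-motivated
    stand-in for the plug-in bandwidth used in the paper)."""
    logp = np.empty((len(z), 2))
    for c in (0, 1):
        zc = z[labels == c]
        h = 1.06 * zc.std() * len(zc) ** (-1.0 / 5.0)
        logp[:, c] = kde_logpdf(zc, z, h) + np.log(len(zc) / len(z))
    return np.mean(np.argmax(logp, axis=1) != labels)

# Task 1: several candidate linear extractors (random initializations
# stand in for the optimizer of the dependence measure).
candidates = [rng.normal(size=d) for _ in range(20)]
candidates = [w / np.linalg.norm(w) for w in candidates]

# Tasks 2-3: score each candidate by estimated misclassification
# error and keep the subspace with the minimum.
errors = [bayes_error(X @ w, y) for w in candidates]
best = candidates[int(np.argmin(errors))]

# Task 4: the Bayes rule would now operate in the selected subspace.
print("best subspace error:", min(errors))
```

The best of the twenty random directions recovers part of the discriminant axis, so its estimated error falls well below chance, which is all the sketch is meant to show.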
We evaluate here the performance of an estimator of the L 2-measure of probabilistic dependence for linear two-dimensional dimensionality reduction in the multiclass case. This quantity, which has a direct link with the probability of classification error, is constructed using generalized Fourier series. We compare the proposed algorithm, on the one hand, with the linear discriminant analysis introduced by Fisher and, on the other hand, with a version generalized to the multiclass case based on the recursive linear extractor of the L 2-measure of probabilistic dependence. In the non-Gaussian case, this evaluation is done in the sense of the k-nearest-neighbor error. The kernel estimator of the probability densities is computed with the smoothing parameter optimized in the sense of the mean integrated squared error. The latter serves to estimate the probability of classification error for mixtures of Gaussian vectors. We show, on an example of face-image databases, the interest of the proposed dimensionality reducer relative to conventional methods. ABSTRACT. We introduce an estimate of the L 2-probabilistic dependence measure, constructed with generalized Fourier series, that realizes a linear feature dimensionality reduction in the discriminant multi-class problem. It generalizes the Patrick-Fisher distance estimate commonly used for dimensionality reduction of the feature space in binary classification, and it has a direct relationship with the probability of classification error. We compare the proposed algorithm with the well-known linear discriminant analysis (LDA) and with a generalized multi-class version of the recursive linear extractor based on the L 2-probabilistic dependence measure (R1D L 2-PMD). For Gaussian vector mixtures, this comparison is done in the sense of the probability of classification error, which is estimated by a multivariate kernel probability density function.
The corresponding smoothing parameters are optimized analytically in the sense of the Mean Integrated Squared Error (MISE). The non-Gaussian case is evaluated with the error of the k-nearest-neighbor classifier. Finally, we illustrate the importance of the proposed method by testing it in the context of face recognition.
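For Gaussian data, the MISE-optimal smoothing parameter of a Gaussian-kernel density estimate has the closed form h* = (4/(3n))^{1/5} σ (the normal-reference rule); plug-in methods refine this by estimating the unknown density's roughness instead of assuming normality. A minimal sketch of the closed-form rule (sample size and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)  # illustrative 1-D Gaussian sample

# Normal-reference bandwidth: the analytic minimizer of the asymptotic
# MISE of a Gaussian-kernel estimate when the true density is Gaussian.
n = len(x)
h_mise = (4.0 / (3.0 * n)) ** 0.2 * x.std(ddof=1)
print("MISE-optimal bandwidth:", round(h_mise, 3))
```

For n = 500 and unit variance this gives a bandwidth near 0.3; the value shrinks as n^{-1/5}, which is the familiar slow rate of univariate kernel density estimation.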
Here, we introduce a new estimate of the L 2 probabilistic dependence measure by Fourier series for two-dimensional reduction. Its performance is compared to Fisher Linear Discriminant Analysis (LDA) and the Approximate Chernoff Criterion (ACC) in the sense of the classification probability error.