Examples of successful extensions in feature extraction include kernel principal component analysis (KPCA) [3], kernel discriminant analysis (KDA) [2,4,5], kernel-based orthogonal subspace projection [6], kernel Foley-Sammon optimal discriminant vectors [7], and kernel-based matched subspace detectors [8]. Beyond feature extraction, the kernel trick has also been applied to learning machines [9-11] and to other applications [12-14], such as blind source separation [15] and object tracking [16-20], to increase discrimination between the objects of interest. To evaluate the quality of a kernel space, a pool of useful distance measures has been derived and extended to kernel versions, including the Chernoff distance, Bhattacharyya distance, Kullback-Leibler divergence, Patrick-Fisher distance, and Mahalanobis distance [21,22].
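To make the kernel trick underlying these extensions concrete, the following is a minimal sketch (not taken from the cited works): squared distances in the implicit feature space can be computed from kernel evaluations alone, via the identity ||phi(x) - phi(y)||^2 = k(x,x) - 2 k(x,y) + k(y,y). The RBF kernel and the gamma value here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2); gamma = 0.5 is an
    # illustrative choice, not a value taken from the cited papers.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_sq_dist(x, y, kernel=rbf_kernel):
    # Kernel trick: the squared distance between phi(x) and phi(y) in the
    # implicit feature space requires only kernel evaluations:
    #   ||phi(x) - phi(y)||^2 = k(x,x) - 2*k(x,y) + k(y,y)
    return kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)

# Usage: compare two points without ever forming phi(.) explicitly.
x = np.array([1.0, 2.0])
y = np.array([1.5, 1.0])
print(feature_space_sq_dist(x, y))  # ~0.93 for gamma = 0.5
```

The same device drives the kernelized distance measures above: each classical measure is rewritten in terms of inner products, which are then replaced by kernel evaluations.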