The theory of illumination subspaces is well developed and has been tested extensively on the Yale Face Database B (YDB) and CMU-PIE (PIE) data sets. This paper shows that if face recognition under varying illumination is cast as a problem of matching sets of images to sets of images, then the minimal principal angle between subspaces is sufficient to perfectly separate matching pairs of image sets from nonmatching pairs sampled from YDB and PIE. This holds even for subspaces estimated from as few as six images, and even when one subspace is estimated from as few as three images, provided the second is estimated from a larger set (ten or more). This suggests that variation under illumination may be treated as useful discriminating information rather than unwanted noise.
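The minimal principal angle the abstract relies on can be computed directly from orthonormal bases of the two image-set subspaces via a singular value decomposition; a minimal NumPy sketch (the function name and interface are our own, not taken from the paper):

```python
import numpy as np

def min_principal_angle(A, B):
    """Smallest principal angle (radians) between the column spans of A and B.

    The cosines of the principal angles are the singular values of
    Q_A^T Q_B, where Q_A and Q_B are orthonormal bases of the two subspaces.
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    # Clip to guard against floating-point values slightly above 1.
    return np.arccos(np.clip(s.max(), -1.0, 1.0))
```

Subspaces that share any direction yield an angle of 0, while fully orthogonal subspaces yield pi/2; matching image sets should produce a smaller minimal angle than nonmatching ones.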
Recent work has established that digital images of a human face, collected under various illumination conditions, contain discriminatory information that can be used in classification. In this paper we demonstrate that sufficient discriminatory information persists at ultra-low resolution to enable a computer to recognize specific human faces in settings beyond human capabilities. For instance, we applied the Haar wavelet to a collection of images to emulate pictures from a 25-pixel camera. From these modified images, a low-resolution illumination space was constructed for each individual in the CMU-PIE database. Each illumination space was then interpreted as a point on a Grassmann manifold. Classification exploiting the geometry of this manifold yielded error-free classification on this data set. This suggests the general utility of a low-resolution illumination camera for set-based image recognition problems.
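The abstract does not detail the exact wavelet pipeline; a common way to emulate a low-resolution sensor is to keep only the Haar approximation coefficients, which for the unnormalized transform amounts to repeated 2x2 block averaging. A minimal sketch under that assumption (a 160x160 image after five levels becomes 5x5, i.e., 25 pixels):

```python
import numpy as np

def haar_downsample(img, levels):
    """Repeatedly keep the 2-D Haar approximation band (2x2 block average).

    Each level halves both dimensions, discarding the detail coefficients,
    which emulates imagery from a lower-resolution camera.
    """
    out = np.asarray(img, dtype=float)
    for _ in range(levels):
        h, w = out.shape
        out = out[: h - h % 2, : w - w % 2]  # trim odd edges before pairing
        out = 0.25 * (out[0::2, 0::2] + out[1::2, 0::2]
                      + out[0::2, 1::2] + out[1::2, 1::2])
    return out
```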
We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning, drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can produce severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function that provides a quantitative measure of drift correctability for arbitrary scan shapes.
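The key observation above — that the height mismatch at a self-intersection visited at times t1 and t2 equals d(t2) - d(t1) for the drift function d, since the true topography is identical at both visits — can be turned into a least-squares fit. A minimal sketch assuming a polynomial drift model (the paper's actual reconstruction and fitness function are not specified here):

```python
import numpy as np

def fit_drift(t1, t2, dz, degree=3):
    """Fit a polynomial drift model d(t) from self-intersection observations.

    Each intersection i was visited at times t1[i] and t2[i]; the apparent
    height difference is dz[i] = d(t2[i]) - d(t1[i]). Solving a linear
    least-squares system for the polynomial coefficients reconstructs d(t)
    up to an unobservable constant offset.
    """
    # Basis differences t2^k - t1^k for k = 1..degree; the k = 0 term
    # cancels, which is exactly the unknown constant offset.
    A = np.stack([t2**k - t1**k for k in range(1, degree + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return lambda t: sum(c * t**k for k, c in enumerate(coeffs, start=1))
```

With enough well-spread intersections the system is overdetermined, which is why a small number of self-intersections already suffices for a reliable fit.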
We consider the challenge of detecting chemical plumes in hyperspectral image data. Segmentation of a gas cloud is difficult due to its diffusive nature. Hyperspectral imagery provides non-visual data for this problem, allowing a richer array of sensing information to be exploited. We consider several videos of different gases taken against the same background scene. We investigate a technique known as "manifold denoising" to delineate features in the hyperspectral frames; with manifold denoising, the most pertinent eigenvectors are brought to the forefront. One can also simultaneously analyze frames from multiple videos using efficient algorithms for high-dimensional data, such as spectral clustering combined with linear-algebra methods that leverage either subsampling or sparsity in the data. Analysis of multiple frames by the Nyström extension demonstrates the ability to differentiate between different gases while grouping similar items, such as gas or background signatures, together.
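The Nyström extension mentioned above makes spectral methods tractable on large hyperspectral frames by computing eigenvectors on a small landmark subset and extending them to all pixels. A minimal sketch with a Gaussian affinity (the kernel choice and interface here are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def nystrom_eigvecs(X, landmarks, sigma, k):
    """Approximate top-k eigenvectors of a Gaussian affinity matrix over X
    via the Nystrom extension, using only a landmark subset of the rows."""
    L = X[landmarks]

    def gauss(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    W = gauss(L, L)                    # landmark-landmark affinities (small)
    C = gauss(X, L)                    # all-points-to-landmark affinities
    vals, vecs = np.linalg.eigh(W)
    idx = np.argsort(vals)[::-1][:k]   # top-k eigenpairs of the small matrix
    vals, vecs = vals[idx], vecs[:, idx]
    # Extend the landmark eigenvectors to the full data set.
    return C @ vecs / vals
```

The full N x N affinity matrix is never formed; only the N x m block against m landmarks is needed, which is what makes simultaneous analysis of many frames feasible.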
Recent work has established that digital images of a human face, when collected with a fixed pose but under a variety of illumination conditions, possess discriminatory information that can be used in classification. In this paper we perform classification on Grassmannians to demonstrate that sufficient discriminatory information persists in feature-patch (e.g., nose or eye patch) illumination spaces. We further employ the Karcher mean on the Grassmannian to demonstrate that this compressed representation can accelerate computations with a relatively minor sacrifice in performance. The combination of these two ideas introduces a novel perspective on face recognition.
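One standard way to compute a Karcher (Fréchet) mean on the Grassmannian is gradient descent with the manifold's logarithm and exponential maps; a sketch of that generic scheme (the paper's exact algorithm and stopping rule are not specified here):

```python
import numpy as np

def grassmann_log(Y, X):
    """Log map on Gr(p, n): tangent vector at basis Y pointing toward span(X)."""
    ytx = Y.T @ X
    M = (X - Y @ ytx) @ np.linalg.inv(ytx)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def grassmann_exp(Y, H):
    """Exp map on Gr(p, n): follow the geodesic from Y in direction H."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Ynew = Y @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt
    Q, _ = np.linalg.qr(Ynew)  # re-orthonormalize for numerical safety
    return Q

def karcher_mean(subspaces, iters=20, tol=1e-10):
    """Fixed-point iteration for the Karcher mean of orthonormal bases."""
    mu = subspaces[0]
    for _ in range(iters):
        H = np.mean([grassmann_log(mu, Y) for Y in subspaces], axis=0)
        if np.linalg.norm(H) < tol:
            break
        mu = grassmann_exp(mu, H)
    return mu
```

Replacing a collection of patch illumination spaces by a single mean point is what compresses the representation: new patches are then compared against one subspace per class instead of many.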