Person re-identification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images are high resolution (HR), while probe images are usually low resolution (LR) in identification scenarios with large variations in illumination, weather, or camera quality. Person re-identification in such scenarios, which we call super-resolution (SR) person re-identification, has not been well studied. In this paper, we propose a semi-coupled low-rank discriminant dictionary learning (SLDL) approach for the SR person re-identification task. With the HR-LR dictionary pair and the mapping matrices learned from the features of HR and LR training images, SLDL can convert the features of LR probe images into HR features. To ensure that the converted features have favorable discriminative capability and that the learned dictionaries well characterize the intrinsic feature spaces of the HR and LR images, we design a discriminant term and a low-rank regularization term for SLDL. Moreover, considering that low resolution causes different degrees of loss for different types of visual appearance features, we propose a multi-view SLDL (MVSLDL) approach, which learns a type-specific dictionary pair and mappings for each type of feature. Experimental results on multiple publicly available data sets demonstrate the effectiveness of our proposed approaches for the SR person re-identification task.
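The conversion step described in the abstract can be sketched as follows. This is only an illustrative approximation: it uses closed-form ridge coding and a least-squares mapping in place of the paper's sparse coding and low-rank discriminant learning, and all function and variable names (`learn_mapping`, `convert_lr_to_hr`, `D_l`, `D_h`, `W`) are hypothetical, not the authors' notation.

```python
import numpy as np

def learn_mapping(A_h, A_l, lam=0.1):
    """Least-squares mapping W so that A_h ~ W @ A_l, for paired
    HR/LR coding matrices of shape (n_atoms, n_samples).
    (Sketch only; the paper learns the mapping jointly with the
    dictionaries under discriminant and low-rank regularization.)"""
    k = A_l.shape[0]
    return A_h @ A_l.T @ np.linalg.inv(A_l @ A_l.T + lam * np.eye(k))

def convert_lr_to_hr(x_lr, D_l, D_h, W, lam=0.1):
    """Convert one LR probe feature into an HR feature via the
    semi-coupled dictionary pair (D_l, D_h) and code mapping W."""
    k = D_l.shape[1]
    # ridge coding of the LR feature on the LR dictionary
    a_l = np.linalg.solve(D_l.T @ D_l + lam * np.eye(k), D_l.T @ x_lr)
    a_h = W @ a_l              # map LR codes to HR codes
    return D_h @ a_h           # synthesize the HR feature
```

The key property the sketch illustrates is that matching happens in the HR feature space: only the codes, not the raw features, cross the resolution gap.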
Based on the assumption that the low-resolution (LR) and high-resolution (HR) manifolds are locally isometric, neighbor-embedding super-resolution algorithms try to preserve the geometry (reconstruction weights) of the LR space in the reconstructed HR space, but neglect the geometry of the original HR space. Because of the degradation process of the LR image (e.g., noise, blurring, and down-sampling), the neighborhood relationship in the LR space may not reflect the true structure. To this end, this paper proposes a coarse-to-fine face super-resolution approach via a multilayer locality-constrained iterative neighbor embedding technique, which represents the input LR patch while preserving the geometry of the original HR space. In particular, we iteratively update the LR patch representation and the estimated HR patch, and meanwhile an intermediate dictionary learning scheme is employed to bridge the LR manifold and the original HR manifold. The proposed method can faithfully capture the intrinsic image degradation shift and enhance the consistency between the reconstructed HR manifold and the original HR manifold. Experiments on face super-resolution with the CAS-PEAL-R1 database and real-world images demonstrate the power of the proposed algorithm.
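For context, the baseline neighbor-embedding step that this paper iterates and refines can be sketched as below: reconstruct the LR patch from its k nearest LR neighbors with sum-to-one weights, then transfer those weights to the paired HR patches. This is a minimal single-pass sketch of the classic scheme, not the paper's multilayer iterative method; the names (`neighbor_embedding_sr`, `D_lr`, `D_hr`) are hypothetical.

```python
import numpy as np

def neighbor_embedding_sr(x_lr, D_lr, D_hr, k=5):
    """Estimate an HR patch from an LR patch by transferring the
    locality-constrained reconstruction weights of the k nearest
    LR neighbors onto the paired HR patches.
    D_lr: (n, d_lr) LR patch set; D_hr: (n, d_hr) paired HR patches."""
    # k nearest LR neighbors of the input patch
    idx = np.argsort(np.linalg.norm(D_lr - x_lr, axis=1))[:k]
    N = D_lr[idx]                          # (k, d_lr)
    # sum-to-one reconstruction weights (as in locally linear embedding)
    C = N - x_lr                           # neighbors centered on the query
    G = C @ C.T
    G += 1e-8 * np.trace(G) * np.eye(k)    # regularize near-singular Gram
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()
    # transfer the LR-space weights to the paired HR patches
    return w @ D_hr[idx]
```

The abstract's criticism applies exactly here: `idx` and `w` are computed purely from degraded LR patches, so the transferred geometry need not match the original HR manifold, which is what the iterative, HR-geometry-preserving formulation addresses.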
Image annotation has attracted much research interest, and multi-label learning is an effective technique for it. Effectively exploiting the underlying correlation among labels is a crucial task for multi-label learning. Most existing multi-label learning methods exploit the label correlation only in the output label space, leaving the connection between labels and image features untouched. Although some recent methods attempt to exploit the label correlation in the input feature space by using the label information, they cannot effectively conduct the learning process in both spaces simultaneously, and there is still much room for improvement. In this paper, we propose a novel multi-label learning approach, named multi-label dictionary learning (MLDL) with label consistency regularization and partial-identical label embedding, which conducts dictionary learning and partial-identical label embedding simultaneously. In the input feature space, we incorporate the dictionary learning technique into multi-label learning and design a label consistency regularization term to learn a better representation of features. In the output label space, we design the partial-identical label embedding, in which samples with exactly the same label set cluster together, and samples with partially identical label sets collaboratively represent each other. Experimental results on three widely used image data sets, Corel 5K, IAPR TC12, and ESP Game, demonstrate the effectiveness of the proposed approach.
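The label-consistency idea, coupling the learned codes to the label matrix while reconstructing features, can be illustrated with a small alternating least-squares sketch. It uses ridge coding instead of sparse coding and omits the partial-identical label embedding entirely, so it is a rough sketch of one ingredient, not the paper's method; all names and parameters (`mldl_sketch`, `alpha`, `beta`) are hypothetical.

```python
import numpy as np

def mldl_sketch(Y, Q, n_atoms=16, alpha=1.0, beta=0.1, n_iter=20, seed=0):
    """Alternating updates for
        min_{D,A,X} ||Y - D X||^2 + alpha ||Q - A X||^2 + beta ||X||^2,
    where the alpha-term is a label-consistency regularizer.
    Y: (d, n) features; Q: (q, n) binary label matrix."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = rng.standard_normal((Q.shape[0], n_atoms))
    for _ in range(n_iter):
        # coding update: closed-form ridge solution of the stacked system
        M = D.T @ D + alpha * A.T @ A + beta * np.eye(n_atoms)
        X = np.linalg.solve(M, D.T @ Y + alpha * A.T @ Q)
        # dictionary update: least squares in D, then renormalize atoms
        G = X @ X.T + 1e-8 * np.eye(n_atoms)
        D = np.linalg.solve(G, X @ Y.T).T
        norms = np.linalg.norm(D, axis=0) + 1e-12
        D /= norms                 # keep atoms unit norm
        X *= norms[:, None]        # compensate so D @ X is unchanged
        # label-consistency update: least squares in A on the new codes
        G2 = X @ X.T + 1e-8 * np.eye(n_atoms)
        A = np.linalg.solve(G2, X @ Q.T).T
    return D, A, X
```

Because the codes `X` must simultaneously reconstruct the features `Y` and predict the labels `Q`, samples sharing labels are pushed toward similar codes, which is the feature-space half of the correlation exploitation the abstract describes.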