Abstract. Small sample size is one of the most challenging problems in face recognition due to the difficulty of sample collection in many real-world applications. By representing the query sample as a linear combination of the training samples from all classes, the so-called collaborative representation based classification (CRC) achieves very effective face recognition performance at low computational cost. However, the recognition rate of CRC drops dramatically when the available training samples per subject are very limited. One intuitive solution to this problem is to operate CRC on patches and combine the recognition outputs of all patches. Nonetheless, setting the patch size is a nontrivial task. Considering that patches at different scales carry complementary information for classification, we propose a multi-scale patch based CRC method, in which the ensemble of multi-scale outputs is achieved by regularized margin distribution optimization. Our extensive experiments validate that the proposed method outperforms many state-of-the-art patch based face recognition algorithms.
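The CRC coding step described above (representing a query as a linear combination of all training samples, then classifying by class-wise residual) can be sketched as follows; this is a minimal ridge-regularized illustration, not the paper's multi-scale patch ensemble, and the regularization value is an assumed default:

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-3):
    """Minimal CRC sketch.

    X: (d, n) matrix whose columns are l2-normalized training samples,
    labels: length-n array of class labels, y: (d,) query sample.
    """
    n = X.shape[1]
    # Ridge-regularized collaborative coding: alpha = (X^T X + lam*I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # Class-specific reconstruction residual, normalized by coefficient energy
        resid = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        score = resid / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best, best_score = c, score
    return best
```

A patch-based variant would run this classifier on each patch (at each scale) and combine the per-patch decisions, which is where the paper's margin distribution optimization enters.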
Abstract: Local feature based face recognition (FR) methods, such as Gabor features encoded by local binary patterns, can achieve state-of-the-art FR results on large-scale face databases such as FERET and FRGC. However, the time and space complexity of the Gabor transformation is too high for many practical FR applications. In this paper, we propose a new and efficient local feature extraction scheme, namely monogenic binary coding (MBC), for face representation and recognition. Monogenic signal representation decomposes an original signal into three complementary components: amplitude, orientation and phase. We encode the monogenic variation in each local region and the monogenic feature at each pixel, and then calculate the statistical features (e.g., histogram) of the extracted local features. The local statistical features extracted from the complementary monogenic components (i.e., amplitude, orientation and phase) are then fused for effective FR. It is shown that the proposed MBC scheme has significantly lower time and space complexity than the Gabor-transformation based local feature methods. Extensive FR experiments on four large-scale databases demonstrate the effectiveness of MBC, whose performance is competitive with and even better than state-of-the-art local feature based FR methods.
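The monogenic decomposition into amplitude, orientation and phase can be sketched with an FFT-based Riesz transform; this covers only the first stage of MBC (in practice the image is band-pass filtered first, and the binary coding and histogram steps that the paper adds are omitted here):

```python
import numpy as np

def monogenic_components(img):
    """Decompose a 2-D image into monogenic amplitude, orientation and phase
    via an FFT-based Riesz transform (sketch of MBC's first stage only)."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols).reshape(1, -1)
    v = np.fft.fftfreq(rows).reshape(-1, 1)
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0  # avoid division by zero at the DC component
    F = np.fft.fft2(img)
    # Riesz transform pair: the two odd components of the monogenic signal
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
    amplitude = np.sqrt(img ** 2 + r1 ** 2 + r2 ** 2)
    orientation = np.arctan2(r2, r1)
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), img)
    return amplitude, orientation, phase
```

MBC then encodes local variations of each of these three maps and pools histograms over regions, which is what keeps its cost below a full multi-scale, multi-orientation Gabor transform.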
In this paper we present a novel approach that uses mathematical models and stochastic simulations to guide and inform security investment and policy change decisions. In particular, we investigate vulnerability management policies, explore how effective standard patch management and emergency escalation based policies are, and show how they can be combined with earlier, pre-patch mitigation measures to reduce the potential exposure window. The paper describes the model we constructed to represent typical vulnerability management processes in large organizations, which captures the external threat environment and the internal security processes and decision points. We also present the results from the experimental simulations, and show how changes in security solutions and policies, such as speeding up patch deployment and investing in early mitigation measures, affect the overall exposure window in terms of the time it takes to reduce the potential risk. We believe that this type of mathematical modelling and simulation-based approach provides a novel and useful way of considering security investment decisions, one that is quite distinct from traditional risk analysis.
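The kind of stochastic simulation described can be illustrated with a tiny Monte Carlo sketch; all the delay distributions and rate values below are hypothetical illustration choices, not the paper's calibrated model:

```python
import random

def mean_exposure_window(n_runs=10_000, mitigation=False, seed=1):
    """Monte Carlo sketch of a vulnerability exposure window.

    Exposure runs from disclosure (t = 0) until the patch is deployed or,
    when early mitigation is in place, until the mitigation lands first.
    All parameters are hypothetical, for illustration only.
    """
    random.seed(seed)
    total = 0.0
    for _ in range(n_runs):
        patch_release = random.expovariate(1 / 10.0)  # vendor patch, mean ~10 days
        deployment = random.expovariate(1 / 14.0)     # internal rollout, mean ~14 days
        closed_at = patch_release + deployment
        if mitigation:
            # e.g. a firewall rule or config change landing much earlier
            mitigated_at = random.expovariate(1 / 3.0)
            closed_at = min(closed_at, mitigated_at)
        total += closed_at
    return total / n_runs
```

Comparing `mean_exposure_window(mitigation=False)` with `mean_exposure_window(mitigation=True)` shows, in miniature, the effect the paper studies: early pre-patch mitigation sharply shortens the expected exposure window even when patch deployment itself is unchanged.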
A reliable and accurate identification of the type of tumors is crucial to the proper treatment of cancers. In recent years, it has been shown that sparse representation (SR) by l1-norm minimization is robust to noise, outliers and even incomplete measurements, and SR has been successfully used for classification. This paper presents a new SR based method for tumor classification using gene expression data. A set of metasamples is extracted from the training samples, and then an input testing sample is represented as a linear combination of these metasamples by the l1-regularized least squares method. Classification is achieved by using a discriminating function defined on the representation coefficients. Since l1-norm minimization leads to a sparse solution, the proposed method is called metasample based SR classification (MSRC). Extensive experiments on publicly available gene expression datasets show that MSRC is efficient for tumor classification, achieving higher accuracy than many existing representative schemes.
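The l1-regularized least squares coding and residual-based discriminating function can be sketched as below; this is a generic SRC-style illustration using ISTA for the l1 problem and raw training samples as dictionary columns, whereas MSRC would use metasamples extracted from the training data:

```python
import numpy as np

def ista_lasso(A, y, lam=0.05, n_iter=500):
    """l1-regularized least squares via ISTA:
    min_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def sr_classify(A, labels, y, lam=0.05):
    """Assign y to the class whose coefficients best reconstruct it
    (a residual-based discriminating function on the sparse code)."""
    x = ista_lasso(A, y, lam)
    residuals = {c: np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```

Because the l1 penalty drives most coefficients to zero, the surviving coefficients tend to concentrate on one class's columns, which is what makes the per-class residual discriminative.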