Face benchmarks empower the research community to train and evaluate high-performance face recognition systems. In this paper, we contribute a new million-scale face recognition benchmark, consisting of uncurated 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M) of training data, as well as an elaborately designed time-constrained evaluation protocol. First, we collect a 4M name list and download 260M faces from the Internet. Then, a Cleaning Automatically utilizing Self-Training (CAST) pipeline is devised to purify the enormous WebFace260M; the pipeline is efficient and scalable. To the best of our knowledge, the cleaned WebFace42M is the largest public face recognition training set, and we expect it to close the data gap between academia and industry. With practical deployments in mind, the Face Recognition Under Inference Time conStraint (FRUITS) protocol and a new test set with rich attributes are constructed. In addition, we gather a large-scale masked-face subset for biometrics assessment under COVID-19. For a comprehensive evaluation of face matchers, three recognition tasks are performed under standard, masked, and unbiased settings, respectively. Equipped with this benchmark, we delve into million-scale face recognition problems. A distributed framework is developed to train face recognition models efficiently without compromising performance. Enabled by WebFace42M, we reduce the relative failure rate on the challenging IJB-C set by 40% and rank 3rd among 430 entries on NIST-FRVT. Even 10% of the data (WebFace4M) shows superior performance compared with public training sets. Furthermore, comprehensive baselines are established under the FRUITS-100/500/1000 millisecond protocols, covering the MobileNet, EfficientNet, AttentionNet, ResNet, SENet, ResNeXt, and RegNet families. The proposed benchmark shows enormous potential for standard, masked, and unbiased face recognition scenarios. Our WebFace260M website is https://www.face-benchmark.org.
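The FRUITS protocol evaluates face matchers under a fixed per-face inference-time budget (100/500/1000 ms). A minimal sketch of such a time-constrained check is shown below; the function names and the trivial placeholder model are illustrative assumptions, not the benchmark's actual API:

```python
import time
import statistics

def embed(face):
    """Placeholder standing in for a real face-recognition model's forward pass."""
    return [sum(face), len(face)]

def passes_fruits(model, faces, budget_ms):
    """Return True if the median per-face inference time fits the time budget."""
    timings_ms = []
    for face in faces:
        start = time.perf_counter()
        model(face)
        timings_ms.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings_ms) <= budget_ms

faces = [[0.1, 0.2, 0.3]] * 10
print(passes_fruits(embed, faces, budget_ms=100))
```

A real harness would time the full pipeline (detection, alignment, feature extraction, and matching) on fixed hardware, which is what makes lightweight families such as MobileNet relevant under the tighter budgets.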
The study of the architectural heritage of the Chinese diaspora plays an important role, against China's historical and cultural background, in the preservation of cultural data, the restoration of images, and the analysis of social and ideological conditions. Images of this architectural heritage typically include frescos, decorative patterns, chandelier base patterns, and various architectural styles, among other major types. The research object of this study is the architectural heritage of the Chinese diaspora in Jiangmen City, Guangdong Province, China: a total of 5073 images of diaspora Chinese buildings were collected from 64 villages and 16 towns. Because features differ greatly across image types but only slightly within a type, this study uses deep learning to design the Convolutional Neural Network Attention Retrieval Framework (CNNAR Framework). The approach has two stages. In the first stage, transfer learning is used to classify the query image: parameters from a source network pretrained on the Paris500K dataset are transferred to the target network for training, yielding the image's class. The advantage of this step is that it narrows the retrieval range for the target image. In the second stage, a fused attention mechanism extracts features from the classified images, and a contrastive loss reduces the distance between similar images of the same type. At retrieval time, the features extracted in the second stage are used to measure similarity between images and return the retrieval results. The results show that the classification accuracy of the proposed method reaches 98.3% on the JMI Chinese diaspora architectural heritage image dataset.
The mean Average Precision (mAP) of the proposed algorithm reaches 76.6%, outperforming several mainstream models, and the images it retrieves are highly similar to the query image. In addition, the proposed CNNAR retrieval framework achieves accuracies of 71.8% and 72.5% on the public Paris500K and Corel5K datasets, respectively, demonstrating strong generalization; it can therefore also be applied effectively to datasets on other topics. The JMI architectural heritage image database constructed in this study, rich in the cultural connotations of diaspora Chinese homeland life, can provide strong and reliable data support for follow-up studies of the zeitgeist reflected in the architecture and of the integration of Chinese and Western aesthetics. At the same time, through rapid identification, classification, and retrieval of the precious architectural images stored in the database, similar target images can be found accurately, providing technical support for restoring old and damaged architectural heritage artifacts.
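The retrieval step described above compares stage-two features by a similarity measure and returns the closest gallery images. A minimal sketch of this nearest-neighbor step using cosine similarity follows; the 2-D feature values and image names are toy illustrations, not descriptors produced by the actual CNNAR framework:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_feat, gallery, top_k=3):
    """Rank gallery images by similarity to the query and return the top-k names."""
    ranked = sorted(gallery.items(),
                    key=lambda kv: cosine(query_feat, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy 2-D features standing in for attention-refined CNN descriptors.
gallery = {"fresco_1": [1.0, 0.0], "pattern_1": [0.0, 1.0], "fresco_2": [0.9, 0.1]}
print(retrieve([1.0, 0.1], gallery, top_k=2))  # → ['fresco_2', 'fresco_1']
```

Restricting the gallery to the class predicted in stage one, as the paper does, shrinks this search space before the similarity ranking is applied.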