In daily life, we see images of real objects on posters, on television, and on virtually any smooth physical surface. We seldom confuse these images with the objects themselves, mainly thanks to contextual information from the surrounding environment and nearby objects. Without this context, distinguishing an object from an image of it becomes subtle; this is precisely the effect a large immersive display aims to achieve. In this work, we study and address a problem that mirrors this recognition problem: distinguishing images of true natural scenes from recaptured ones. The ability to detect recaptured images makes robot vision more intelligent and makes a single-image countermeasure against rebroadcast attacks on face authentication systems feasible. This work is timely, as face authentication is becoming common on consumer mobile devices such as smartphones and laptop computers. We present a physical model of the image recapturing process, and features derived from this model are used in a recaptured image detector. Our physics-based method outperforms a statistics-based method by a significant margin on images of VGA (640×480) and QVGA (320×240) resolutions, which are common on mobile devices. In our study, we find that, apart from contextual information, the unique properties of the recaptured image rendering process are crucial for this recognition problem.
Image recapture detection (IRD) distinguishes real-scene images from recaptured ones. The ability to detect recaptured images makes a single-image countermeasure against rebroadcast attacks on face authentication systems feasible; it allows general object recognition to differentiate objects on a poster from real ones, making robot vision more intelligent; and it enables the detection of composite images when recapture is used to cover composition clues. As more and more methods are proposed for IRD, an open database is indispensable as a common platform for comparing the performance of different methods and for expediting further research and collaboration in the field. This paper describes a recaptured image database acquired with smartphone cameras, which represent the middle to low end of the consumer camera market. The database includes real-scene images and the corresponding recaptured ones; it targets the evaluation of image recapture detection classifiers and provides a reliable data source for modeling the physical process that produces recaptured images. This work makes three main contributions. First, we construct a challenging database of recaptured images, the only publicly open database to date. Second, the database is built with smartphone cameras, which will promote research on algorithms suitable for consumer electronics applications. Third, the real-scene images and the recaptured images come in content-matched pairs, which makes modeling of the recapture process possible.
Commonly used template classification for celestial spectra often fails on low signal-to-noise ratio (S/N) spectra, which are numerous in spectroscopic surveys. In the sixth data release of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST DR6 V1), more than 0.7 million low-quality spectra could not be classified by the LAMOST pipeline and were archived as "UNKNOWN." To recognize as many objects as possible among these low-S/N "UNKNOWN" spectra, a one-dimensional convolutional neural network (CNN) classifier was adapted from the widely used two-dimensional CNN. Two CNN-based classifiers were applied: one distinguishing galaxies, QSOs, and stars, and one discriminating stellar subtypes. To address the imbalance of training samples among classes in the stellar classifier, a semi-supervised learning algorithm combining two CNNs with a Spectral Generative Adversarial Network (SGAN) was introduced to produce artificial spectra for the minority O type. The SGAN solution mitigates the overfitting caused by the imbalanced training set better than over-sampling does. The trained CNN classifiers were applied to classify the "UNKNOWN" spectra into candidate galaxies, QSOs, and stars, and to further classify the star candidates into spectral subclasses O to M. The CNN assigns each spectrum a spectral type with an associated probability; 101,082 stellar spectra with probability greater than 99% were retained, making up a supplemental star catalog of LAMOST DR6 that includes 294 O, 2,850 B, 269 A, 6,431 F, 626 G, 60,527 K, and 30,085 M type spectra. To verify the catalog, the distances from the recognized spectra to the corresponding templates in each class were also checked against those of known spectra.
In addition, 200 O-type stars were manually confirmed out of the 294 automatically identified O-type stars in the catalog, because O-type spectra have weak features and are easily confused with no-signal spectra. The classification results produced in this work are available at http://paperdata.china-vo.org/Classification_SGAN/result.zip.
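The abstract does not specify the architecture of the 1-D CNN, so as a rough illustration only, the sketch below shows how a one-dimensional convolutional classifier maps a spectrum to class probabilities (convolve, apply ReLU, pool globally, then softmax over a linear layer) and how a 99% probability threshold could gate catalog inclusion. All shapes, filter counts, and weights here are hypothetical, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution of a spectrum x (length L) with a bank of
    kernels of shape (n_filters, k), followed by ReLU."""
    n_filters, k = kernels.shape
    out_len = len(x) - k + 1
    out = np.empty((n_filters, out_len))
    for i in range(out_len):
        out[:, i] = kernels @ x[i : i + k]
    return np.maximum(out, 0.0)  # ReLU activation

def classify(spectrum, kernels, weights):
    """Forward pass: conv -> global max pooling -> linear -> softmax."""
    feats = conv1d(spectrum, kernels)   # (n_filters, L - k + 1)
    pooled = feats.max(axis=1)          # global max pooling -> (n_filters,)
    logits = weights @ pooled           # (n_classes,)
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# Toy example: a random "spectrum", 8 filters of width 11, 3 classes
# (e.g., galaxy / QSO / star).
spectrum = rng.standard_normal(3891)
kernels = rng.standard_normal((8, 11)) * 0.1
weights = rng.standard_normal((3, 8)) * 0.1

probs = classify(spectrum, kernels, weights)
# A spectrum would enter the supplemental catalog only when the top
# class probability exceeds 0.99, mirroring the paper's threshold:
confident = probs.max() > 0.99
```

In the real system the kernels and weights are learned from labeled LAMOST spectra; this sketch only demonstrates the shape of the computation.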