In the present paper, we propose a source camera identification (SCI) method for mobile devices based on deep learning. Recently, convolutional neural networks (CNNs) have shown remarkable performance on several tasks such as image recognition, video analysis, and natural language processing. A CNN consists of a set of layers, where each layer is composed of a set of high-pass filters that are convolved over the input image. This convolution process provides the ability to extract features automatically from data and to learn from those features. Our proposal describes a CNN architecture able to infer the noise pattern of mobile camera sensors (also known as the camera fingerprint), with the aim of detecting and identifying not only the mobile device used to capture an image (with 98% accuracy), but also the specific embedded camera with which the image was captured. More specifically, we provide an extensive analysis of the proposed architecture under different configurations. The experiments were carried out using images captured by the cameras of different mobile devices (MICHE-I dataset), and the obtained results prove the robustness of the proposed method.
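The high-pass filtering step that the abstract attributes to the CNN's layers can be illustrated in isolation. The sketch below is a minimal, hedged example: it uses a fixed 3x3 Laplacian-style kernel and a synthetic flat image, not the paper's learned filters, to show how convolution with a high-pass kernel suppresses scene content and leaves only sensor-noise-like residuals.

```python
import numpy as np

# Illustrative high-pass kernel (sums to zero); the paper's CNN learns its
# filters from data rather than fixing them like this.
KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def high_pass_residual(img: np.ndarray) -> np.ndarray:
    """Convolve a grayscale image with a high-pass kernel (valid mode),
    keeping only high-frequency content such as sensor noise."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * KERNEL)
    return out

img = np.ones((5, 5))           # a flat patch has no high-frequency content...
print(high_pass_residual(img))  # ...so the residual is all zeros
```

On real photographs the residual is dominated by the sensor's noise pattern, which is what makes it usable as a camera fingerprint.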
Physiological measures are widely studied from a medical point of view. Most applications lie in the diagnosis of heart attacks, in the case of the ECG, or the detection of epileptic events, in the case of the EEG. In the last ten years, these signals have also been investigated from a biometric point of view, in order to exploit their discriminative capability in recognizing individuals. The present work proposes a multimodal biometric recognition system based on the fusion of the first lead (I) of the electrocardiogram (ECG) with six different bands of the electroencephalogram (EEG). The proposed approach is based on the extraction of fiducial features (peaks) from the ECG, combined with spectral features of the EEG. A dataset was created by combining the signals of two well-known databases. The results, reported by means of EER values, AUC values, and ROC curves, show good recognition performance.
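The two feature families described above can be sketched with synthetic signals. This is a hedged illustration, not the paper's pipeline: the sampling rate, peak threshold, and band edges are assumptions, and the real system fuses these features into a matching score rather than merely extracting them.

```python
import numpy as np

def rpeak_intervals(ecg: np.ndarray, fs: float, thresh: float = 0.5) -> np.ndarray:
    """Fiducial features: locate R-peak-like local maxima above a threshold
    and return the inter-peak intervals in seconds."""
    peaks = [i for i in range(1, len(ecg) - 1)
             if ecg[i] > thresh and ecg[i] > ecg[i - 1] and ecg[i] >= ecg[i + 1]]
    return np.diff(peaks) / fs

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Spectral features: mean FFT power of the signal in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

fs = 100.0
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[::100] = 1.0                 # synthetic 1 Hz "heartbeat" impulses
eeg = np.sin(2 * np.pi * 10 * t) # a pure 10 Hz tone in the alpha band

print(rpeak_intervals(ecg, fs))  # inter-beat intervals of 1.0 s
print(band_power(eeg, fs, 8, 13) > band_power(eeg, fs, 0.5, 4))  # True
```

A per-band power value computed this way for each of the six EEG bands, concatenated with the ECG interval statistics, gives the kind of multimodal feature vector the fusion operates on.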
Despite the success obtained in face detection and recognition over the last ten years of research, the analysis of facial attributes still represents a trending topic. Setting full face recognition aside, exploring the potential of soft biometric traits, i.e. singular facial traits such as the nose, the mouth, the hair, and so on, is still considered a fruitful field of investigation. Being able to infer the identity of an occluded face, e.g. one voluntarily occluded by sunglasses or accidentally occluded due to environmental factors, can be useful in a wide range of operative fields where user collaboration cannot be assumed. This is especially true in forensic scenarios, in which it is not unusual to have partial face photos or partial fingerprints. In this paper, an unsupervised clustering approach is described. It consists of a neural network model for facial attribute recognition based on transfer learning, whose goal is to group faces according to common facial features. Moreover, we use the features collected in each cluster to provide a compact and comprehensive description of the faces belonging to that cluster, and deep learning as a means of task prediction in partially visible faces.
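The grouping step can be sketched as follows: embeddings from a frozen, pretrained backbone are clustered without labels. In this hedged toy example, a fixed random projection stands in for the transfer-learned CNN, and a minimal k-means with deterministic farthest-point initialisation does the clustering; none of these choices are the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images: np.ndarray, dim: int = 8) -> np.ndarray:
    """Stand-in for a pretrained backbone: a fixed linear projection.
    (Assumption for illustration only; the paper uses a transfer-learned CNN.)"""
    proj = np.random.default_rng(42).standard_normal((images.shape[1], dim))
    return images @ proj

def kmeans(X: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Minimal k-means with farthest-point initialisation: assign each
    face embedding to its nearest centroid, then update the centroids."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.stack(centers).astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Two well-separated synthetic "attribute" groups cluster apart.
faces = np.vstack([rng.normal(0, 0.1, (10, 16)), rng.normal(5, 0.1, (10, 16))])
labels = kmeans(extract_features(faces), k=2)
print(labels)  # each half of the data receives a single, distinct label
```

In the paper's setting the clusters correspond to shared facial attributes, so the per-cluster feature statistics double as a compact description of the faces they contain.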