Abstract-Most digital image forgery detection techniques require the suspect image to be uncompressed and of high quality. However, most image acquisition and editing tools use the JPEG standard for image compression. The histogram of Discrete Cosine Transform (DCT) coefficients carries information about the compression parameters of JPEG images and previously compressed bitmaps. In this paper we present a straightforward method to estimate the quantization table from the peaks of the histogram of DCT coefficients. The estimated table is then used with two distortion measures to classify images as untouched or forged. Testing the procedure on a large set of images gave an average estimation accuracy of 80%, which rises to 88% as the quality factor increases. Forgery detection tests on four different types of tampering resulted in average false negative rates of 7.95% and 4.35% for the two measures, respectively.
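The core idea of quantization-step estimation from histogram peaks can be sketched as follows: after JPEG quantization, the dequantized DCT coefficients at a given frequency cluster at integer multiples of the quantization step, so the histogram shows peaks spaced that step apart. This is a minimal illustrative sketch, not the paper's actual method; the function name, the peak threshold, and the synthetic Laplacian coefficient model are all assumptions made for the example.

```python
import numpy as np

def estimate_q_step(coeffs, min_count=20):
    """Estimate a quantization step from a population of DCT coefficients.

    Dequantized coefficients lie near integer multiples of the step q,
    so histogram bins with significant mass ("peaks") are spaced q apart.
    We take the most common gap between consecutive peaks as the estimate.
    """
    vals = np.round(coeffs).astype(int)
    lo = vals.min()
    hist = np.bincount(vals - lo)
    # Bins with non-negligible mass are treated as histogram peaks.
    peaks = np.flatnonzero(hist >= min_count) + lo
    gaps = np.diff(peaks)
    gaps = gaps[gaps > 0]
    if len(gaps) == 0:
        return 1  # no periodic structure detected; assume unquantized
    return int(np.bincount(gaps).argmax())

# Simulate one JPEG-quantized coefficient population with step q = 12:
# AC DCT coefficients are commonly modeled as Laplacian-distributed.
rng = np.random.default_rng(0)
raw = rng.laplace(scale=40.0, size=20000)
quantized = np.round(raw / 12) * 12
print(estimate_q_step(quantized))  # should recover 12 under this simulation
```

Running this per DCT frequency over all 8x8 blocks of an image would yield an estimate for each entry of the quantization table, which is the quantity the distortion measures above consume.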
Despite the importance of liver segmentation in medical images for efficient noninvasive diagnosis, few studies in the literature address fully automated liver segmentation in Magnetic Resonance Imaging (MRI) compared with Computed Tomography (CT) scans. Motivated by this, we propose an adaptive, fully automatic liver segmentation method for MRI images based on thresholding and Bayesian classification. Bayesian classification has proved highly robust to various image degradations. It requires only a small amount of training data to estimate the parameters necessary for classification, a major advantage in medical applications. Furthermore, the Bayesian model remains robust when large uncertainties are involved in medical image analysis problems. The proposed method was successfully tested on many MRI cases of various sizes acquired from different patients. Experiments demonstrated the robustness of the proposed automatic liver segmentation process even on data from different scanner types. The segmentation accuracy of the model reaches a mean Dice Similarity Coefficient (DSC) of 95.5% on MRI datasets.
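A Bayesian pixel classifier of the kind described above can be sketched in a few lines: per-class Gaussian intensity models are fitted from a small labeled sample, and each pixel receives the label with the highest posterior. This is a hedged sketch assuming a simple Gaussian intensity model and synthetic data; the function names and all numeric values are illustrative, not taken from the paper.

```python
import numpy as np

def fit_gaussian_classes(samples):
    """Estimate per-class (mean, std, prior) from labeled intensity
    samples: {label: 1-D array of training intensities}. Only a small
    training sample per class is needed, as noted in the abstract."""
    total = sum(len(v) for v in samples.values())
    return {label: (np.mean(v), np.std(v) + 1e-6, len(v) / total)
            for label, v in samples.items()}

def classify_pixels(image, params):
    """Assign each pixel the label maximizing the posterior
    p(label | intensity) proportional to p(intensity | label) * p(label)."""
    x = image.astype(float)
    best_label, best_post = None, None
    for label, (mu, sigma, prior) in params.items():
        # Log-posterior up to an additive constant shared by all classes.
        post = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) + np.log(prior)
        if best_post is None:
            best_label, best_post = np.full(image.shape, label), post
        else:
            best_label = np.where(post > best_post, label, best_label)
            best_post = np.maximum(post, best_post)
    return best_label

# Synthetic example: a bright "organ" region on a darker background.
rng = np.random.default_rng(1)
image = rng.normal(50, 10, (64, 64))
image[16:48, 16:48] = rng.normal(150, 10, (32, 32))
params = fit_gaussian_classes({0: rng.normal(50, 10, 100),
                               1: rng.normal(150, 10, 100)})
labels = classify_pixels(image, params)

# Dice Similarity Coefficient against the known ground truth.
truth = np.zeros((64, 64), dtype=int)
truth[16:48, 16:48] = 1
dsc = 2 * np.logical_and(labels == 1, truth == 1).sum() / (
    (labels == 1).sum() + (truth == 1).sum())
```

The DSC computed at the end is the same overlap measure the abstract reports (95.5% on real MRI data); on this clean synthetic image it is essentially 1.0.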
Abstract-Face recognition applications are widely used in fields such as security and computer vision. The recognition process should run in real time so that decisions can be made quickly. Principal Component Analysis (PCA) is a feature extraction technique widely used in facial recognition applications; it projects images into a new face space and thereby reduces the dimensionality of the image. However, PCA consumes considerable processing time due to its computationally intensive nature. Hence, this paper proposes two parallel architectures that accelerate the training and testing phases of the PCA algorithm by exploiting the benefits of distributed-memory architectures. The experimental results show that the proposed architectures achieve linear speed-up and system scalability on different data sizes from the Facial Recognition Technology (FERET) database.
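The serial PCA pipeline being parallelized (training: compute the face-space basis; testing: project and match) can be sketched as follows. This is a minimal single-process sketch of standard eigenface-style PCA, assuming NumPy and synthetic data; it does not reproduce the paper's distributed-memory design, whose per-phase decomposition is not detailed in the abstract.

```python
import numpy as np

def train_pca(faces, k):
    """Training phase: faces is an (n_samples, n_pixels) matrix of
    flattened images. Returns the mean face and the top-k principal
    axes (eigenfaces) obtained from an SVD of the centered data."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def project(images, mean, eigenfaces):
    """Project images into the k-dimensional face space."""
    return (images - mean) @ eigenfaces.T

def recognize(probe, gallery_proj, mean, eigenfaces):
    """Testing phase: nearest-neighbor match in face space;
    returns the index of the closest gallery image."""
    p = project(probe[None, :], mean, eigenfaces)
    d = np.linalg.norm(gallery_proj - p, axis=1)
    return int(d.argmin())

# Synthetic gallery of 10 "faces" with 64 pixels each.
rng = np.random.default_rng(2)
gallery = rng.normal(size=(10, 64))
mean, eig = train_pca(gallery, 5)
gallery_proj = project(gallery, mean, eig)
probe = gallery[2] + 0.01 * rng.normal(size=64)
match = recognize(probe, gallery_proj, mean, eig)
```

Both phases are dominated by dense matrix products over independent rows, which is what makes the row-wise partitioning across distributed-memory workers described in the abstract effective.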