Achieving a good recognition rate for text in video action images is challenging because such images contain multiple text types against unpredictable backgrounds. We propose a new method for classifying captions (text superimposed during editing) and scene texts (text that is part of the image itself) in video images of five action classes: Yoga, Concert, Teleshopping, Craft, and Recipe. The proposed method introduces a new fusion criterion based on DCT and Fourier coefficients to extract features that capture the good clarity and visibility of captions, separating them from scene texts. The variances of the coefficients at corresponding pixels of the DCT and Fourier images are computed to derive the respective weights. The weights and coefficients are then used to generate a fused image. Furthermore, the proposed method estimates the sparsity of the Canny edge image of each fused image to derive rules for classifying caption and scene texts. Finally, the proposed method is evaluated on images of the five above-mentioned action classes to validate the derived rules. Comparative studies with state-of-the-art methods on standard databases show that the proposed method outperforms existing methods in terms of classification. Recognition experiments before and after classification show that the recognition rate improves significantly after classification.
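The variance-weighted fusion of DCT and Fourier coefficient images described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, the use of local (rather than per-pixel) variance, and the normalisation are assumptions.

```python
import numpy as np
from scipy.fft import dctn, fft2
from scipy.ndimage import uniform_filter

def fuse_dct_fourier(image, window=8):
    """Fuse DCT and Fourier coefficient images with variance-derived weights.

    Sketch only: the paper's exact weighting scheme may differ.
    """
    img = image.astype(np.float64)
    dct_img = np.abs(dctn(img, norm="ortho"))   # DCT coefficient magnitudes
    fft_img = np.abs(fft2(img, norm="ortho"))   # Fourier coefficient magnitudes

    def local_variance(a):
        # Variance in a sliding window: E[a^2] - (E[a])^2, clipped at zero.
        mean = uniform_filter(a, size=window)
        mean_sq = uniform_filter(a * a, size=window)
        return np.maximum(mean_sq - mean * mean, 0.0)

    w_dct = local_variance(dct_img)
    w_fft = local_variance(fft_img)
    total = w_dct + w_fft + 1e-12  # avoid division by zero

    # Weighted combination of the two coefficient images.
    return (w_dct * dct_img + w_fft * fft_img) / total
```

The fused image would then be passed to an edge detector (e.g. Canny) so that edge sparsity can be measured for the caption/scene-text rules.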
As more and more office documents are captured, stored, and shared in digital form, and as image-editing software becomes increasingly powerful, there is growing concern about document authenticity. To help prevent illicit activities, this paper presents a new method for detecting altered text in document images. The proposed method explores the relationship between the positive and negative DCT coefficients to capture the distortions introduced by tampering: images reconstructed from the positive and negative coefficients are fused, yielding a Positive-Negative DCT coefficient Fusion (PNDF) image. To exploit spatial information, we fuse the R, G, and B color channels of the input image, yielding an RGB Fusion (RGBF) image. The same fusion operation is then applied to the PNDF and RGBF images, producing a single fused image for each input. A histogram computed over the fused image provides a feature vector, which is fed to a deep neural network for classifying altered text images. The proposed method is tested on our own dataset and on standard datasets from the ICPR 2018 Fraud Contest, the Altered Handwriting (AH) dataset, and faked IMEI-number images. The results show that the proposed method is effective and outperforms existing methods irrespective of image type.
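The PNDF/RGBF pipeline above can be sketched in a few functions. Note the hedges: the variance-weighted fusion rule, the grayscale conversion, and the histogram bin count are assumptions standing in for the paper's exact choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def _variance_fuse(images, eps=1e-12):
    """Fuse images by weighting each with its global variance (assumed rule)."""
    weights = np.array([im.var() for im in images]) + eps
    stack = np.stack(images)
    return np.tensordot(weights / weights.sum(), stack, axes=1)

def pndf(gray):
    """Positive-Negative DCT coefficient Fusion (PNDF) sketch."""
    coeffs = dctn(gray.astype(np.float64), norm="ortho")
    img_pos = idctn(np.where(coeffs > 0, coeffs, 0.0), norm="ortho")  # positives only
    img_neg = idctn(np.where(coeffs < 0, coeffs, 0.0), norm="ortho")  # negatives only
    return _variance_fuse([img_pos, img_neg])

def rgbf(rgb):
    """RGB Fusion (RGBF): fuse the three colour channels."""
    channels = [rgb[..., c].astype(np.float64) for c in range(3)]
    return _variance_fuse(channels)

def fused_feature_vector(rgb, bins=64):
    """Fuse PNDF and RGBF images, then extract a histogram feature vector."""
    gray = rgb.astype(np.float64).mean(axis=-1)  # assumed grayscale conversion
    fused = _variance_fuse([pndf(gray), rgbf(rgb)])
    hist, _ = np.histogram(fused, bins=bins)
    return hist / max(hist.sum(), 1)  # normalised histogram as the feature vector
```

The resulting feature vector would be the input to the deep neural network classifier mentioned in the abstract.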
Rapid advances in artificial intelligence have made it possible to produce forgeries good enough to fool an average user. As a result, there is growing interest in developing robust methods to counter such forgeries. This study presents a new Fourier spectrum-based method for detecting forged text in video images. The authors' premise is that the brightness distribution and the spectrum shape exhibit irregular patterns (inconsistencies) for forged text, while appearing more regular for original text. The method divides the spectrum of an input image into sectors and tracks to highlight these effects. Specifically, the positive and negative coefficients in each sector and track are extracted to quantify the brightness distribution. Variations in the shape of the spectrum are analysed by determining the angular relationship between the principal axes and the sectors/tracks of the spectrum. These two features are then combined to detect forged text in images of IMEI (International Mobile Equipment Identity) numbers and documents. For evaluation, the authors' own video dataset and standard datasets are used, namely the IMEI number, ICPR 2018 Fraud Document Contest, and a natural scene text dataset. Experimental results show that the proposed method outperforms existing methods in terms of average classification rate and F-score.
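The sector/track decomposition of the Fourier spectrum can be sketched as below. This is an illustrative reading of the abstract, assuming equal-angle sectors and equal-radius tracks centred on the DC component; the principal-axis analysis is not reproduced.

```python
import numpy as np

def sector_track_stats(image, n_sectors=8, n_tracks=4):
    """Count positive and negative real Fourier coefficients per sector/track cell.

    Sketch only: sector/track boundaries and counts are assumptions, not the
    paper's exact features.
    """
    spec = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - h // 2, xx - w // 2

    radius = np.hypot(dy, dx)                       # distance from spectrum centre
    angle = np.mod(np.arctan2(dy, dx), 2 * np.pi)   # angle in [0, 2*pi)

    # Map every frequency bin to a radial track and an angular sector.
    track = np.minimum((radius / (radius.max() + 1e-12) * n_tracks).astype(int),
                       n_tracks - 1)
    sector = np.minimum((angle / (2 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)

    real = spec.real
    stats = np.zeros((n_sectors, n_tracks, 2), dtype=int)
    for s in range(n_sectors):
        for t in range(n_tracks):
            cell = real[(sector == s) & (track == t)]
            stats[s, t, 0] = int((cell > 0).sum())  # positive coefficients
            stats[s, t, 1] = int((cell < 0).sum())  # negative coefficients
    return stats
```

Irregular brightness distributions in forged regions would show up as uneven positive/negative counts across the sector-track cells.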