The objective of multi-focus image fusion is to combine relevant information from multiple images of the same scene, each captured with a different focus, into a single sharper image better suited to visual sensor networks. Both natural and artificially generated multi-focus color images are considered for fusion. Existing fusion methods based on multi-scale and multi-resolution transforms have proved effective for multi-focus image fusion, but they suffer from the computational complexity of kernel calculation. In this paper, multi-focus color image fusion based on the Walsh-Hadamard Transform and the sum-modified-Laplacian focus measure is proposed. The Walsh-Hadamard Transform is a non-sinusoidal orthogonal transform with symmetry and separability properties, which make it well suited to image fusion. The sum-modified-Laplacian focus measure helps produce a sharper fused image. The performance of the proposed method is evaluated in terms of both reference and no-reference measures. Experimental results indicate that the proposed method not only preserves sharp details in the fused image but also reduces computational complexity.
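The abstract above does not give implementation details, but the general idea of block-wise Walsh-Hadamard fusion guided by a sum-modified-Laplacian (SML) focus measure can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method; it assumes single-channel images whose dimensions are multiples of the block size, and all function names are hypothetical:

```python
import numpy as np

def hadamard(n):
    # Build an n x n Hadamard matrix (n must be a power of two)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sml(block):
    # Sum-modified-Laplacian focus measure over a block:
    # |2I(x,y) - I(x-1,y) - I(x+1,y)| + |2I(x,y) - I(x,y-1) - I(x,y+1)|
    p = np.pad(block.astype(float), 1, mode='edge')
    ml = (np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
          + np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]))
    return ml.sum()

def fuse(img_a, img_b, bs=8):
    # Block-wise fusion: per block, keep the WHT coefficients of the
    # source image with the larger SML (i.e., the sharper block).
    H = hadamard(bs)
    out = np.empty_like(img_a, dtype=float)
    for i in range(0, img_a.shape[0], bs):
        for j in range(0, img_a.shape[1], bs):
            a = img_a[i:i + bs, j:j + bs].astype(float)
            b = img_b[i:i + bs, j:j + bs].astype(float)
            wa, wb = H @ a @ H, H @ b @ H  # 2-D WHT of each block
            w = wa if sml(a) >= sml(b) else wb
            out[i:i + bs, j:j + bs] = H @ w @ H / (bs * bs)  # inverse WHT
    return out
```

Because the Hadamard matrix contains only ±1 entries, the transform needs no kernel computation beyond additions and subtractions, which is the complexity advantage the abstract alludes to.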
Owing to their many applications, optical character recognition (OCR) systems have been developed even for scripts such as Telugu. Because of the huge number of symbols in use, recognizing Telugu words is very complicated. Such systems store pre-computed symbol features in a database for recognition or retrieval. Hence, searching for Telugu script in the database is a challenging task due to the difficulty of extracting features from Telugu word images. Here, we implement novel Telugu script recognition and retrieval based on the extraction of texture features, using iterative partitioned clustering (IPC) to classify word images. In addition, the statistical feature extraction and similarity matching, which measure the similarity between trained and test templates, are further improved. For testing, we used noisy, corrupted, and occluded scanned documents as query word images, and also considered word images containing multi-conjunct vowel-consonant clusters. Our extensive simulation analysis shows that the proposed methodology finds the most relevant word images in the database even under such conditions. The proposed scheme outperforms conventional approaches in the literature in terms of mean Average Precision (mAP) and mean Average Recall (mAR).
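The retrieval pipeline described above (texture features per word image, then similarity matching against stored templates) can be illustrated with a deliberately simple sketch. The descriptor below (per-cell mean and standard deviation over a grid) and the cosine-similarity ranking are stand-ins chosen for clarity; the paper's IPC classifier and statistical features are not specified in the abstract:

```python
import numpy as np

def texture_features(img, grid=4):
    # Crude statistical texture descriptor: mean and std of each cell
    # in a grid x grid partition of the word image.
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            feats += [cell.mean(), cell.std()]
    return np.array(feats)

def retrieve(query, database, top_k=5):
    # Rank stored word images by cosine similarity to the query descriptor.
    q = texture_features(query)
    sims = []
    for idx, img in enumerate(database):
        d = texture_features(img)
        denom = np.linalg.norm(q) * np.linalg.norm(d) + 1e-12
        sims.append((float(q @ d / denom), idx))
    sims.sort(reverse=True)
    return [idx for _, idx in sims[:top_k]]
```

Ranked retrieval like this is what mAP and mAR then evaluate: precision and recall averaged over the ranked result lists of many queries.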
The abnormal growth of skin cells leads to skin cancer, which occurs due to unrepaired DNA damage in the skin cells. Worldwide, more than 1.23 lakh skin cancers are diagnosed every year, of which melanoma is the deadliest. The aim of this research work is the recognition of melanoma and the classification of skin lesions from dermoscopic images. Features are extracted from the dermoscopic images using shape, color, and texture descriptors. The texture features comprise the Gray Level Co-occurrence Matrix (GLCM) and statistical texture features computed from the coefficients of multiresolution transforms such as the Discrete Wavelet Transform (DWT), Curvelet, Tetrolet, and Spectral Graph Wavelet Transform (SGWT). The novelty of this work lies in using the SGWT for texture feature extraction. The superiority of the SGWT over conventional wavelet transforms is its ability to operate on irregularly shaped images: the SGWT is built on weighted graphs, which can be obtained as meshes for irregular shapes. In the present work, skin lesion images are taken from the International Skin Imaging Collaboration (ISIC) 2016 archive. The features obtained from the dermoscopic images are classified using Naïve Bayes, K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) classifiers. The proposed method, using shape-, color-, and texture-based features for melanoma recognition with the SGWT, achieves an Area Under the Curve (AUC) of 0.951, with an accuracy of 96.79 %, sensitivity of 88 %, and specificity of 98.26 %. Further, the AUCs for skin lesion classification tasks (Melanoma vs Nevus, Seborrheic Keratosis vs Squamous Cell Carcinoma, Melanoma vs Seborrheic Keratosis, Melanoma vs Basal Cell Carcinoma, and Nevus vs Basal Cell Carcinoma) using the SGWT are 0.895, 0.945, 0.9645, 0.945, and 0.98, respectively.
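Of the texture features listed above, the GLCM step is the most self-contained to illustrate. The sketch below computes a co-occurrence matrix for one pixel offset and a few standard Haralick-style statistics (contrast, energy, homogeneity, entropy); it is a minimal illustration, not the paper's feature set, and the quantization level and offset are assumed values:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    # Gray-level co-occurrence matrix for a single pixel offset (dx, dy),
    # after quantizing the image to `levels` gray levels.
    if img.max():
        q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    else:
        q = np.zeros_like(img, dtype=int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to a joint probability table

def glcm_features(img):
    # Contrast, energy, homogeneity, and entropy of the GLCM.
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([contrast, energy, homogeneity, entropy])
```

A perfectly uniform lesion patch yields zero contrast and maximal energy, while textured regions spread probability mass off the GLCM diagonal; the resulting feature vectors are what the Naïve Bayes, KNN, or SVM classifiers would consume.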