In this work, we are motivated by the desire to classify skin lesions as malignant or benign from color photographic slides of the lesions. We therefore combine color images of skin lesions, image processing techniques, and an artificial neural network classifier to distinguish melanoma from benign pigmented lesions. As the first step of the data set analysis, a preprocessing sequence removes noise and undesired structures from the color image. Second, an automated segmentation approach localizes suspicious lesion regions by region growing, after a preliminary step based on fuzzy sets. Then, quantitative image analysis measures a series of candidate attributes expected to carry enough information to differentiate melanomas from benign lesions. Finally, the selected features are supplied to an artificial neural network that classifies the lesion as malignant or benign. On a preliminary balanced training/testing set, our approach achieves 79.1% correct classification of malignant and benign lesions on real skin lesion images.
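The segmentation stage described above relies on region growing. As a minimal sketch of that idea (not the authors' implementation, which includes a fuzzy-set preliminary step), the following grows a region from a seed pixel by accepting neighbors whose intensity stays within a tolerance of the seed value; the image, seed, and tolerance here are illustrative:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.15):
    """Flood-fill region growing on a 2-D float image in [0, 1].

    Pixels are added while their intensity is within `tol` of the
    seed pixel's intensity. Returns a boolean mask of the region.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        # 4-connected neighborhood
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

# Synthetic "lesion": a dark 10x10 square on a bright background.
img = np.ones((32, 32))
img[10:20, 10:20] = 0.2
mask = region_grow(img, seed=(15, 15), tol=0.1)
print(mask.sum())  # 100: only the dark square is grown
```

In practice the seed and tolerance would come from the preprocessing and fuzzy preliminary step, rather than being fixed by hand.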
Magnetic resonance imaging (MRI) and positron emission tomography (PET) image fusion is a recent hybrid modality used in several oncology applications. The MRI image shows the brain tissue anatomy but contains no functional information, while the PET image indicates brain function but has low spatial resolution. An ideal MRI-PET fusion method preserves the functional information of the PET image and adds the spatial characteristics of the MRI image with the least possible spatial distortion. In this context, the authors propose an efficient MRI-PET image fusion approach based on the non-subsampled shearlet transform (NSST) and a simplified pulse-coupled neural network model (S-PCNN). First, the PET image is transformed into its independent YIQ components. Then, the source registered MRI image and the Y-component of the PET image are decomposed into low-frequency (LF) and high-frequency (HF) subbands using NSST. LF coefficients are fused using weighted region standard deviation (SD) and local energy, while HF coefficients are combined using the S-PCNN, which is driven by an adaptive linking-strength coefficient. Finally, the inverse NSST and inverse YIQ transforms are applied to obtain the fused image. Experimental results demonstrate that the proposed method outperforms other current approaches in terms of fusion mutual information, entropy, SD, fusion quality, and spatial frequency.
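The YIQ step above separates PET luminance (Y) from chrominance (I, Q) so that only the Y channel needs to be fused with the MRI. A minimal sketch using the standard NTSC RGB-to-YIQ matrix follows; the simple averaging fusion rule is a placeholder for illustration only (the paper fuses NSST subbands with SD/energy weights and the S-PCNN), and the toy MRI/PET arrays are stand-ins for registered slices:

```python
import numpy as np

# NTSC RGB -> YIQ transform (approximate standard coefficients).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)  # exact numerical inverse for round-trip

def rgb_to_yiq(rgb):
    """rgb: (H, W, 3) float array in [0, 1] -> (H, W, 3) YIQ array."""
    return rgb @ RGB2YIQ.T

def yiq_to_rgb(yiq):
    return yiq @ YIQ2RGB.T

# Toy stand-ins for a grayscale MRI slice and a pseudo-color PET slice.
rng = np.random.default_rng(0)
mri = rng.random((8, 8))
pet = rng.random((8, 8, 3))

yiq = rgb_to_yiq(pet)
# Placeholder fusion rule: average PET luminance with the MRI intensity.
yiq[..., 0] = 0.5 * (yiq[..., 0] + mri)
fused = yiq_to_rgb(yiq)  # chrominance (I, Q) passes through untouched
print(fused.shape)  # (8, 8, 3)
```

Because I and Q are left untouched, the PET color coding (its functional information) survives the fusion, which is the motivation for working in YIQ space in the first place.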