Melanoma is a skin cancer caused primarily by ultraviolet radiation from the sun and, when diagnosed late, carries a survival rate of only 15-20%. Late diagnosis allows the disease to progress to severe malignancy, with metastasis spreading to other organs such as the liver, lungs, and brain. Dermatologists analyze pigmented skin lesions to discriminate melanoma from other skin diseases; however, imprecise analysis leads to a series of biopsies and complicates treatment. Melanoma detection can be expedited through computer vision methods that analyze dermoscopic images automatically. However, the visual similarity between normal and infected skin regions, together with artifacts such as gel bubbles, hairs, and clinical marks, keeps the accuracy of these approaches low. To overcome these challenges, this article presents a melanoma detection and segmentation approach that significantly improves accuracy over state-of-the-art approaches. First, artifacts such as hairs, gel bubbles, and clinical marks are removed from the dermoscopic images by applying morphological operations, and image regions are sharpened. Next, for infected-region detection, the YOLOv4 object detector is tuned for melanoma detection to discriminate the highly correlated infected and non-infected regions. Once bounding boxes around the melanoma regions are obtained, the infected regions are extracted by applying an active contour segmentation approach. For performance evaluation, the proposed approach is evaluated on the ISIC2016 and ISIC2018 datasets, and the results are compared against state-of-the-art melanoma detection and segmentation techniques. The proposed approach achieves an average Dice score of 1.0 and a Jaccard coefficient of 0.989.
The segmentation results demonstrate the practical value of our method for developing clinical decision support systems for melanoma diagnosis, in contrast to state-of-the-art methods. The YOLOv4 detector can detect multiple skin diseases in the same patient as well as diseases across different patients.
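To illustrate the artifact-removal step described above, the sketch below applies a grayscale morphological closing (dilation followed by erosion), which fills thin dark structures such as hairs on a brighter skin background. This is a minimal pure-Python sketch under assumed parameters (a square 3x3 neighborhood), not the authors' implementation; production pipelines would typically use an optimized library routine instead.

```python
def dilate(img, r=1):
    """Grayscale dilation: each pixel becomes the max of its (2r+1)x(2r+1) neighborhood."""
    h, w = len(img), len(img[0])
    return [
        [
            max(
                img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1))
            )
            for x in range(w)
        ]
        for y in range(h)
    ]

def erode(img, r=1):
    """Grayscale erosion: each pixel becomes the min of its neighborhood."""
    h, w = len(img), len(img[0])
    return [
        [
            min(
                img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1))
            )
            for x in range(w)
        ]
        for y in range(h)
    ]

def morphological_closing(img, r=1):
    """Closing (dilation then erosion) removes dark structures thinner than the kernel."""
    return erode(dilate(img, r), r)

# Bright skin patch (intensity 200) crossed by a one-pixel-wide dark hair (intensity 20).
skin = [[200] * 5 for _ in range(5)]
for x in range(5):
    skin[2][x] = 20

cleaned = morphological_closing(skin)
```

After closing, the one-pixel-wide hair is absorbed into the surrounding bright region, so the lesion boundary seen by the downstream detector is no longer fragmented by hair strands.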
Human action recognition has the potential to predict the activities of an instructor within the lecture room. Evaluating lecture delivery can help teachers analyze their shortcomings and plan lectures more effectively. However, manual or peer evaluation is time-consuming and tedious, and it can be difficult to remember all the details of a lecture. Automating the evaluation of lecture delivery can therefore significantly improve teaching style. In this paper, we propose a feedforward learning model for instructor activity recognition in the lecture room. The proposed scheme represents a video sequence as a single frame that captures the motion profile of the instructor by observing the spatiotemporal relations among the video frames. First, we segment the instructor silhouettes from the input videos using graph-cut segmentation and generate a motion profile. These motion profiles are centered by extracting the largest connected component and then normalized. Next, the motion profiles are represented as feature maps by a deep convolutional neural network, and an extreme learning machine (ELM) classifier is trained on the resulting feature representations to recognize eight different instructor activities within the classroom. For evaluation, we created an instructor activity video dataset (IAVID-1) and compared our method against several state-of-the-art activity recognition methods. Two standard datasets, MuHAVI and IXMAS, were also used to evaluate the proposed scheme.
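The core idea of collapsing a video sequence into a single motion-profile frame can be sketched as follows. This is a simplified illustration assuming binary silhouette masks as input; the recency-weighting scheme shown here is an assumption for demonstration, not the paper's exact formulation.

```python
def motion_profile(frames):
    """Collapse a sequence of binary silhouette masks into one frame.

    Pixels touched by later frames receive higher values, so the single
    output image encodes both where and roughly when motion occurred
    (a recency-weighted motion-energy image; weighting scheme assumed).
    """
    h, w = len(frames[0]), len(frames[0][0])
    profile = [[0.0] * w for _ in range(h)]
    n = len(frames)
    for t, frame in enumerate(frames, start=1):
        weight = t / n  # later frames weigh more
        for y in range(h):
            for x in range(w):
                if frame[y][x]:
                    profile[y][x] = max(profile[y][x], weight)
    return profile

# Two 1x4 frames: a silhouette pixel moving one position to the right.
f1 = [[1, 0, 0, 0]]
f2 = [[0, 1, 0, 0]]
profile = motion_profile([f1, f2])
```

The resulting single frame (here `[[0.5, 1.0, 0.0, 0.0]]`) can then be fed to a convolutional network as an ordinary image, which is what makes this representation convenient for feature extraction.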
Abstract. Medical images contain precious anatomical information for clinical procedures, and improved visualization of a medical modality can contribute significantly to medical image analysis. This paper investigates the enhancement of monochromatic medical modalities into colorized images; improving the contrast of anatomical structures facilitates precise segmentation. The proposed framework starts with preprocessing to remove noise and enhance edge information. Color information is then embedded into each pixel of the subject image. The resulting image can portray anatomical information better than a conventional monochromatic image. To evaluate the performance of the colorized modality, the structural similarity index and the peak signal-to-noise ratio are computed. The superiority of the proposed colorization is further validated by segmentation experiments and comparison with grayscale monochromatic images.
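A common way to embed color information into a monochromatic image is intensity-based pseudo-coloring. The sketch below maps each 8-bit grayscale value through a "hot"-style colormap (black to red to yellow to white); this specific colormap is an assumption chosen for illustration and is not necessarily the transfer function used in the paper.

```python
def pseudo_color(intensity):
    """Map an 8-bit grayscale intensity to an RGB triple using a
    'hot'-style colormap: black -> red -> yellow -> white (assumed map).
    """
    t = intensity / 255.0
    r = min(1.0, 3.0 * t)
    g = min(1.0, max(0.0, 3.0 * t - 1.0))
    b = min(1.0, max(0.0, 3.0 * t - 2.0))
    return (round(255 * r), round(255 * g), round(255 * b))

def colorize(gray):
    """Apply the colormap pixel-wise to a 2-D grayscale image."""
    return [[pseudo_color(v) for v in row] for row in gray]

# A one-row gradient: dark, mid, and bright intensities.
colored = colorize([[0, 128, 255]])
```

Because nearby gray levels map to visibly different hues, low-contrast anatomical boundaries become easier to distinguish, which is the property the segmentation experiments exploit.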