Melanoma, the most threatening type of skin cancer, is on the rise. In this paper, an implementation of a deep-learning system on a computer server equipped with a graphics processing unit (GPU) is proposed for the detection of melanoma lesions. Clinical (non-dermoscopic) images are used in the proposed system, which could assist a dermatologist in the early diagnosis of this type of skin cancer. Input clinical images, which may contain illumination and noise effects, are first preprocessed to reduce such artifacts. The enhanced images are then fed to a pre-trained convolutional neural network (CNN), a member of the family of deep learning models. The CNN classifier, trained on a large number of samples, distinguishes between melanoma and benign cases. Experimental results show that the proposed method is superior in diagnostic accuracy to state-of-the-art methods.
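The preprocessing stage described above can be sketched as follows. This is a minimal, hypothetical stand-in (the paper does not specify its exact illumination-correction algorithm): a smooth illumination field is estimated with a local box-mean and divided out, using only NumPy.

```python
import numpy as np

def correct_illumination(img, k=15):
    """Reduce slowly varying illumination in a grayscale image in [0, 1].

    A hypothetical sketch of the preprocessing step: estimate the
    illumination field as a k-by-k local mean (box blur) and divide
    it out, then renormalize to [0, 1].
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    illum = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # local mean around (i, j) approximates the illumination
            illum[i, j] = padded[i:i + k, j:j + k].mean()
    corrected = img / np.maximum(illum, 1e-6)
    return corrected / corrected.max()
```

After this step, the flattened image would be resized and passed to the pre-trained CNN; the division makes regions under uneven lighting more comparable before classification.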
The elderly population is growing in most countries, and many seniors live alone at home. Falls are among the most dangerous events for them and often require immediate medical care. Automatic fall detection systems could help older people and patients live independently. Vision-based systems have an advantage over wearable devices: they extract features from video sequences and classify fall versus normal activities. These features usually depend on the camera's view direction, and using several cameras to overcome this dependence increases the complexity of the final system. In this paper, we propose to use variations in silhouette area obtained from only one camera. We use a simple background separation method to find the silhouette and show that the proposed feature is view invariant. The extracted feature is fed into a support vector machine for classification. Simulations of the proposed method on a publicly available dataset show promising results.
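The feature pipeline described above can be sketched in a few lines. This is an illustrative approximation, not the paper's exact implementation: the silhouette is found by thresholding the difference from a background frame, and the feature is the frame-to-frame change in silhouette area, normalized by the current area so it is roughly invariant to the subject's distance from the camera. (In the paper, this feature is then classified by a support vector machine; that step is omitted here.)

```python
import numpy as np

def silhouette_areas(frames, background, thresh=0.1):
    """Foreground pixel count per frame via simple background subtraction.

    `frames` is a sequence of grayscale arrays; `background` is a
    reference frame of the empty scene.
    """
    return np.array([(np.abs(f - background) > thresh).sum() for f in frames])

def area_variation(areas, eps=1e-6):
    """Normalized frame-to-frame change in silhouette area.

    Dividing by the current area makes the feature approximately
    independent of the silhouette's absolute size, which is what
    gives it its (rough) view invariance.
    """
    return np.abs(np.diff(areas)) / (areas[:-1] + eps)
```

A fall produces a rapid change in the projected silhouette area, so the variation signal peaks at the fall transition regardless of where the camera is placed.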
Ultrasound imaging is a standard examination during pregnancy that can be used to measure specific biometric parameters for prenatal diagnosis and estimation of gestational age. Fetal head circumference (HC) is one of the significant factors in determining fetal growth and health. In this paper, a multi-task deep convolutional neural network is proposed for automatic segmentation and estimation of the HC ellipse by minimizing a compound cost function composed of the segmentation dice score and the MSE of the ellipse parameters. Experimental results on a fetal ultrasound dataset covering different trimesters of pregnancy show that the segmentation results and the extracted HC match the radiologist annotations well. The obtained dice scores for fetal head segmentation and the accuracy of the HC estimates are comparable to the state-of-the-art.

I. INTRODUCTION

Ultrasound (US) imaging is a safe, non-invasive procedure for examining internal body organs. Compared to other imaging tools, such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound is cheaper, portable, and more prevalent [1]. It helps diagnose the causes of pain, swelling, and infection in internal organs, for the evaluation and treatment of medical conditions [2]. Ultrasound imaging has become a routine checkup method for prenatal diagnosis. It is used to investigate and measure fetal biometric parameters, such as the baby's abdominal circumference, head circumference, biparietal diameter, femur and humerus length, and crown-rump length. Furthermore, the fetal head circumference (HC) is measured for estimating gestational age, size, and weight, for growth monitoring, and for detecting fetal abnormalities [3]. Despite all the benefits and typical applications of US imaging, this modality suffers from various artifacts such as motion blurring, missing boundaries, acoustic shadows, speckle noise, and low signal-to-noise ratio.
This makes US images challenging to interpret, requiring expert operators. As shown in US image samples of
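The compound cost function mentioned in the abstract combines a segmentation term (dice score) with a regression term (MSE over the ellipse parameters). A minimal NumPy sketch is given below; the relative weight `w` and the five-parameter ellipse encoding (center x, center y, semi-axes a and b, rotation angle) are assumptions, since the paper's abstract only names the two components.

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice overlap between two binary masks (1.0 = perfect match)."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def compound_loss(pred_mask, true_mask, pred_ellipse, true_ellipse, w=1.0):
    """(1 - dice) + w * MSE over ellipse parameters.

    Assumed form of the multi-task objective: the segmentation head
    is penalized through the dice score and the regression head
    through the MSE of the HC ellipse parameters; `w` balances the
    two terms (its value is a hypothetical choice).
    """
    mse = np.mean((np.asarray(pred_ellipse) - np.asarray(true_ellipse)) ** 2)
    return (1.0 - dice_score(pred_mask, true_mask)) + w * mse
```

Minimizing both terms jointly encourages the network to produce masks whose boundary agrees with the regressed ellipse, rather than optimizing either task in isolation.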
Abstract-Melanoma is among the most aggressive types of cancer. However, it is highly curable if detected in its early stages, so prescreening of suspicious moles and lesions for malignancy is of great importance. Detection can be performed on images captured by standard cameras, which are preferable due to their low cost and availability. One important step in the computerized evaluation of skin lesions is accurate detection of the lesion's region, i.e., segmentation of an image into two regions: lesion and normal skin. Accurate segmentation can be challenging due to burdens such as illumination variation and low contrast between lesion and healthy skin. In this paper, a method based on deep neural networks is proposed for accurate extraction of the lesion region. The input image is preprocessed, and its patches are then fed to a convolutional neural network (CNN). Local texture and global structure of the patches are processed in order to assign pixels to the lesion or normal class. A method for effective selection of training patches is used for more accurate detection of the lesion's border, and the output segmentation mask is refined by post-processing operations. Qualitative and quantitative experimental results demonstrate that our method can outperform other state-of-the-art algorithms in the literature.

Index Terms-Convolutional neural network, deep learning, medical image segmentation, melanoma, skin cancer.
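The patch-based segmentation scheme in this abstract can be sketched as follows: every pixel is labeled by classifying the patch centered on it. The `classify` callable below is a placeholder for the trained CNN, and the patch size is a hypothetical choice; only the per-pixel patch-classification structure is taken from the abstract.

```python
import numpy as np

def extract_patch(img, i, j, size=31):
    """Square patch centered at pixel (i, j), edge-padded at borders."""
    half = size // 2
    padded = np.pad(img, half, mode="edge")
    # padded[i + half, j + half] corresponds to img[i, j]
    return padded[i:i + size, j:j + size]

def segment(img, classify, size=31):
    """Label each pixel as lesion (1) or normal skin (0).

    `classify` maps a patch to a class label; it stands in for the
    trained CNN that sees the patch's local texture and structure.
    """
    mask = np.zeros(img.shape, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            mask[i, j] = classify(extract_patch(img, i, j, size))
    return mask
```

In practice the per-pixel loop would be replaced by batched inference, and the resulting mask would then go through the post-processing refinement the abstract mentions (e.g., removing small spurious components).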