The number of cardiac magnetic resonance (CMR) images produced for a patient is overwhelming, which makes expert evaluation labour-intensive, time-consuming, and prone to contour-detection errors. Current practice is for experts to evaluate CMR images either manually or semi-automatically; ideally, an automatic evaluation would assist cardiac experts in their clinical assessment. Automatic segmentation of the left ventricle (LV), i.e. the endocardium (Endo) and epicardium (Epi), is currently lacking. Two widely used segmentation approaches, the Level Set Model (LSM) and the Variational LSM (VLSM), are popular because they iterate contour shapes quickly; however, owing to the irregular LV shape and the papillary muscles, both models suffer from reinitialisation problems on the detected contours. This paper presents a combined Sign-Euclidean distance function that measures distance from the LV centre to the endocardium contour and onward to the Epi-contour. The distance-measurement function, based on a distance-mapping technique, is guided along the curve lines by an energy function to reduce segmentation error. Experiments were conducted on the Sunnybrook and Pusat Jantung Sarawak (PJS) cardiac datasets. The results show that the Sign-Euclidean distance function reduces segmentation error between segmented contours; the highest errors identified in the Endo-contours were HF-I-05 (Endo 14.74), HF-NI-11 (Endo 8.79), and P-A004 (Endo 8.04), and in the Epi-contours HF-I-08 (Epi 3.08), HF-NI-07 (Epi 2.81), and P-A001 (Epi 3.34). The aim is thus a combined Sign-Euclidean distance function that measures distances among segmented contours and reduces segmentation error against the ground-truth contours.
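The abstract names a signed Euclidean distance-mapping technique but gives no code. Purely as an illustration (not the authors' implementation), here is a minimal sketch of a signed Euclidean distance map of the kind level-set pipelines commonly build on, assuming a binary mask of the endocardium region; the function and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_euclidean_distance(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map for a binary region mask.

    Pixels inside the region receive negative distances, pixels outside
    receive positive distances, so the zero level set traces the contour.
    """
    inside = distance_transform_edt(mask)     # distance of inside pixels to the background
    outside = distance_transform_edt(~mask)   # distance of outside pixels to the region
    return outside - inside

# Toy example: a filled circle standing in for the endocardium region.
yy, xx = np.mgrid[0:64, 0:64]
endo_mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
phi = signed_euclidean_distance(endo_mask)
print(phi[32, 32] < 0, phi[0, 0] > 0)  # centre lies inside, corner lies outside
```

Because such a map stays a valid distance function everywhere, initialising a level-set function from it is one common way to sidestep the reinitialisation problem the abstract mentions.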
Melanoma is a type of skin cancer with detrimental ramifications for human health, but with early diagnosis it can often be cured. Accurate identification of a skin lesion is very challenging because the difference between lesion and surrounding skin can be minute, and it is difficult to differentiate among skin cancer types due to their visual similarity. Hence an autonomous system for diagnosing the true skin cancer type is very useful. In this article, we leverage ensemble learning by combining features from deep learning architectures with traditional feature-extraction approaches. For segmentation, we use two feature-extraction pipelines: a traditional split-and-merge approach, and deep learning algorithms based on contextual encoding with an attention mechanism. We then combine the features of both architectures and predict the segmented region through an intersection-over-union mechanism. The segmented region is subsequently classified into three types of skin lesion using hybrid features of AlexNet and VGG-16 through a transfer learning approach. The evaluation was performed on the ISIC and PH2 datasets, for which the achieved segmentation accuracies are 97.8% and 96.7%, respectively. Moreover, the hybrid classification network attains 98.2% accuracy.
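As a hedged illustration of the intersection-over-union step described above (not the authors' code), the sketch below compares two binary segmentation masks and fuses the two pipeline outputs; the agreement threshold and the fallback to the deep pipeline are assumptions made for the example.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def fuse_masks(trad_mask: np.ndarray, deep_mask: np.ndarray,
               threshold: float = 0.5) -> np.ndarray:
    """Keep pixels both pipelines agree on when their IoU is high enough."""
    if iou(trad_mask, deep_mask) >= threshold:
        return np.logical_and(trad_mask, deep_mask)
    return deep_mask  # illustrative fallback when the pipelines disagree
```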
Image-based data integration in eHealth and the life sciences is typically concerned with the method used for anatomical space mapping, which is needed to retrieve, compare, and analyse large volumes of biomedical data. When mapping one image onto another, a mechanism is used to match and find the corresponding spatial regions that carry the same meaning in the source and matching images. Image-based data integration is useful for integrating data of various information structures. Here we discuss a broad range of issues related to such integration, review exemplary work on image representation and mapping, and discuss the challenges these techniques may bring.
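The abstract does not specify a matching mechanism; as one concrete but purely illustrative possibility, here is a minimal sketch of finding candidate corresponding regions between a source and a matching image with ORB keypoints and brute-force descriptor matching in OpenCV (file paths and the `top_k` parameter are hypothetical).

```python
import cv2

def match_regions(source_path: str, target_path: str, top_k: int = 20):
    """Find candidate corresponding locations between two images
    using ORB keypoints and brute-force descriptor matching."""
    src = cv2.imread(source_path, cv2.IMREAD_GRAYSCALE)
    dst = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    kp_src, des_src = orb.detectAndCompute(src, None)
    kp_dst, des_dst = orb.detectAndCompute(dst, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_src, des_dst), key=lambda m: m.distance)
    # Each match links a pixel location in the source image to one in the target.
    return [(kp_src[m.queryIdx].pt, kp_dst[m.trainIdx].pt) for m in matches[:top_k]]
```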
Image search is a challenging process in the field of Content-Based Image Retrieval (CBIR). Search-by-example, search-by-keyword, and search-by-sketch methods seldom provide a user interface that allows users to formulate their search intent accurately and easily. To overcome this issue, a novel image search interface, the Semantic Visual Query Builder (SeVQer), is proposed as a non-verbal interface that allows users to drag and drop from the provided image data to formulate a query. The drag-and-drop mechanism minimizes the difficulty of verbalizing a query image into keywords or of sketching a correct drawing of it. SeVQer was implemented and compared with three image search methods (search-by-example, search-by-keyword, and search-by-sketch) in terms of task completion time and user satisfaction, using traffic images. SeVQer achieved a statistically significant lower task completion time, averaging 28 s, a promising 50% reduction over search-by-sketch (average of 56 s). The significance of this work is two-fold: the SeVQer user interface allows users to easily formulate intent-specific queries, while the novel architecture and methodology reduce the semantic gap in general.
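The abstract does not state which significance test was used; purely as an illustration of how such a task-completion-time comparison might be run, here is a sketch using an independent-samples t-test on hypothetical timing data (the numbers below are invented for the example, not the study's measurements).

```python
from scipy import stats

# Hypothetical task-completion times in seconds (not the study's data).
sevqer_times = [27, 30, 26, 29, 28, 31, 25, 28]
sketch_times = [55, 58, 54, 60, 53, 57, 56, 55]

t_stat, p_value = stats.ttest_ind(sevqer_times, sketch_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```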