Retrieving medical images from a large inter-domain dataset requires multiple high-efficiency processing models, including, but not limited to, image classification, domain-specific feature extraction and selection, ranking, and post-processing. A wide variety of system models have been designed to perform these tasks, but they have limited accuracy and retrieval performance due to improper cross-domain feature processing. To improve the performance of cross-domain medical image retrieval systems, this text proposes a transfer learning mechanism that learns from the features of one domain and applies the trained models to other domains. The proposed method uses a combination of VGGNet19, AlexNet, InceptionNet and XceptionNet models for ensemble learning, along with wavelet and bag of features (WBoF) for efficient feature extraction. Each of the individual models was applied to different medical domains, and their retrieval accuracies were evaluated. Based on this evaluation, it is observed that VGGNet19 performs better on Computed Tomography (CT) images, AlexNet on Magnetic Resonance Imaging (MRI) images, InceptionNet on Positron Emission Tomography (PET) images, while XceptionNet has better retrieval performance for ultrasound (USG) images. Using this observation, a highly efficient augmentation model is designed, which achieves an accuracy of 98.06%, a precision of 65.9%, a recall of 76.1%, and an area under the curve (AUC) of 98.9% across different datasets. This performance is evaluated on a wide variety of medical image datasets, including the Center for In Vivo Microscopy (CIVM) Embryonic and Neonatal Mouse (H&E, MR) data, the LONI image data archive, The Open Access Series of Imaging Studies (OASIS), and CT scans for Colon Cancer (CSCC). It was observed that the proposed model outperforms most recent state-of-the-art models and achieves consistent parametric results across multi-domain medical images.
Categories: H.3.1, H.3.2, H.3.3, H.3.7, H.5.1
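A minimal sketch of the modality-routed ensemble described in this abstract, assuming ImageNet-pretrained torchvision backbones as stand-ins for the trained models: each modality is sent to the backbone reported as strongest for it, and retrieval is ranked by cosine similarity over the extracted descriptors. The wavelet/bag-of-features (WBoF) stage and the exact fusion rule are not specified in the abstract and are omitted; Xception is likewise not in torchvision (timm's 'xception' could be substituted).

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Modality -> backbone mapping taken from the reported per-domain results.
BACKBONES = {
    "CT":  models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1),
    "MRI": models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1),
    "PET": models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1),
}

PREPROCESS = T.Compose([
    T.Resize((299, 299)),  # large enough for InceptionV3; VGG/AlexNet adapt via pooling
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_path: str, modality: str) -> torch.Tensor:
    """Extract a normalized descriptor for one image using its modality's backbone."""
    model = BACKBONES[modality].eval()
    x = PREPROCESS(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(model(x), dim=1).squeeze(0)

@torch.no_grad()
def retrieve(query_path: str, gallery: list[tuple[str, str]], modality: str, k: int = 5):
    """Rank gallery entries (path, modality) by cosine similarity to the query."""
    q = embed(query_path, modality)
    scored = [(path, float(q @ embed(path, m))) for path, m in gallery if m == modality]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```

In practice the descriptors would be precomputed and cached for the whole gallery rather than recomputed per query.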
Recognizing the actions performed by a person is one of the most successful applications of pattern recognition. Detecting actions from a moving camera, which introduces dynamic view changes, relies on spatio-temporal information at multiple temporal scales. In this paper, we present a system that recognizes actions based on multi-view information, with features extracted at various temporal scales. A Gaussian Mixture Model (GMM) and a Prewitt edge filter are used to separate the background and foreground of each frame, and a Nearest Mean Classifier is used to classify the feature vectors of the moving objects. Experimental results on the KTH dataset demonstrate an accuracy of 98%.
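A rough sketch of the pipeline this abstract outlines, under stated assumptions: GMM background subtraction, Prewitt edge filtering of the foreground, and a nearest-mean classifier over simple shape features. The multi-view, multi-scale features actually used on the KTH dataset are not detailed in the abstract; Hu moments serve here purely as a stand-in descriptor.

```python
import cv2
import numpy as np

# Prewitt kernels (OpenCV has no built-in Prewitt operator).
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
PREWITT_Y = PREWITT_X.T

def frame_descriptor(frame, bg_subtractor):
    """Foreground mask via GMM, Prewitt edges, then Hu-moment shape features."""
    fg = bg_subtractor.apply(frame).astype(np.float32)   # GMM foreground mask
    gx = cv2.filter2D(fg, -1, PREWITT_X)
    gy = cv2.filter2D(fg, -1, PREWITT_Y)
    edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    return cv2.HuMoments(cv2.moments(edges)).flatten()

def video_descriptor(path):
    """Average per-frame descriptors over a clip."""
    cap, bg = cv2.VideoCapture(path), cv2.createBackgroundSubtractorMOG2()
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        feats.append(frame_descriptor(frame, bg))
    cap.release()
    return np.mean(feats, axis=0)

class NearestMeanClassifier:
    """Assign each clip to the action class whose mean descriptor is closest."""
    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(np.asarray(X)[:, None, :] - self.means_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
```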
Technological advances have evolved in all directions, including the biomedical field, as a result of which a record number of lives are saved every day. This advancement now goes beyond the level of tools: with the help of new tools, doctors can also detect diseases, which reduces response time. In this paper, we work on one such technique, which helps retrieve similar types of images based on their features. Features such as texture features, LBP features, and retrieval features are processed with hash coding and relevance feedback to obtain the final results. The framework produces its output using hash-coding classifiers that predict the image from the image database. The images are classified at a global level with the help of multiple low-level features.
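An illustrative sketch, not the paper's implementation, of retrieval by LBP texture features compressed into binary hash codes and compared by Hamming distance. The paper's relevance-feedback loop and its specific hash-coding classifier are not described in the abstract and are left out; the median-threshold hashing used here is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.io import imread

def lbp_histogram(path, points=8, radius=1):
    """Uniform-LBP histogram as a low-level texture descriptor."""
    gray = imread(path, as_gray=True)
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def hash_code(descriptor):
    """Binarize against the median to obtain a compact hash code."""
    return (descriptor > np.median(descriptor)).astype(np.uint8)

def retrieve(query_path, database_paths, k=5):
    """Rank database images by Hamming distance between hash codes."""
    q = hash_code(lbp_histogram(query_path))
    scored = [(p, int(np.count_nonzero(q != hash_code(lbp_histogram(p)))))
              for p in database_paths]
    return sorted(scored, key=lambda s: s[1])[:k]
```

Relevance feedback would typically rerank these results by reweighting features according to which retrieved images the user marks as relevant.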