This paper presents an automatic content-based image retrieval (CBIR) system for brain tumors on T1-weighted contrast-enhanced magnetic resonance images (CE-MRI). The key challenge in CBIR systems for MR images is the semantic gap between the low-level visual information captured by the MRI machine and the high-level information perceived by the human evaluator. Traditional feature extraction methods focus on either low-level or high-level features and rely on handcrafted features to reduce this gap. It is therefore necessary to design a feature extraction framework that reduces this gap by encoding and combining low-level and high-level features without handcrafted features. Deep learning offers powerful feature representations that capture both low-level and high-level information and embed the feature extraction phase in the learning process itself. We therefore propose a novel feature extraction framework based on the deep convolutional neural network VGG19 and apply closed-form metric learning to measure the similarity between the query image and database images. Furthermore, we adopt transfer learning and propose a block-wise fine-tuning strategy to enhance retrieval performance. Extensive experiments are performed on a publicly available CE-MRI dataset consisting of three types of brain tumors (glioma, meningioma, and pituitary tumor) collected from 233 patients, with a total of 3064 images across the axial, coronal, and sagittal views. Our method is more generic because it uses no handcrafted features; it requires minimal preprocessing, is robust under fivefold cross-validation, achieves a fivefold mean average precision of 96.13%, and outperforms state-of-the-art CBIR systems on the CE-MRI dataset.
INDEX TERMS: Brain tumor retrieval, block-wise fine-tuning, closed-form metric learning, convolutional neural networks, feature extraction, transfer learning.
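To illustrate the closed-form metric-learning step, the sketch below implements a KISSME-style Mahalanobis metric in NumPy. The abstract does not name the exact closed-form solution, so the choice of KISSME, the function names, and the regularization term are assumptions for illustration; the feature vectors would in practice come from the VGG19-based extractor.

```python
import numpy as np

def closed_form_metric(sim_diffs, dis_diffs, eps=1e-6):
    """Closed-form Mahalanobis metric in the spirit of KISSME:
    M = inv(cov of similar-pair differences) - inv(cov of dissimilar-pair
    differences). sim_diffs / dis_diffs are (n, d) arrays of feature
    differences for similar and dissimilar image pairs."""
    d = sim_diffs.shape[1]
    cov_s = sim_diffs.T @ sim_diffs / len(sim_diffs) + eps * np.eye(d)
    cov_d = dis_diffs.T @ dis_diffs / len(dis_diffs) + eps * np.eye(d)
    return np.linalg.inv(cov_s) - np.linalg.inv(cov_d)

def metric_distance(M, x, y):
    """Squared Mahalanobis distance between two feature vectors under M;
    smaller distances indicate more similar images."""
    diff = x - y
    return float(diff @ M @ diff)
```

At query time, database images would be ranked by `metric_distance` between the query's deep features and each database image's features.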
Aim and Objective:
Cancer is a dangerous disease worldwide, caused by somatic mutations in the genome. Early-stage diagnosis of this deadly disease is a relatively new clinical application of microarray data. In DNA microarray technology, gene expression data have high dimensionality with a small sample size. Therefore, the development of efficient and robust feature selection methods that identify a small set of genes to achieve better classification performance is indispensable.
Materials and Methods:
In this study, we developed a hybrid feature selection method that integrates correlation-based feature selection (CFS) and a Multi-Objective Evolutionary Algorithm (MOEA) to select highly informative genes. The hybrid model with a Radial Basis Function Neural Network (RBFNN) classifier has been evaluated on 11 benchmark gene expression datasets using a 10-fold cross-validation test.
Results:
The experimental results are compared with seven conventional feature selection methods and other methods in the literature, showing that our approach has clear merits in terms of classification accuracy and the number of selected genes.
Conclusion:
Our proposed CFS-MOEA algorithm attained up to 100% classification accuracy on six out of eleven datasets with a minimally sized predictive gene subset.
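The CFS half of the hybrid can be sketched with Hall's merit heuristic, which favors gene subsets that are highly correlated with the class label but weakly correlated with each other. This is a minimal NumPy sketch for illustration only; the abstract does not specify the correlation measure or search strategy, so Pearson correlation and the function names here are assumptions.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """Hall's CFS merit of a feature subset:
    merit = k * r_cf / sqrt(k + k*(k-1) * r_ff),
    where r_cf is the mean |feature-class correlation| and r_ff is the
    mean |pairwise feature-feature correlation| (redundancy)."""
    k = len(subset)
    Xs = X[:, subset]
    r_cf = np.mean([abs(np.corrcoef(Xs[:, i], y)[0, 1]) for i in range(k)])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(Xs[:, i], Xs[:, j])[0, 1])
                    for i in range(k) for j in range(i + 1, k)])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)
```

In a hybrid scheme, a MOEA could then search over subsets, trading off merit against subset size as competing objectives.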
In real-world surveillance systems, the person images captured by the camera network consist of various low-resolution (LR) images. This creates a resolution mismatch problem when they are matched against high-resolution images of a targeted person, and it significantly affects the performance of person re-identification. This problem is known as Low-Resolution Person Re-identification (LR PREID). An efficient strategy is to exploit image super-resolution (SR) and person re-identification as a mutual learning approach. In this paper, we propose a novel method, MSA-SR-PREID, to solve this problem. The model takes low-resolution images at different resolutions and resizes them to a pre-defined fixed resolution. The super-resolution network consists of ESRGAN and a de-noising module to generate super-resolution images. The SR images are then passed to the re-identification network, which learns unique descriptors to recognize a person's identity. The performance of this model is evaluated on four competitive benchmarks: MLR-VIPeR, MLR-DukeMTMC-reID, VR-MSMT17, and VR-Market1501. Comparison with similar state-of-the-art methods demonstrates the superiority of our model.
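The pre-processing step of resizing mixed-resolution inputs to a fixed resolution can be sketched as follows. This nearest-neighbor resize is a stand-in for illustration only; the function name and the 256x128 target resolution are assumptions, and the actual pipeline uses ESRGAN plus de-noising downstream.

```python
import numpy as np

def resize_to_fixed(img, out_h=256, out_w=128):
    """Nearest-neighbor resize of an (H, W, C) image array to a fixed
    resolution, so images of varying size share one input shape."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows][:, cols]
```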
In this paper, a stacked configuration of a microstrip antenna is used to produce dual wide bands suitable for various wireless applications. Using a triangular slot and stacking of a foam substrate with dielectric constant 1, two bands with bandwidths of 18.70% and 12.10% are obtained. The antenna is fed by a coaxial probe feeding technique. The proposed patch antenna is designed on the foam substrate and simulated in the Zeland IE3D software.
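For reference, percentage bandwidths like the 18.70% and 12.10% quoted above are conventionally the fractional bandwidth about the band's centre frequency. The sketch below computes it; the example frequencies are hypothetical, not values from the paper.

```python
def fractional_bandwidth(f_low, f_high):
    """Percentage bandwidth about the centre frequency:
    100 * (f_high - f_low) / f_centre."""
    f_centre = (f_low + f_high) / 2
    return 100 * (f_high - f_low) / f_centre
```

For instance, a band from 1.0 GHz to 1.2 GHz has a fractional bandwidth of about 18.18%.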