Abstract: Image fusion based on sparse representation (SR) has become the primary research direction among transform-domain methods. However, SR-based image fusion algorithms have high computational complexity and neglect the local features of an image, resulting in limited detail retention and high sensitivity to registration misalignment. To overcome these shortcomings, and the noise present in images during the fusion process, this paper proposes a new signal decomposition …
“…Furthermore, the lack of local image features in SR-based image fusion leads to limited detail retention and sensitivity to registration misalignment. To address these issues, a study [25] introduced gradient-regularized convolutional SR multi-source image fusion. The high- and low-frequency image components were separated.…”
Background: The field of clinical or medical imaging has seen significant advances in recent years. Medical imaging modalities such as computed tomography (CT), X-radiation (X-ray), and magnetic resonance imaging (MRI) produce images with differing resolutions, purposes, and noise levels, making it challenging for medical experts to diagnose diseases.
Objective: The limitations of any single medical imaging modality have increased the need for medical image fusion. The proposed solution is a fusion method that merges two types of medical images, such as MRI and CT. This study therefore aimed to develop a software solution that swiftly identifies the precise region of a brain tumor, speeding up diagnosis and treatment planning.
Methods: The proposed methodology combined clinical images using the discrete wavelet transform (DWT) and the inverse discrete wavelet transform (IDWT). The strategy relied on a multi-resolution decomposition of the image data using the DWT; the high-frequency sub-bands of the decomposed images were combined by weighted averaging, while the low-frequency sub-bands were copied directly into the resulting image. The fused high-quality image was then reconstructed using the IDWT. The method can handle images of various modalities and resolutions without requiring prior data.
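The sub-band fusion rule described above can be sketched in a few lines. The following is a minimal illustration using a hand-rolled single-level 2-D Haar transform; the abstract does not specify the wavelet, the blend weight, or which input supplies the copied low-frequency band, so `Haar`, `w=0.5`, and "copy LL from image A" are all assumptions made here for concreteness.

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar DWT; `x` needs even height and width."""
    lo = (x[:, ::2] + x[:, 1::2]) / 2.0   # row-wise average
    hi = (x[:, ::2] - x[:, 1::2]) / 2.0   # row-wise detail
    ll = (lo[::2] + lo[1::2]) / 2.0       # approximation (low frequency)
    lh = (lo[::2] - lo[1::2]) / 2.0       # horizontal detail
    hl = (hi[::2] + hi[1::2]) / 2.0       # vertical detail
    hh = (hi[::2] - hi[1::2]) / 2.0       # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d (perfect reconstruction)."""
    h2, w2 = ll.shape
    lo = np.empty((2 * h2, w2))
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty((2 * h2, w2))
    hi[::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((2 * h2, 2 * w2))
    x[:, ::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def dwt_fuse(img_a, img_b, w=0.5):
    """Fuse two equal-shape grayscale images: weighted-average the
    high-frequency sub-bands, copy the low-frequency sub-band
    (here from img_a -- an assumed choice, not stated in the text)."""
    ll_a, lh_a, hl_a, hh_a = haar2d(img_a)
    _,    lh_b, hl_b, hh_b = haar2d(img_b)
    return ihaar2d(ll_a,
                   w * lh_a + (1 - w) * lh_b,
                   w * hl_a + (1 - w) * hl_b,
                   w * hh_a + (1 - w) * hh_b)
```

Because the Haar pair above is perfectly invertible, fusing an image with itself reproduces the image exactly, which is a handy sanity check for any implementation of this scheme.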
Results: The proposed method was assessed using metrics such as accuracy, recall, F1-score, and visual quality. It achieved a high accuracy of 98%, outperforming well-known neural network techniques.
Conclusion: The proposed method proved computationally efficient and produced high-quality medical images to assist professionals. Furthermore, it can be extended to other image modalities, combined with hybrid wavelet-transform and neural-network techniques, and applied to other clinical image analysis tasks.
Keywords: CT and MRI, image fusion, brain tumor, wavelet transform methods, medical images, machine learning, CNN
“…Algorithms related to RGB-T tracking fall into the following three categories [9]: (i) sparse-representation-model-based trackers; (ii) correlation filter (CF) based trackers; and (iii) deep-learning-based trackers. Owing to their strong anti-noise and anti-error capabilities, sparse representation models have been successfully applied to many image processing tasks [10] and are extensively used in target tracking to fuse multiple features, such as multiple sparse representation models [11,12], collaborative representation models [13], and collaborative discriminant learning [14].…”
In challenging situations such as low illumination, rain, and background clutter, the stability of the thermal infrared (TIR) spectrum can help the red-green-blue (RGB) visible spectrum improve tracking performance. However, high-level image information and modality-specific features have not been sufficiently studied. The proposed correlation filter uses a fused saliency content map to improve filter training and extracts different features for each modality. The fused content map is introduced into the spatial regularization term of the correlation filter to highlight the training samples in the content region. Furthermore, the fused content map avoids the incompleteness of the content region caused by challenging situations. Additionally, different features are extracted according to the characteristics of each modality and are fused by the designed response-level fusion strategy. The alternating direction method of multipliers (ADMM) algorithm is used to solve the tracker training problem efficiently. Experiments on large-scale benchmark datasets show the effectiveness of the proposed tracker compared to state-of-the-art traditional trackers and deep-learning-based trackers.
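To make the correlation-filter machinery concrete, here is a minimal MOSSE-style ridge filter trained in closed form in the Fourier domain, plus a toy weighted response-level fusion of two modalities. This is only an illustrative baseline with assumed names and parameters, not the paper's tracker: the saliency-weighted spatial regularization term described above breaks the closed-form solution, which is precisely why the authors solve their model with ADMM instead.

```python
import numpy as np

def train_cf(feature, target, lam=1e-4):
    """Closed-form ridge correlation filter (MOSSE-style).
    Returns conj(H) in the Fourier domain; the response to a patch z
    is then ifft2(fft2(z) * conj(H)).  `lam` is the ridge weight."""
    x_hat = np.fft.fft2(feature)
    y_hat = np.fft.fft2(target)  # desired response, e.g. a sharp peak
    return (y_hat * np.conj(x_hat)) / (x_hat * np.conj(x_hat) + lam)

def respond(h_conj, feature):
    """Correlation response map; its peak locates the target."""
    return np.real(np.fft.ifft2(np.fft.fft2(feature) * h_conj))

def fuse_responses(r_rgb, r_tir, w=0.5):
    """Toy response-level fusion: a fixed convex combination of the
    per-modality response maps.  The paper's actual fusion strategy
    is adaptive; the fixed weight `w` is an assumption."""
    return w * r_rgb + (1 - w) * r_tir
```

Training on a patch whose desired response is a peak at the target position, then correlating that same patch, yields a response map whose maximum sits at the target, which is the basic mechanism every CF tracker builds on.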
“…Heterogeneous image fusion helps realize multimodal fusion, but the primary task of fusion is to complete multisource image registration [8,9]. Image registration has been studied by many experts and scholars, whose work we summarize as follows. In 2004, Lowe [10] proposed the scale-invariant feature transform (SIFT) method to describe local features between images, mainly by constructing feature pyramids through Gaussian blurring and iteration.…”
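The pyramid construction the snippet refers to can be sketched as one octave of SIFT's difference-of-Gaussians (DoG) scale space: blur the image at geometrically increasing sigmas and subtract adjacent levels; local extrema of the resulting stack are keypoint candidates. The sigma schedule below (base 1.6, ratio √2) follows SIFT's customary values but should be read as illustrative assumptions, not a full SIFT implementation.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with a 3-sigma radius."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    p = np.pad(img, r, mode="edge")
    # Horizontal pass, then vertical pass, each a 1-D convolution.
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, p)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, tmp)

def dog_octave(img, sigma0=1.6, k=2 ** 0.5, levels=4):
    """One octave of a SIFT-style DoG stack: `levels` Gaussian blurs
    at geometrically spaced sigmas, then adjacent differences."""
    gauss = [blur(img, sigma0 * k ** i) for i in range(levels)]
    return [g2 - g1 for g1, g2 in zip(gauss, gauss[1:])]
```

A full SIFT pipeline would additionally downsample between octaves and localize, orient, and describe the extrema; only the scale-space step is shown here.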
“…At present, there are many research directions in heterogeneous image fusion, such as infrared and visible image fusion [1], over- and underexposed image fusion [2], multifocus image fusion [3–5], multispectral image fusion [6,7], etc. Heterogeneous image fusion helps realize multimodal fusion, but the primary task of fusion is to complete multisource image registration [8,9].…”
High-quality image registration is the basis of multimodal fusion in the field of industrial welds. Owing to the influence of contrast, scale, and illumination, the registration accuracy and robustness of existing algorithms are not high enough for the industrial weld field, and registration performance still needs improvement. We therefore propose a hybrid double-branch image registration (HDBR) algorithm for multisource images with a parallel composite structure. Experimental results demonstrate that, compared with the radiation-invariant feature transform and histogram of absolute phase consistency gradients algorithms, the HDBR algorithm achieves higher matching accuracy and lower registration loss, improving registration performance by 1.12% and 15.55%, respectively. The proposed composite-structure registration algorithm is mainly applied to multiview weld images, and a high-quality multiview weld data set is ultimately obtained.
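For readers who want a concrete, runnable registration baseline, classical phase correlation recovers an integer translation between two images from the normalized cross-power spectrum. It is far simpler than, and unrelated to, HDBR; it is included only to make the notion of "registration" operational, and all names here are our own.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (circular) translation between `a` and `b`
    by phase correlation.  Returns (dy, dx) such that
    np.roll(b, (dy, dx), axis=(0, 1)) aligns `b` with `a`."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    # Normalized cross-power spectrum; the small epsilon guards
    # against division by zero in empty frequency bins.
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx
```

For an exact circular shift the correlation surface is a single sharp peak, so the shift is recovered perfectly; real weld images would add noise, rotation, and scale changes, which is where feature-based methods such as HDBR come in.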