Nowadays, deep learning has made rapid progress in the field of multi-exposure image fusion. However, it is still challenging to extract informative features while retaining texture details and color. To address this issue, in this paper, we propose a coordinated learning network for detail refinement in an end-to-end manner. Firstly, we obtain shallow feature maps from extremely over-/under-exposed source images by a collaborative extraction module. Secondly, smooth attention weight maps are generated under the guidance of a self-attention module, which draws global connections to correlate patches at different locations. With the cooperation of these two modules, our proposed network obtains a coarse fused image. Moreover, with the assistance of an edge revision module, the edge details of the fused results are refined and noise is suppressed effectively. We conduct subjective qualitative and objective quantitative comparisons between the proposed method and twelve state-of-the-art methods on two public datasets. The results show that our fused images significantly outperform the others in both visual effects and evaluation metrics. In addition, we also perform ablation experiments to verify the function and effectiveness of each module in our proposed method. The source code is available at https://github.com/lok-18/LCNDR.
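As a rough illustration of how a self-attention module can turn shallow features into a smooth blending weight map, the sketch below computes global patch-to-patch correlations and collapses the result into a single-channel map in [0, 1]. The use of PyTorch, the class name `SelfAttentionWeight`, and the final sigmoid blending are assumptions for illustration, not the released LCNDR implementation.

```python
# Minimal sketch (PyTorch assumed): a self-attention block that produces a smooth
# weight map for blending over-/under-exposed features. Names and shapes are
# illustrative, not the authors' code.
import torch
import torch.nn as nn

class SelfAttentionWeight(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.to_weight = nn.Conv2d(channels, 1, 1)  # collapse to one weight map

    def forward(self, feat):
        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2).transpose(1, 2)          # (b, hw, c/2)
        k = self.key(feat).flatten(2)                             # (b, c/2, hw)
        v = self.value(feat).flatten(2).transpose(1, 2)           # (b, hw, c)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)     # global patch correlation
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.to_weight(out))                 # smooth map in [0, 1]

# Coarse fusion of the two exposures' shallow features (illustrative):
# w = SelfAttentionWeight(c)(feat_over); fused = w * feat_over + (1 - w) * feat_under
```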
At present, multimodal medical image fusion technology has become an essential means for researchers and doctors to predict diseases and study pathology. Nevertheless, preserving the unique features of source images from different modalities while ensuring time efficiency remains a difficult problem. To handle this issue, we propose a flexible semantic-guided architecture with a mask-optimized framework in an end-to-end manner, termed GeSeNet. Specifically, a region mask module is devised to deepen the learning of important information while pruning redundant computation to reduce runtime. An edge enhancement module and a global refinement module are presented to modify the extracted features, boosting edge textures and adjusting overall visual performance. In addition, we introduce a semantic module, cascaded with the proposed fusion network, to deliver semantic information into the generated results. Extensive qualitative and quantitative comparative experiments (i.e., MRI-CT, MRI-PET, and MRI-SPECT fusion) between our proposed method and ten state-of-the-art methods show that our generated images lead the way. Moreover, we also conduct operational efficiency comparisons and ablation experiments to prove that our proposed method performs excellently in the field of multimodal medical image fusion.
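To make the roles of a region mask module and an edge enhancement module concrete, the hedged sketch below gates features with a learned soft spatial mask and boosts edge textures with Sobel gradients. The framework (PyTorch), the class and function names, and the additive enhancement at the end are illustrative assumptions rather than the GeSeNet code.

```python
# Minimal sketch (PyTorch assumed): mask-guided feature gating plus Sobel-based
# edge boosting. Names and design details are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionMask(nn.Module):
    """Predict a soft spatial mask that emphasizes informative regions."""
    def __init__(self, channels):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, feat):
        mask = self.mask_head(feat)       # (b, 1, h, w), values in [0, 1]
        return feat * mask, mask          # suppress low-importance regions

def sobel_edges(x):
    """Channel-wise Sobel gradient magnitude, used here to boost edge textures."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=x.device)
    ky = kx.t()
    c = x.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

# Illustrative edge-enhanced features: feat_out = masked + alpha * sobel_edges(masked)
```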
The goal of multi-exposure image fusion is to generate synthetic results with abundant details and balanced exposure from low dynamic range (LDR) images. Existing multi-exposure fusion methods often use convolution operations to extract features. However, these methods only consider the pixel values within a local receptive field and ignore the long-range dependencies between pixels. To solve this problem, we propose a global-local aggregation network for fusing extreme-exposure images in an unsupervised way. Firstly, we design a collaborative aggregation module, composed of two sub-modules, a non-local attention inference module and a local adaptive learning module, to mine the relevant features from the source images. In this way, we formulate a feature extraction mechanism that aggregates global and local information. Secondly, we provide a dedicated fusion module to reconstruct the fused images, which effectively avoids artifacts and suppresses information decay. Moreover, we further fine-tune the fusion results by a recursive refinement module to capture more textural details from the source images. The results of both comparative and ablation analyses on two datasets demonstrate that our work is superior to ten existing state-of-the-art fusion methods.
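The sketch below outlines, under assumed PyTorch conventions, how a non-local (global) branch and a convolutional (local) branch might be aggregated by concatenation followed by a 1x1 projection. The class names, the residual connections, and the merge strategy are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch (PyTorch assumed) of global-local feature aggregation.
# Structure mirrors a standard non-local block plus a residual conv branch.
import torch
import torch.nn as nn

class NonLocalBranch(nn.Module):
    """Capture long-range dependencies between pixels (global view)."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        t = self.theta(x).flatten(2).transpose(1, 2)            # (b, hw, c/2)
        p = self.phi(x).flatten(2)                               # (b, c/2, hw)
        g = self.g(x).flatten(2).transpose(1, 2)                 # (b, hw, c/2)
        attn = torch.softmax(t @ p, dim=-1)                      # pairwise pixel affinities
        y = (attn @ g).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                                   # residual global feature

class LocalBranch(nn.Module):
    """Plain convolutions acting on the local receptive field."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class CollaborativeAggregation(nn.Module):
    """Merge global and local features by concatenation and 1x1 projection."""
    def __init__(self, channels):
        super().__init__()
        self.global_branch = NonLocalBranch(channels)
        self.local_branch = LocalBranch(channels)
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        return self.merge(torch.cat([self.global_branch(x), self.local_branch(x)], dim=1))
```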