2023
DOI: 10.1109/tcsvt.2022.3202692
Learning a Coordinated Network for Detail-Refinement Multiexposure Image Fusion

Abstract: Nowadays, deep learning has made rapid progress in the field of multi-exposure image fusion. However, it remains challenging to extract useful features while retaining texture details and color. To address this issue, in this paper we propose a coordinated learning network for detail refinement in an end-to-end manner. First, we obtain shallow feature maps from extremely over-/under-exposed source images with a collaborative extraction module. Second, smooth attention weight maps are generated und…

Cited by 20 publications (8 citation statements)
References 79 publications
“…Consequently, some useful multi-layer information is lost in the deep cascaded network, resulting in poor visual perception. In addition, some non-end-to-end methods [11], [15], [16], [17], [27], [28] generate unsatisfactory fusion results due to unreasonable fusion rules. To this end, in this work we focus on developing more effective GAN frameworks that explicitly deal with the scale-space problems faced by the visible and infrared image fusion task in an end-to-end fashion.…”
Section: Technical Background
confidence: 99%
“…Zheng [13] achieved feature extraction at different scales and levels using HINBlock. References [14], [15], [16], [17] employed convolution kernels of different sizes to extract the common and unique features of source images. References [18], [19], [20] captured the multi-level features of the source images via residual learning.…”
Section: Introduction
confidence: 99%
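The citation above mentions extracting features with convolution kernels of different sizes. A minimal NumPy sketch of that idea, using simple averaging (box) kernels purely for illustration — the function names and kernel choices here are hypothetical and not taken from the cited works:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D 'valid'-mode correlation (illustrative, not optimized)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with the local window, summed.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def multiscale_features(img, sizes=(3, 5, 7)):
    """Produce one feature map per kernel size; larger kernels capture
    coarser (more 'common') structure, smaller ones finer detail."""
    return [conv2d_valid(img, np.ones((k, k)) / (k * k)) for k in sizes]
```

In a real fusion network these would be learned kernels in parallel branches whose outputs are concatenated; the sketch only shows why differing receptive-field sizes yield feature maps at different scales.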
“…In this setup, the three conventional down-sampling and three up-sampling encoding stages in the traditional U-Net are replaced by the communal cells. To boost the robustness of our stitching framework, the communal cell is adapted with a candidate operator similar to that of the homography estimation, and the interconnection is determined through search learning (Li et al. 2022, 2023). A detailed depiction of the structure can be seen in Fig.…”
Section: Robust Stitching Model
confidence: 99%
“…A differential attack changes specific elements of the plaintext image and measures the degree of influence on the corresponding ciphertexts, in order to recover as much of the key as possible. The ability of an encryption algorithm to resist differential attacks can be measured by two important parameters: the number of pixels change rate (NPCR) and the unified average changing intensity (UACI) [44], which are calculated as Equations (49)-(51):…”
Section: Information Entropy Analysis
confidence: 99%
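The quoted Equations (49)-(51) are not reproduced on this page, but NPCR and UACI have widely used standard definitions: NPCR is the percentage of pixel positions where two cipher images differ, and UACI is the mean absolute pixel difference normalized by 255, as a percentage. A minimal sketch under those standard definitions (the helper name is hypothetical, not from the cited paper):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI between two 8-bit cipher images, per the common
    definitions: NPCR = 100 * (#differing pixels / #pixels),
    UACI = 100 * mean(|c1 - c2| / 255)."""
    c1 = np.asarray(c1, dtype=np.int32)
    c2 = np.asarray(c2, dtype=np.int32)
    npcr = (c1 != c2).mean() * 100.0
    uaci = (np.abs(c1 - c2) / 255.0).mean() * 100.0
    return npcr, uaci
```

For a strong 8-bit image cipher, NPCR is expected to approach about 99.6% and UACI about 33.46% when a single plaintext pixel is changed.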