2021
DOI: 10.1049/ipr2.12317
Multi‐exposure image fusion based on feature evaluation with adaptive factor

Abstract: The authors present a new multi‐exposure image fusion method based on feature evaluation with an adaptive factor. They observe that existing multi‐exposure fusion algorithms adapt poorly to input images that are overall bright or dark: the quality of the fused image suffers and details are not fully preserved. An adaptive factor that adjusts to the intensity of the input images is therefore introduced. First, the exposure assessment weight, texture change weight, and colour intensity weight are calcu…
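The abstract describes a weight-based fusion scheme (exposure, texture, and colour weights combined per pixel). A minimal sketch of the general idea, assuming a Mertens-style Gaussian "well-exposedness" weight and a hypothetical parameter `mu` standing in for the paper's adaptive factor (the exact weights and adaptation rule are the paper's, not reproduced here):

```python
import numpy as np

def exposure_weight(img, mu=0.5, sigma=0.2):
    """Well-exposedness weight: pixels near the target intensity mu
    get high weight; over-/under-exposed pixels are down-weighted.
    img is a float image normalised to [0, 1]."""
    return np.exp(-((img - mu) ** 2) / (2 * sigma ** 2))

def fuse(images, mu=0.5):
    """Per-pixel weighted-average fusion of an exposure stack.
    An adaptive factor could, for example, shift mu according to the
    stack's mean intensity for overall dark or bright scenes
    (hypothetical adaptation, for illustration only)."""
    weights = np.stack([exposure_weight(im, mu=mu) for im in images])
    weights /= weights.sum(axis=0) + 1e-12  # normalise weights per pixel
    return (weights * np.stack(images)).sum(axis=0)
```

For two constant exposures at 0.2 and 0.8 the weights are symmetric about `mu = 0.5`, so the fused result sits at mid-grey.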

Cited by 13 publications (11 citation statements)
References 26 publications
“…Then, a convolutional neural network (CNN) was used to obtain fusion weights and fuse the aligned images. The contributions of this paper were: (1) presenting the first study of deep learning for MEF; (2) discussing and comparing the fusion effects of three CNN architectures; and (3) creating a dataset suitable for MEF. Since then, many deep-learning-based MEF algorithms have been proposed.…”
Section: Supervised Methods (mentioning)
confidence: 99%
“…Regardless, it can be used as an evaluation reference.
1. (metric name truncated in extraction): Huang [1]; Yang [13]; MEF-GAN [17]; Liu [58]; Yang [62]; Martorell [64]; Li [77]; Liu [80]; Chen [81]; Deepfuse [84]; MEFNet [86]; U2fusion [88]; Gao [89]; LXN [123]; Shao [134]; Wu [135]; Merianos [136]. The larger, the better.
2. Q^AB/F: Nie [6]; Liu [38]; LST [42]; Hayat [115]; Shao [134]. The larger, the better.
3. MEF-SSIMc: Martorell [64]; UMEF [87]; Shao [134]. The larger, the better.
4. Mutual information (MI): Nie [6]; Wang [34]; Gao [89]; Choi [137]. The larger, the better.
5. Peak signal-to-noise ratio (PSNR): Kim [7]; MEF-GAN [17]; Chen [81]; U2fusion [88]; Gao [89]; Shao [134]. The larger, the better.
6. Natural image quality evaluator (NIQE): Huang [1]; Hayat [115]; Wu [135]; Xu [138]. The smaller, the better.
7. Standard deviation (SD): MEF-GAN [17]; Gao [89]; Wu [135]. The larger, the better.
8. Entropy (EN): Gao [89]; Wu…”
Section: Objective Quantitative Comparison (mentioning)
confidence: 99%
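Two of the metrics listed above have simple standard definitions that can be sketched directly; this follows the usual textbook formulas (larger is better for both), not any one cited paper's exact implementation:

```python
import numpy as np

def psnr(reference, fused, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    the fused image, for float images with maximum value `peak`."""
    mse = np.mean((reference - fused) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def entropy(img, bins=256):
    """Shannon entropy (EN) of the grey-level histogram, in bits.
    Higher entropy suggests more information retained in the fusion."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())
```

For a constant error of 0.1 on a [0, 1] image, PSNR is 10·log10(1/0.01) = 20 dB; a constant image has zero histogram entropy.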
“…The luminance span of natural scenes is usually large, ranging from starlight at night to dazzling sunlight, a brightness range of about nine orders of magnitude [1]. Restricted by the optical design parameters of the lens, the sensor sensitivity, the full-well charge of the detector, and other factors, the dynamic range of existing imaging equipment is far lower than that of natural scenes. It is therefore difficult to record details at different brightness levels in a single shot.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, Liu et al. [6] proposed an MEF method based on the dense scale-invariant feature transform (SIFT), which uses dense SIFT descriptors to extract local details from the source images; it suits both static and dynamic scenes and can satisfactorily remove ghosting artifacts in dynamic scenes. Huang et al. [7] proposed a multi-exposure fusion algorithm based on feature evaluation that adaptively evaluates the exposure weight of each image; the resulting fused image has better brightness and retains some details, but details are lost in areas with large brightness differences.…”
Section: Introduction (mentioning)
confidence: 99%