2021
DOI: 10.1016/j.inffus.2020.08.022

MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion

Cited by 177 publications (66 citation statements)
References 39 publications
“…Hence, we can only compare the performance of our method with the available state-of-the-art supervised methods. These include the Non-Subsampled Contourlet Transform (NSCT) [29], Guided Filtering (GF) [35], Dense SIFT (DSIFT) [33], as well as the methods based on Boundary Finding (BF) [57], Convolutional Neural Network (CNN) [18], and the U-net [41], plus the deep unsupervised algorithms FusionDN [43], MFF-GAN [44], and U2Fusion [42]. We implemented these algorithms using code acquired from their respective authors.…”
Section: Results (mentioning, confidence: 99%)
“…We used a desktop machine with a 3.4 GHz Intel i7 CPU (32 GB RAM) and an NVIDIA Titan Xp GPU (12 GB memory) to evaluate our algorithm. For fairness, we compared our proposed algorithm only with other deep unsupervised algorithms [42, 43, 44], since the remaining methods were not run on a GPU. The average run-time of our proposed MFNet, FusionDN, MFF-GAN and U2Fusion is 4.33, 1.55, 1.60 and 1.78 s, respectively.…”
Section: Results (mentioning, confidence: 99%)
“…The obtained sparse coefficients of the source images were merged, and a fused image was obtained. Generative adversarial networks (GANs) have also been exploited for increasing the depth of field, e.g., [60, 61, 62].…”
Section: Background and Literature Review (mentioning, confidence: 99%)
“…For example, Liu [14] proposed a multi-focus image fusion method based on a deep convolutional neural network; Zhong [15] proposed a remote sensing image fusion method based on a convolutional neural network; and Ma [12] was the first to apply generative adversarial networks (GANs) to infrared and visible image fusion, achieving good fusion results. However, the architecture of the existing FusionGAN-based method is simple and its fusion loss is imperfect, which can lead to incomplete information transfer in the fused image. Improved GAN-based methods have therefore been proposed for image fusion, such as that of Zhang [16], who proposed a new generative adversarial network with adaptive and gradient joint constraints to fuse multi-focus images. Nevertheless, the methods mentioned above are effective for other kinds of source images rather than remote sensing images, and they are sensitive to noise in an image.…”
Section: Introduction (mentioning, confidence: 99%)
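
To make the "adaptive and gradient joint constraints" idea referenced in the excerpts above more concrete, the sketch below shows one plausible form such a content loss can take. It is a minimal illustration assuming a PyTorch setting with single-channel tensors `src_a`, `src_b` (the two source images) and `fused` (the generator output) of shape (N, 1, H, W); it is not the MFF-GAN authors' released implementation, and the Sobel operator, focus mask, and weights `w_int`, `w_grad` are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Approximate per-pixel gradient magnitude with Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel-y is the transpose of Sobel-x
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def fusion_content_loss(fused, src_a, src_b, w_int=1.0, w_grad=5.0):
    """Illustrative adaptive intensity + joint gradient constraint."""
    ga, gb = sobel_gradient(src_a), sobel_gradient(src_b)
    # Adaptive intensity term: pull the fused image toward whichever
    # source is locally sharper (a crude stand-in for a focus map).
    mask_a = (ga >= gb).float()
    intensity = F.l1_loss(fused, mask_a * src_a + (1 - mask_a) * src_b)
    # Gradient joint constraint: the fused gradient should follow the
    # element-wise maximum of the two source gradients.
    grad_joint = F.l1_loss(sobel_gradient(fused), torch.maximum(ga, gb))
    return w_int * intensity + w_grad * grad_joint
```

In an adversarial training loop this content loss would be added to a standard GAN generator/discriminator objective; that part is omitted here since the excerpts only discuss the content constraints.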