2019
DOI: 10.1109/jsen.2019.2928818
Multi-Focus Image Fusion Using U-Shaped Networks With a Hybrid Objective

Cited by 44 publications (17 citation statements)
References 41 publications
“…Hence, we can only compare the performance of our method with the available state-of-the-art supervised methods. These include the Non-Subsampled Contourlet Transform (NSCT) [ 29 ], Guided Filtering (GF) [ 35 ], Dense SIFT (DSIFT) [ 33 ], the methods based on Boundary Finding (BF) [ 57 ], the Convolutional Neural Network (CNN) [ 18 ], and the U-net [ 41 ], as well as the deep unsupervised algorithms FusionDN [ 43 ], MFF-GAN [ 44 ], and U2Fusion [ 42 ]. We implemented these algorithms using code acquired from their respective authors.…”
Section: Results
confidence: 99%
“…Most recently, with the U-net having been successfully applied to image-to-image translation [ 39 ] and pixel-wise regression [ 40 ], a U-net-based end-to-end multi-focus image fusion algorithm was introduced in [ 41 ]. This method also requires ground truth for training the U-net fusion network model.…”
Section: Related Work
confidence: 99%
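The ground-truth requirement noted above is commonly met by synthesizing training pairs from a single all-in-focus image: blur complementary regions so that each synthetic view is sharp where the other is defocused, and use the original image as the fusion target. A minimal NumPy sketch of that idea — a box blur stands in for true optical defocus, and `box_blur` / `make_training_pair` are illustrative names, not the paper's implementation:

```python
import numpy as np

def box_blur(img, k=5):
    """Naive k-by-k box blur on a 2-D grayscale image (stands in for defocus)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def make_training_pair(sharp, mask, k=5):
    """Synthesize two partially focused views from one all-in-focus image.

    `mask` is 1 where view A stays in focus; view B gets the complement.
    The all-in-focus image itself serves as the ground-truth fusion target.
    """
    blurred = box_blur(sharp, k)
    view_a = np.where(mask == 1, sharp, blurred)
    view_b = np.where(mask == 1, blurred, sharp)
    return view_a, view_b, sharp
```

Each view keeps the sharp pixels inside its mask region and blurred pixels elsewhere, so a supervised network can be trained to map (view A, view B) back to the sharp image.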
“…Multi-focus image fusion has also been achieved through deep neural networks in several ways; the majority of these techniques rely on the detection of the focused region [ 50 ]. In the fusion method presented in [ 50 ], features are extracted through a U-shaped network to obtain high- and low-frequency texture information. It directly maps multi-focus images to fused images instead of detecting focused regions.…”
Section: Background and Literature Review
confidence: 99%
“…The size of each image is 1936×1216 pixels, and some example images are shown in Figure 9. In the following experiments on the multi-focus microscopic image dataset of cancer cells, the proposed method is compared with other well-known multi-focus image fusion methods, namely the LP [1], the non-subsampled contourlet transform (NSCT) [6], the DWT [3], the DTCWT [7], the curvelet transform (CVT) [5], the CNN-based method [24], and the U-shaped network method [27]. In addition, the parameters of these methods are set to the recommended values from their original papers.…”
Section: Experimental Setup
confidence: 99%
“…that are not visible in an end-to-end manner. Li et al. [27] used the U-Net to perform semantic segmentation on the two partially focused images to be fused, forming a decision map that guides the fusion. Compared with the CNN-based methods, this method can better distinguish the focused and defocused areas in the source images and reduces the time complexity.…”
Section: Introduction
confidence: 99%
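The decision-map-guided fusion described in this statement can be illustrated with a much simpler focus measure than a segmentation network: compare a per-pixel sharpness score of the two sources and select whichever is sharper at each pixel. A hedged NumPy sketch — the Laplacian-energy focus measure below is a common classical stand-in, not the U-Net used in [27]:

```python
import numpy as np

def laplacian_energy(img):
    """Per-pixel focus measure: squared response of a discrete 4-neighbor Laplacian."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap ** 2

def fuse_by_decision_map(img_a, img_b):
    """Build a binary decision map from the focus measure and select per pixel.

    Returns the fused image and the decision map (True where img_a is chosen).
    """
    decision = laplacian_energy(img_a) >= laplacian_energy(img_b)
    fused = np.where(decision, img_a, img_b)
    return fused, decision
```

In practice the raw decision map is usually refined (e.g., by morphological filtering or guided filtering) before fusion to suppress isolated misclassified pixels; the sketch omits that step.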