2018
DOI: 10.1007/s00521-018-3441-1
Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain

Cited by 90 publications (43 citation statements) · References 44 publications
“…The optimization of this method's loss function remains a direction for future research. Following Liu's work, Hermessi et al. [60] proposed a CNN-plus-shearlet fusion method that achieves a good fusion effect, using a fully convolutional Siamese architecture trained in the well-known MatConvNet framework.…”
Section: Image Fusion Based On Deep
confidence: 99%
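The excerpt above describes fusing images in the shearlet transform domain, with a Siamese CNN learning the fusion decision. As a rough illustration of transform-domain fusion only (a classical activity-based choose-max rule, not the authors' learned similarity measure), a minimal numpy sketch:

```python
import numpy as np

def choose_max_fuse(coeffs_a, coeffs_b, window=3):
    """Fuse two same-size coefficient maps by picking, at each position,
    the coefficient whose local activity (sum of absolute values in a
    small window) is larger. A classic hand-crafted fusion rule; the
    cited paper instead learns this decision with a Siamese CNN."""
    pad = window // 2

    def activity(c):
        padded = np.pad(np.abs(c), pad, mode="edge")
        act = np.zeros_like(c, dtype=float)
        for di in range(window):
            for dj in range(window):
                act += padded[di:di + c.shape[0], dj:dj + c.shape[1]]
        return act

    mask = activity(coeffs_a) >= activity(coeffs_b)
    return np.where(mask, coeffs_a, coeffs_b)

# Usage: fuse two hypothetical shearlet subbands of the source images.
a = np.ones((4, 4))
b = 2.0 * np.ones((4, 4))
fused = choose_max_fuse(a, b)  # b wins everywhere: its activity is larger
```

In the cited method the decision map would come from the CNN's similarity scores rather than from this hand-crafted activity measure.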
“…The core ideas of a CNN are the local receptive field, weight sharing, and the pooling layer, which greatly reduce the number of parameters in the network and effectively alleviate or avoid overfitting of the network model. The network is trained by gradient descent: the loss function is minimized and the weight parameters are adjusted by backpropagation, so that the identification accuracy of the CNN model improves over many training iterations [32].…”
Section: Convolutional Neural Network Model
confidence: 99%
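The parameter-reduction argument above can be made concrete: a single 3×3 kernel (nine shared weights) is reused at every image position, and pooling then shrinks the feature map. A minimal numpy sketch of these two operations:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: the same small kernel (shared
    weights) slides over every local receptive field of the image."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(fmap):
    """Non-overlapping 2x2 max pooling: halves each spatial dimension."""
    h, w = fmap.shape
    fmap = fmap[:h - h % 2, :w - w % 2]  # crop odd edges if present
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))  # 9 shared weights, reused everywhere

feat = conv2d_valid(image, kernel)    # 6x6 feature map
pooled = max_pool2x2(feat)            # 3x3 after pooling
print(feat.shape, pooled.shape)       # (6, 6) (3, 3)
```

A fully connected layer mapping the same 8×8 input to a 6×6 output would need 64×36 weights; the convolution needs only 9, which is the parameter saving the excerpt refers to.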
“…Owing to randomly initialized kernels, training the end-to-end model is unstable and difficult. An effective way to handle this issue is to use a well-trained feature-extraction model [33,34]. We therefore choose a pre-trained ResNet V1 [35] as the feature extraction layers.…”
Section: Generator
confidence: 99%
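The excerpt's point is that a frozen, well-trained backbone stabilizes training because only the layers on top are updated. A toy numpy sketch of this pattern, with a fixed random projection standing in for the pre-trained ResNet V1 backbone (an illustrative assumption, not the cited architecture):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pre-trained backbone: a FIXED projection that is
# never updated. In the cited work this role is played by frozen
# pre-trained ResNet V1 layers.
W_backbone = rng.standard_normal((16, 8))

def extract_features(x):
    """Frozen feature extractor; only the head below is trained."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# Trainable linear head on top of the frozen features.
w_head = np.zeros(8)

X = rng.standard_normal((64, 16))
y = extract_features(X) @ rng.standard_normal(8)  # synthetic target

lr = 0.01
for _ in range(200):
    F = extract_features(X)
    err = F @ w_head - y
    w_head -= lr * F.T @ err / len(X)  # gradient step on the head ONLY

loss = float(np.mean((extract_features(X) @ w_head - y) ** 2))
```

Because the backbone weights stay fixed, the head sees a stationary feature distribution, which is the stability benefit the excerpt describes for end-to-end models with randomly initialized kernels.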