2022
DOI: 10.3788/irla20210290
Ground Infrared Target Detection Method Based on a Parallel Attention Mechanism (Invited)

Cited by 2 publications (2 citation statements)
References 0 publications
“…Secondly, we add a 'correlation layer' [15] at the fusion of the two branches for feature matching. In addition, we add an attention mechanism [16,17] between the encoding and decoding stages to enhance the feature extraction and processing capability of the network. In unsupervised training, how to construct a loss function from the predicted displacement data and the actual physical constraint relationship between the reference and target speckle images is a key issue in unsupervised learning.…”
Section: Methods
confidence: 99%
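The quoted passage names the construction of an unsupervised loss from the predicted displacement and the physical relationship between the reference and target speckle images as the key issue. A minimal sketch of one common formulation (not necessarily the citing authors' exact loss) is to warp the target speckle image back with the predicted displacement field and penalize the photometric difference against the reference. PyTorch is assumed, and the names `warp` and `unsupervised_dic_loss` are illustrative.

```python
import torch
import torch.nn.functional as F

def warp(image, disp):
    """Warp `image` (B,1,H,W) by a displacement field `disp` (B,2,H,W) given in pixels."""
    b, _, h, w = image.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=image.dtype, device=image.device),
        torch.arange(w, dtype=image.dtype, device=image.device),
        indexing="ij",
    )
    x_new = xs.unsqueeze(0) + disp[:, 0]  # horizontal displacement u
    y_new = ys.unsqueeze(0) + disp[:, 1]  # vertical displacement v
    # Normalize sampling locations to [-1, 1] as grid_sample expects.
    grid = torch.stack((2 * x_new / (w - 1) - 1,
                        2 * y_new / (h - 1) - 1), dim=-1)
    return F.grid_sample(image, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def unsupervised_dic_loss(reference, target, disp):
    """Photometric self-supervision: the target speckle image warped back by the
    predicted displacement should match the reference speckle image."""
    return F.l1_loss(warp(target, disp), reference)
```

Bilinear sampling keeps the warp differentiable, so a displacement-prediction network can be trained end to end from this loss without ground-truth displacement fields.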
“…In this section, we conducted experiments to validate the effectiveness of our proposed DICNet and unsupervised neural network for displacement field measurement. We used an open-source dataset [17]. The sample size in the dataset is 480 pixels. Since the maximum displacement of samples in this dataset is 16 pixels, in order to reduce the error caused by missing image edge displacement information, the experimental data below were all obtained after the samples were cropped to 464×464 pixels.…”
Section: Dataset and Network Training Modes
confidence: 99%
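The crop from 480 to 464 pixels removes a border whose width matches the 16-pixel maximum displacement, discarding pixels whose correspondences may fall outside the image. The quote does not state where the crop is taken; a symmetric center crop (8 pixels per side) is one straightforward reading, sketched below with an illustrative `center_crop` helper.

```python
import numpy as np

def center_crop(sample, size=464):
    """Crop a square speckle image (e.g. 480x480) to `size` x `size`,
    discarding the border where edge displacement information is missing."""
    h, w = sample.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2  # 8-pixel margin for 480 -> 464
    return sample[top:top + size, left:left + size]
```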