2023
DOI: 10.1109/tip.2022.3231135
ReDFeat: Recoupling Detection and Description for Multimodal Feature Learning

Cited by 21 publications (5 citation statements). References 41 publications.
“…1) Performance comparison to other methods: The performance of the proposed methods is quantitatively and qualitatively compared with some popular multimodal image registration methods, including RIFT [8], ASS [9], sRIFD [43], Pix2pix [44], pix2pixHD [45], KCG-GAN [18], and ReDFeat [46]. Ten pairs are picked at random, and table II lists the NCM results for eight algorithms.…”
Section: Results (mentioning, confidence: 99%)
“…DescNet [28] performs a multilevel convolutional pooling operation on the input image block to obtain the deep features of the image block and finally outputs a 128-dimensional vector as a feature descriptor. ReDFeat [29] re-couples the independent constraints of detection and description of multi-constrained feature learning with a mutual weighting strategy and thus does not directly suppress the probability of detecting an ambiguous feature. SuperGlue [30] constructs a Graph Neural Network (GNN) for jointly finding correspondences and rejecting non-matchable points by treating the feature matching problem as solving a differentiable optimal transport problem.…”
Section: Related Work (mentioning, confidence: 99%)
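The mutual weighting described in the excerpt above can be illustrated with a small PyTorch sketch. This is a minimal, assumption-laden illustration of the general idea, namely that detection confidence weights the descriptor loss while descriptor matching quality weights the detection loss, and not ReDFeat's actual formulation: the triplet-style descriptor term, the toy detection term, the 0.5 margin, and the function name mutually_weighted_loss are all hypothetical.

import torch
import torch.nn.functional as F

def mutually_weighted_loss(det_a, det_b, desc_a, desc_b):
    """det_*: (N,) detection probabilities at N matched keypoints, in [0, 1].
    desc_*: (N, D) L2-normalized descriptors of the same N correspondences."""
    # Per-correspondence descriptor loss: distance to the true match minus
    # distance to the hardest in-batch negative, hinged at a fixed margin.
    dist = torch.cdist(desc_a, desc_b)        # (N, N) pairwise descriptor distances
    pos = dist.diag()                         # distance to the true match
    neg = (dist + 1e6 * torch.eye(len(dist), device=dist.device)).min(dim=1).values
    desc_loss = F.relu(pos - neg + 0.5)

    # Mutual weighting: confident detections emphasize the descriptor loss,
    # while well-matched descriptors emphasize the detection loss, so an
    # ambiguous descriptor does not directly suppress its detection score.
    w_det = (det_a * det_b).detach()
    w_desc = torch.exp(-pos).detach()
    det_loss = 1.0 - det_a * det_b            # toy detection objective
    return (w_det * desc_loss + w_desc * det_loss).mean()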
“…The latest algorithms include LoFTR [31], which consists of two coarse matching strategies, the dual-softmax method adopted by the original text, called LoFTR-DS, and the optimal transport method adopted by SuperGlue [30], called LoFTR-OT, and we tested them against the outdoor data trained with each of the two algorithms. In addition, we compared our algorithm to the latest ReDFeat [29] multimodal algorithm, which uses the corresponding VIS-NIR weights, and the multimodal module uses the best result among the weights. The test results in this study were obtained using a computer with a 4-core, 8-thread Intel i3-12100F@3.3/4.3 GHz CPU and an 8GB RTX4060 GPU.…”
Section: Baseline and Metrics (mentioning, confidence: 99%)
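As a concrete reference for the dual-softmax coarse matching that distinguishes LoFTR-DS from the optimal-transport variant LoFTR-OT, the sketch below shows the standard dual-softmax plus mutual-nearest-neighbour step. The temperature, confidence threshold, and the function name dual_softmax_match are illustrative assumptions, not the settings used in the cited experiments.

import torch

def dual_softmax_match(feat_a, feat_b, temperature=0.1, threshold=0.2):
    """feat_a: (M, D) and feat_b: (N, D) coarse-level features, L2-normalized."""
    sim = feat_a @ feat_b.t() / temperature           # (M, N) similarity scores
    # Dual softmax: product of column-wise and row-wise matching probabilities.
    conf = sim.softmax(dim=0) * sim.softmax(dim=1)    # (M, N) confidence matrix
    # Keep mutual nearest neighbours whose confidence clears the threshold.
    mask = (conf == conf.max(dim=1, keepdim=True).values) \
         & (conf == conf.max(dim=0, keepdim=True).values) \
         & (conf > threshold)
    return mask.nonzero(as_tuple=False)               # (K, 2) matched index pairs (i, j)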
“…Hu Moments refer to a collection of mathematical descriptors or moments utilized for characterizing an object's shape in the fields of image analysis and pattern recognition [16]. The role of normalized central moments is to provide a standardized measure of an object's shape or distribution of intensity values in an image, independent of factors like translation, scale, and rotation. Variation caused by these factors can be eliminated by normalizing the central moments, making the moments more robust and suitable for comparison and recognition tasks.…”
Section: Flaws Characteristics Extraction of Surface Defects and Data... (mentioning, confidence: 99%)
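The normalized central moments and Hu moments described above are available directly in OpenCV; the sketch below shows one common way to turn them into a comparison-friendly descriptor. The log-scaling step, the helper name hu_moment_descriptor, and the file name defect.png are illustrative choices, not part of the cited work.

import cv2
import numpy as np

def hu_moment_descriptor(gray):
    """gray: single-channel uint8 image, e.g. a binarized defect region."""
    m = cv2.moments(gray)            # raw, central, and normalized central (nu_pq) moments
    hu = cv2.HuMoments(m).flatten()  # seven translation-, scale-, and rotation-invariant values
    # Log scaling compresses the large dynamic range so shapes can be compared directly.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

img = cv2.imread("defect.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is not None:
    print(hu_moment_descriptor(img))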