AdaFusion: Visual-LiDAR Fusion With Adaptive Weights for Place Recognition
2022
DOI: 10.1109/lra.2022.3210880

Cited by 23 publications (7 citation statements)
References 29 publications

“…[table header: Recall@1, Recall@5, Recall@10, max F] In the following experiments, we compare the performance of our LCPR with state-of-the-art baselines, including the vision-based method NetVLAD [6], the radar-based method AutoPlace [38], the LiDAR-based methods PointNetVLAD [10] and OverlapTransformer [2], and multi-modal methods including MinkLoc++ [13], PIC-Net [14], and AdaFusion [12]. We use the released open-source code of the baseline methods for evaluation, except for PIC-Net [14], which we implemented ourselves according to its original paper.…”
Section: Methods (citation type: mentioning)
confidence: 99%
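The table header preserved in the excerpt above indicates that the comparison is scored with Recall@N retrieval metrics. Below is a minimal sketch of how Recall@N can be computed from precomputed descriptors, assuming a ground-truth set of matching database indices per query; all names and the brute-force distance computation are illustrative and not taken from any of the cited codebases.

```python
import numpy as np

def recall_at_n(query_desc, db_desc, ground_truth, ns=(1, 5, 10)):
    """Fraction of queries whose true match appears among the top-N retrievals.

    query_desc:   (Q, D) array of query descriptors
    db_desc:      (M, D) array of database descriptors
    ground_truth: list of sets; ground_truth[i] holds the database indices
                  that count as correct matches for query i
    """
    # Pairwise Euclidean distances between every query and database descriptor
    dists = np.linalg.norm(query_desc[:, None, :] - db_desc[None, :, :], axis=-1)
    ranked = np.argsort(dists, axis=1)  # ascending: closest database entry first
    recalls = {}
    for n in ns:
        hits = sum(bool(set(ranked[i, :n]) & ground_truth[i])
                   for i in range(len(query_desc)))
        recalls[n] = hits / len(query_desc)
    return recalls

# Toy usage: 100 random queries against a 1,000-entry database of 256-D descriptors
gt = [{i % 1000} for i in range(100)]
print(recall_at_n(np.random.rand(100, 256), np.random.rand(1000, 256), gt))
```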
“…Pointwise image features are first extracted using the method described in [34]; then point cloud features and pointwise image features are aggregated by a convolution layer. To better exploit the discriminative power of descriptors from different modalities, Lai et al. [12] (AdaFusion) design an attention branch in their network to weight the modalities adaptively.…”
Section: Fusion-based Place Recognition (citation type: mentioning)
confidence: 99%
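The excerpt above describes AdaFusion's attention branch that adaptively weights the image and point cloud modalities before fusion. The following is a minimal PyTorch sketch of that idea, assuming both modality features are already extracted as fixed-length vectors; the module name, dimensions, and the exact attention design are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightFusion(nn.Module):
    """Fuses an image feature and a point cloud feature into one place
    descriptor, using learned, input-dependent weights per modality."""

    def __init__(self, img_dim=256, pc_dim=256, out_dim=256):
        super().__init__()
        # Small attention branch predicting one weight per modality
        self.attn = nn.Sequential(
            nn.Linear(img_dim + pc_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),
            nn.Softmax(dim=-1),
        )
        self.fuse = nn.Linear(img_dim + pc_dim, out_dim)

    def forward(self, img_feat, pc_feat):
        # img_feat: (B, img_dim), pc_feat: (B, pc_dim)
        joint = torch.cat([img_feat, pc_feat], dim=-1)
        w = self.attn(joint)                                   # (B, 2) adaptive weights
        fused = torch.cat([w[:, :1] * img_feat,
                           w[:, 1:] * pc_feat], dim=-1)
        return F.normalize(self.fuse(fused), dim=-1)           # unit-norm descriptor

# Toy usage: a batch of 4 image/point-cloud feature pairs
fusion = AdaptiveWeightFusion()
descriptor = fusion(torch.randn(4, 256), torch.randn(4, 256))  # shape (4, 256)
```

With unit-normalized outputs, place similarity between a query and a database entry can then be scored with cosine similarity or Euclidean distance.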
“…The final global descriptor is formed by concatenating the 2D image descriptor and the 3D point cloud descriptor. AdaFusion (Lai, Yin, and Scherer 2022) leverages a multi-scale attention module that hierarchically aggregates multi-modal features.…”
Section: Visual Place Recognition (citation type: mentioning)
confidence: 99%
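This excerpt characterizes AdaFusion's attention as multi-scale and hierarchical: spatial weights are predicted at several resolutions of a feature map and the attended features are aggregated into one descriptor. A rough sketch of that pattern follows, again with illustrative names, shapes, and pooling choices rather than the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentionPool(nn.Module):
    """Predicts a spatial attention map at several scales of a feature map and
    aggregates the attended features into a single descriptor (sketch only)."""

    def __init__(self, channels=256, scales=(1, 2, 4), out_dim=256):
        super().__init__()
        self.scales = scales
        # One 1x1 convolution per scale that scores every spatial location
        self.attn = nn.ModuleList([nn.Conv2d(channels, 1, kernel_size=1)
                                   for _ in scales])
        self.proj = nn.Linear(channels * len(scales), out_dim)

    def forward(self, feat):
        # feat: (B, C, H, W) feature map from an image or point cloud backbone
        pooled = []
        for head, s in zip(self.attn, self.scales):
            f = F.avg_pool2d(feat, kernel_size=s) if s > 1 else feat
            w = torch.softmax(head(f).flatten(2), dim=-1)       # (B, 1, H'*W')
            pooled.append((w * f.flatten(2)).sum(dim=-1))       # (B, C) per scale
        return F.normalize(self.proj(torch.cat(pooled, dim=-1)), dim=-1)

# Toy usage: descriptor from a 32x32 feature map with 256 channels
pool = MultiScaleAttentionPool()
desc = pool(torch.randn(2, 256, 32, 32))  # shape (2, 256)
```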
“…Place recognition provides the current global location of the vehicle within previously seen environments and is an important component of robotic simultaneous localization and mapping (SLAM) and global localization. During online operation, it retrieves the reference scan in the database that is most similar to the current query, either by directly regressing the similarity [1], [2] or by descriptor matching [3], [4], [5]. LiDAR-based place recognition methods [6], [7], [8] can be applied to large-scale outdoor environments thanks to their robustness to illumination and weather changes.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
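The excerpt above frames online place recognition as retrieving the database scan whose descriptor best matches the current query. A minimal sketch of that retrieval step using a KD-tree over precomputed descriptors is shown below; the descriptor source, dimensionality, and revisit threshold are assumptions made purely for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Assume every scan has already been encoded into a fixed-length descriptor,
# e.g. by a single- or multi-modal network such as those discussed above.
db_descriptors = np.random.rand(10_000, 256).astype(np.float32)   # mapped places
query_descriptor = np.random.rand(256).astype(np.float32)         # current scan

tree = cKDTree(db_descriptors)                  # built once, offline
dist, idx = tree.query(query_descriptor, k=5)   # top-5 candidate places

# idx[0] is the most similar previously seen place; a distance threshold can
# reject queries that do not correspond to any mapped place (the threshold
# value here is purely illustrative).
is_revisit = dist[0] < 0.8
print(idx[0], is_revisit)
```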