Proceedings of the 2020 International Conference on Multimedia Retrieval
DOI: 10.1145/3372278.3390693
DAGC: Employing Dual Attention and Graph Convolution for Point Cloud based Place Recognition

Cited by 41 publications (30 citation statements); References 35 publications.

“…Unfortunately, the method suffers from PointNet's weakness in capturing high-level features. Therefore, many solutions like [12], [13], [15] focus on the data representation problem, leaving the NetVLAD part intact. PCAN [12] improves PointNet by estimating the significance of each point.…”
Section: B. 3D LiDAR Place Recognition (mentioning)
confidence: 99%
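The statement above refers to the common PointNetVLAD-style pipeline (PointNet local features followed by NetVLAD aggregation) and to PCAN's per-point significance weighting. Below is a minimal PyTorch sketch of that idea; the layer widths, the toy backbone, and the max-pool stand-in for NetVLAD are assumptions for illustration, not the published PCAN or PointNetVLAD code.

```python
import torch
import torch.nn as nn

class PointwiseAttention(nn.Module):
    """PCAN-style idea: score the significance of each point's local feature
    with a small MLP and re-weight the features before global aggregation."""
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv1d(feat_dim, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feats):              # feats: (B, C, N) per-point features
        weights = self.score(feats)        # (B, 1, N), one weight per point
        return feats * weights             # significance-weighted features

# Toy stand-ins (assumptions) for the parts the citing papers leave "intact":
# a PointNet-style per-point encoder and a global aggregator.
backbone = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(), nn.Conv1d(64, 1024, 1))
attention = PointwiseAttention(feat_dim=1024)

points = torch.randn(2, 3, 4096)            # batch of 2 clouds, 4096 points
local = backbone(points)                    # (2, 1024, 4096) per-point features
weighted = attention(local)                 # re-weighted by estimated importance
descriptor = weighted.max(dim=2).values     # (2, 1024) global descriptor
# A real pipeline would replace the max-pool with NetVLAD aggregation.
```
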
“…PCAN [12] improves PointNet by estimating the significance of each point. DAGC [15] uses a graph CNN to combine information at multiple scales. LPD-NET [13] computes hand-crafted features, which are later processed using a pipeline similar to the PointNet architecture.…”
Section: B. 3D LiDAR Place Recognition (mentioning)
confidence: 99%
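The DAGC statement attributes its multi-scale feature combination to a graph CNN. The sketch below illustrates that general mechanism with an EdgeConv-style graph convolution over k-nearest-neighbour graphs of different sizes; it is an assumed illustration of "combining information at multiple scales", not the published DAGC architecture.

```python
import torch
import torch.nn as nn

def knn_graph(x, k):
    """Indices of the k nearest neighbours of each point (x: (B, C, N))."""
    dist = torch.cdist(x.transpose(1, 2), x.transpose(1, 2))   # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop self

class EdgeConv(nn.Module):
    """EdgeConv-style graph convolution: each point aggregates features over
    its k-NN graph, so the receptive field grows with k (the 'scale')."""
    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Conv2d(2 * in_dim, out_dim, 1), nn.ReLU())

    def forward(self, x):                                   # x: (B, C, N)
        B, C, N = x.shape
        idx = knn_graph(x, self.k)                          # (B, N, k)
        neigh = torch.gather(
            x.unsqueeze(2).expand(B, C, N, N), 3,
            idx.unsqueeze(1).expand(B, C, N, self.k))       # (B, C, N, k)
        center = x.unsqueeze(3).expand_as(neigh)
        edge = torch.cat([center, neigh - center], dim=1)   # edge features
        return self.mlp(edge).max(dim=3).values             # (B, out, N)

# Hypothetical multi-scale combination: graph convolutions with different
# neighbourhood sizes, concatenated per point (an assumption, not DAGC's code).
x = torch.randn(2, 64, 1024)
multi_scale = torch.cat([EdgeConv(64, 64, k)(x) for k in (10, 20, 40)], dim=1)
```
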
“…There are two main challenges in performing visual place recognition based on the differences between scenes: one is the appearance change caused by illumination conditions and seasonal changes, and the other is the viewpoint change caused by revisiting a place from different viewpoints [1]. In the VPR literature, various feature extraction methods have been developed for visual place recognition, including deep convolutional feature-based methods [6][7][8][9][10][11][12][13], handcrafted feature-based methods [2,18], semantic information-based methods [19][20][21][22][23][24][25], sequence-based methods [26,27], and graph-based methods [19,20,[28][29][30][31][32]. Overall, most of these studies focus on the image processing module of the visual place recognition system, which aims to extract and describe features that are robust under the different challenging conditions mentioned above.…”
Section: Visual Place Recognition (mentioning)
confidence: 99%
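Whichever feature family from the taxonomy above a VPR system uses, recognition itself is typically cast as retrieval: match a query descriptor against the descriptors of previously visited places. A purely illustrative NumPy sketch of that nearest-neighbour step (random data and made-up dimensions, not any cited method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Descriptors of 1000 previously mapped places, L2-normalised so that the dot
# product is cosine similarity.
database = rng.standard_normal((1000, 256))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# Descriptor of the current query observation.
query = rng.standard_normal(256)
query /= np.linalg.norm(query)

scores = database @ query                  # similarity to every mapped place
best = int(np.argmax(scores))              # retrieved place hypothesis
print(f"query matched place {best} (similarity {scores[best]:.3f})")
```
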
“…With the recent monumental innovations in sensor technology, a wide variety of DL-based 3D object [25][26][27][28] and place recognition approaches [29][30][31] have been developed for different types of sensors. LiDAR and camera are two frequently used and increasingly popular sensors [32] that have been employed for object and place recognition in robotic systems.…”
Section: Introduction (mentioning)
confidence: 99%