2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8461146

Assigning Visual Words to Places for Loop Closure Detection

Cited by 64 publications (48 citation statements)
References 21 publications
“…To measure the correctness of our results, we compare them with a provided ground-truth (GT). GT is a binary matrix whose rows and columns correspond to images at different timestamps (Tsintotas et al., 2018). When GT_ij = 1, there is a loop closure event.…”
Section: Ground-truth (mentioning)
confidence: 99%
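
As an aside for readers unfamiliar with this evaluation protocol: scoring a detector against such a binary ground-truth matrix reduces to counting matrix entries. The following is a minimal NumPy sketch, not code from the cited papers; the function name and the precision/recall formulation are illustrative assumptions.

```python
import numpy as np

def loop_closure_precision_recall(detections, ground_truth):
    """Score a binary loop-closure matrix against the ground truth,
    where ground_truth[i, j] = 1 marks a loop closure between the
    images acquired at timestamps i and j (illustrative helper)."""
    det = detections.astype(bool)
    gt = ground_truth.astype(bool)

    tp = np.logical_and(det, gt).sum()    # correctly reported loops
    fp = np.logical_and(det, ~gt).sum()   # reported loops absent from GT
    fn = np.logical_and(~det, gt).sum()   # GT loops the detector missed

    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return precision, recall
```
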
“…Considering the approach of creating a BoVW, these appearance-based methods can be divided into two categories: off-line and on-line (Tsintotas et al., 2018).…”
Section: Related Work (mentioning)
confidence: 99%
“…Other approaches that belong to this category are real-time appearance-based mapping (RTAB-Map) using SURF features [19] and the appearance-based loop closure detection approach using Incremental Bags of Binary Words (iBoW-LCD) [20]. Tsintotas et al. [21] presented a real-time loop closure detection approach based on the on-line clustering of Visual Words (VWs), without the requirement of a pre-training procedure.…”
Section: A Hand-crafted Representation for Loop Closure Detection (mentioning)
confidence: 99%
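
To give a concrete feel for what on-line visual-word clustering without pre-training can look like, here is a hedged sketch of an incremental vocabulary that merges each new descriptor into its nearest word or spawns a new one. The class name, the running-mean update, and the `radius` threshold are assumptions made for this example and are not the exact procedure of [21].

```python
import numpy as np

class IncrementalVocabulary:
    """Toy on-line vocabulary: descriptors (1-D NumPy float vectors)
    are merged into the nearest existing word when closer than
    `radius`, otherwise they seed a new word (illustrative only)."""

    def __init__(self, radius=0.4):
        self.radius = radius
        self.words = []    # running mean descriptor of each word
        self.counts = []   # number of descriptors merged per word

    def assign(self, descriptor):
        descriptor = np.asarray(descriptor, dtype=float)
        if self.words:
            dists = np.linalg.norm(np.asarray(self.words) - descriptor, axis=1)
            nearest = int(np.argmin(dists))
            if dists[nearest] < self.radius:
                # update the running mean of the matched word
                self.counts[nearest] += 1
                self.words[nearest] += (descriptor - self.words[nearest]) / self.counts[nearest]
                return nearest
        self.words.append(descriptor)
        self.counts.append(1)
        return len(self.words) - 1
```
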
“…requires a re-training step whenever the environment changes with regard to the available, pre-trained visual vocabulary. To overcome this shortcoming, our proposal adopts an incremental dictionary-based approach [1]-[3], [11], [12] that avoids pre-training. Furthermore, to address the unavoidable spatial verification process for loop hypothesis validation, our solution relies only on 2D image data, contrary to other studies that require 3D information supplied by either a stereo camera or a previous mapping process [6], [7].…”
Section: Introduction (mentioning)
confidence: 99%
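
Since the excerpt highlights spatial verification from 2D image data alone, it may help to see how such a check is commonly realized: match local features between the query and the candidate image and keep the loop hypothesis only if a RANSAC-estimated epipolar geometry is supported by enough inliers. The sketch below uses OpenCV's ORB and `cv2.findFundamentalMat`; the detector choice, thresholds, and `min_inliers` are assumptions for illustration, not the cited solution.

```python
import cv2
import numpy as np

def verify_loop_hypothesis(img_query, img_candidate, min_inliers=20):
    """2D-only spatial verification of a loop-closure candidate pair.
    Inputs are grayscale uint8 images; returns True if a RANSAC-fitted
    fundamental matrix is supported by at least `min_inliers` matches."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(img_query, None)
    kp2, des2 = orb.detectAndCompute(img_candidate, None)
    if des1 is None or des2 is None:
        return False

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 8:                      # need >= 8 correspondences for RANSAC
        return False

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    return mask is not None and int(np.count_nonzero(mask)) >= min_inliers
```
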