2022
DOI: 10.1007/s10514-021-10032-7
Appearance-based loop closure detection combining lines and learned points for low-textured environments

Abstract: Hand-crafted point descriptors have traditionally been used for visual loop closure detection. However, in low-textured environments, it is usually difficult to find enough point features and, hence, the performance of such algorithms degrades. In this context, this paper proposes a loop closure detection method that combines lines and learned points to work particularly in scenarios where hand-crafted points fail. To index previous images, we adopt separate incremental binary Bag-of-Words (BoW) schemes fo…

Cited by 6 publications (5 citation statements)
References 63 publications (108 reference statements)
“…Traditional visual LCD solutions calculate global features (Dalal & Triggs, 2005; Jegou et al., 2010; Siagian & Itti, 2009; Ulrich & Nourbakhsh, 2000) to embed the whole image into a single matrix/vector, or extract hand-crafted local features (Bay et al., 2008; Lowe, 2004; Rublee et al., 2011) to detect massive numbers of salient key points while describing image patches with descriptors as a compact representation, such as SURF (Bay et al., 2008) in FAB-MAP (Cummins & Newman, 2008) and FAB-MAP 2.0 (Cummins & Newman, 2011), and ORB (Rublee et al., 2011) in DLoopDetector (Gálvez-López & Tardós, 2012). Some recent works (Company-Corcoles et al., 2020; Han et al., 2021) also utilize line features to enhance LCD methods. In recent years, motivated by the success of deep CNNs in other computer vision tasks, many novel learned features (Arandjelović et al., 2018; DeTone et al., 2018; Dusmanu et al., 2019; Noh et al., 2017; Sarlin et al., 2019) have been proposed and have shown greater robustness to varying illumination and viewpoints than their traditional counterparts.…”
Section: Related Work
confidence: 99%
“…The BoW model generally treats an image as a set of visual words, associating local features with visual words from a trained visual vocabulary so that an image can be compactly represented by a statistical histogram of visual words. Some BoW-based LCD methods (Company-Corcoles et al., 2020; Garcia-Fidalgo & Ortiz, 2018; Khan & Wollherr, 2015; Nicosevici & Garcia, 2012; Tsintotas et al., 2019), which build vocabularies in an online or incremental manner, do not need a prior training stage in which massive numbers of features are clustered to construct a vocabulary. This strategy adapts easily to varying scenes, but updating the vocabularies in real time incurs a large computational cost.…”
Section: Related Work
confidence: 99%
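To make the histogram representation concrete, here is a minimal Python sketch of the offline-trained BoW variant described in this statement: ORB descriptors are clustered into a vocabulary of visual words, and each image is then summarized as a histogram of word occurrences. The function names and parameters are illustrative rather than taken from any cited work, and the sketch uses L2 distance over float-cast descriptors where a production system for binary ORB descriptors would use Hamming distance.

```python
import numpy as np
import cv2  # OpenCV, for ORB feature extraction

def build_vocabulary(descriptors, k=500, iters=20, seed=0):
    """Cluster stacked ORB descriptors (cast to float) into k visual
    words with plain k-means. Offline vocabularies are trained once
    like this; the incremental schemes cited above instead update
    the words as new images arrive."""
    rng = np.random.default_rng(seed)
    data = descriptors.astype(np.float32)
    words = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every descriptor to its nearest word (L2 distance)
        dists = np.linalg.norm(data[:, None, :] - words[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = data[labels == j]
            if len(members):
                words[j] = members.mean(axis=0)
    return words

def bow_histogram(image, vocabulary, orb=None):
    """Represent one image as an L1-normalized histogram of visual words."""
    orb = orb or cv2.ORB_create(nfeatures=1000)
    _, desc = orb.detectAndCompute(image, None)
    if desc is None:  # low-textured frame: no key points found
        return np.zeros(len(vocabulary), dtype=np.float32)
    dists = np.linalg.norm(desc.astype(np.float32)[:, None, :]
                           - vocabulary[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(vocabulary))
    return hist.astype(np.float32) / max(hist.sum(), 1)
```

An incremental scheme such as those cited would additionally add or merge words as new descriptors arrive, which is exactly where the real-time update cost mentioned above comes from.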
“…SLAM can be classified into visual SLAM [1][2][3][4][5] and laser SLAM [6][7][8][9][10][11], depending on which sensor is used to perceive the environment. Both lidar and cameras play important roles in autonomous vehicles, but lidar is expensive.…”
Section: Introduction
confidence: 99%
“…Mainstream algorithms for loop closure detection are appearance-based. Among them, the visual bag-of-words (BoW) model built on feature-point SLAM is the most commonly used [2]. Borrowing the concept of a dictionary, the elements identified by the BoW model are regarded as words.…”
Section: Introduction
confidence: 99%
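As a rough illustration of how such BoW histograms turn into appearance-based loop closure detection, the sketch below scores the current frame against all sufficiently old frames and proposes the most similar ones as loop candidates. The threshold, the exclusion window, and the function name are assumptions chosen for illustration, not the method of the paper under discussion.

```python
import numpy as np

def detect_loop_candidates(query_hist, past_hists, min_gap=50, threshold=0.8):
    """Return indices of past frames whose BoW histograms are similar
    enough to the current frame to propose a loop closure.

    query_hist : (k,)   BoW histogram of the current frame
    past_hists : (n, k) histograms of frames 0..n-1
    min_gap    : skip the most recent frames so the robot does not
                 "close a loop" with images it has just seen
    threshold  : cosine similarity required to propose a candidate
    """
    n = len(past_hists)
    if n <= min_gap:
        return []
    old = np.asarray(past_hists[:n - min_gap], dtype=np.float32)
    q = np.asarray(query_hist, dtype=np.float32)
    sims = old @ q / (np.linalg.norm(old, axis=1) * np.linalg.norm(q) + 1e-12)
    # candidates sorted from most to least similar
    return [int(i) for i in np.argsort(-sims) if sims[i] >= threshold]
```

In a complete pipeline, surviving candidates would still pass a geometric verification step (e.g., epipolar consistency between matched features) before a loop closure is accepted.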