2021
DOI: 10.48550/arxiv.2103.13090
Preprint
Greedy-Based Feature Selection for Efficient LiDAR SLAM

Abstract: Modern LiDAR-SLAM (L-SLAM) systems have shown excellent results in large-scale, real-world scenarios. However, they commonly have a high latency due to the expensive data association and nonlinear optimization. This paper demonstrates that actively selecting a subset of features significantly improves both the accuracy and efficiency of an L-SLAM system. We formulate the feature selection as a combinatorial optimization problem under a cardinality constraint to preserve the information matrix's spectral attrib…

Cited by 3 publications (2 citation statements)
References 42 publications
“…As reported in [41], the approximation ratio of the greedy approach was proven to be 1 − 1/e; thus the lazier greedy reaches a (1 − 1/e − ε) approximation guarantee in expectation of the optimal solution. Let f(·) be a non-negative, monotone, submodular function, and let s be the size of the random set. Differently from the adaptive adjustment of M_K with online degeneracy evaluation in [42], we set a constant ratio of all features, M_K = 0.1 F_K, for time efficiency. Let G*_K be the optimal set and G^sub_K the result of the lazier-greedy algorithm; G^sub_K achieves the approximation guarantee in expectation:…”
Section: Feature Selection
Confidence: 99%
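The lazier (stochastic) greedy procedure described in the statement above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `score` callback, and the feature representation are all assumptions; the only fixed ingredients are the cardinality constraint and the random-subset sampling that yields the (1 − 1/e − ε) guarantee in expectation.

```python
import math
import random

def lazier_greedy(features, score, k, eps=0.1):
    """Stochastic ('lazier') greedy maximization of a non-negative,
    monotone, submodular `score` under the cardinality constraint |S| <= k.

    Instead of scanning every remaining candidate at each step, only a
    random subset of size s = ceil((n / k) * ln(1 / eps)) is evaluated,
    which keeps the (1 - 1/e - eps) approximation guarantee in expectation.
    All names here are illustrative, not the cited paper's API.
    """
    n = len(features)
    s = max(1, math.ceil((n / k) * math.log(1.0 / eps)))
    selected, remaining = [], list(features)
    for _ in range(k):
        if not remaining:
            break
        # Evaluate marginal gains only on a random sample of candidates.
        sample = random.sample(remaining, min(s, len(remaining)))
        base = score(selected)
        best = max(sample, key=lambda x: score(selected + [x]) - base)
        selected.append(best)
        remaining.remove(best)
    return selected
```

As a sanity check, a set-coverage score (size of the union of selected sets) is monotone and submodular, so it is a valid stand-in for the information-theoretic objective used in the paper.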
“…The Mutual Information [12] is often used in filter methods to measure the information between a given feature and the desired label [22]. As exhaustive feature-selection search is typically intractable, greedy feature-selection algorithms are often used [9], [21], [8]. Note that greedy feature selection is related to matching pursuit in the sparse-approximation literature [20] and has applications in compressed sensing [2].…”
Section: Feature Selection
Confidence: 99%
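The filter-style greedy selection described in the statement above can be sketched with a plug-in mutual-information estimate for discrete data. This is an illustrative assumption of how such a filter method might look, not the approach of any cited reference; real systems (e.g. mRMR) would also penalize redundancy among already-selected features rather than rank by marginal MI alone.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats for two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts rescaled by n.
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def greedy_mi_selection(columns, labels, k):
    """Greedy filter selection: repeatedly add the feature column with the
    highest mutual information with the labels. `columns` maps a feature
    name to its list of discrete values; names here are hypothetical."""
    chosen = []
    remaining = dict(columns)
    for _ in range(min(k, len(remaining))):
        best = max(remaining,
                   key=lambda name: mutual_information(remaining[name], labels))
        chosen.append(best)
        del remaining[best]
    return chosen
```

For example, a feature identical to the labels has MI = ln 2 per sample with binary balanced labels, while a constant feature has MI = 0, so the greedy pass picks the informative one first.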