2023
DOI: 10.1016/j.eswa.2022.119068
“Focusing on the right regions” — Guided saliency prediction for visual SLAM

Cited by 2 publications (1 citation statement)
References 31 publications
“…To some extent, efficiency is sacrificed to enhance the robustness and accuracy of VO/VSLAM systems. To this end, it is vital to strike a balance between accuracy and efficiency while deploying deep learning-based features into VO/VSLAM systems [23]- [25], especially for UAV platforms with limited payload capacity. To the best of our knowledge, the GCNv2tiny based SLAM [26] is the only learned feature-based VSLAM system that achieves real-time performance on the most popular UAV onboard computing platform, the Nvidia Jetson TX2.…”
Section: Introduction (mentioning)
confidence: 99%