2022
DOI: 10.1109/jsen.2021.3101370
Visual-Inertial RGB-D SLAM With Encoders for a Differential Wheeled Robot

Cited by 11 publications (5 citation statements) · References 19 publications
“…As shown in Fig. 4–Fig. 6, our proposed method achieves better accuracy than ORBSLAM3 and the VEOS2 method of [18]. This is because we optimize the camera trajectory using joint constraints derived from the different sensor data, together with planar constraints. The average running time is slightly higher than that of the other two methods, since our method must also process the encoder data, but it still meets real-time requirements.…”
Section: Analyze
confidence: 77%
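The joint-constraint idea in the statement above can be illustrated with a toy example. The sketch below is hypothetical and not taken from the cited paper: it fuses relative-motion measurements from two sensors (e.g. visual odometry and wheel encoders) plus a weak planar prior by minimizing a weighted least-squares objective over a 1-D trajectory with gradient descent; all function names and weights are illustrative assumptions.

```python
def fuse_trajectory(vo_deltas, enc_deltas, w_vo=1.0, w_enc=1.0,
                    w_plane=0.1, iters=1000, lr=0.05):
    """Estimate poses x[0..n] (x[0] anchored at the origin) minimizing
    sum_i [ w_vo*(x[i+1]-x[i]-vo_i)^2 + w_enc*(x[i+1]-x[i]-enc_i)^2 ]
    + w_plane * sum_i x[i]^2  (a weak prior pulling poses toward the plane)."""
    n = len(vo_deltas)
    x = [0.0] * (n + 1)
    for _ in range(iters):
        grad = [0.0] * (n + 1)
        # relative-motion constraints from both sensors
        for i in range(n):
            for w, d in ((w_vo, vo_deltas[i]), (w_enc, enc_deltas[i])):
                r = x[i + 1] - x[i] - d      # residual of one constraint
                grad[i + 1] += 2.0 * w * r
                grad[i] -= 2.0 * w * r
        # planar prior on every non-anchored pose
        for i in range(1, n + 1):
            grad[i] += 2.0 * w_plane * x[i]
        # gradient step; x[0] stays fixed as the anchor
        for i in range(1, n + 1):
            x[i] -= lr * grad[i]
    return x

# Two slightly disagreeing sensors; the fused trajectory averages them,
# shrunk a little toward the plane by the prior.
poses = fuse_trajectory([1.0, 1.1, 0.9], [1.05, 1.0, 1.0])
# poses ≈ [0.0, 0.78, 1.63, 2.45]
```

In a real system each constraint type would carry a covariance-derived weight and the problem would be solved with a sparse nonlinear least-squares backend rather than plain gradient descent; the structure of the objective is the point here.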
“…We use the rosbag tool in the Robot Operating System (ROS) to load the dataset. Table 1 and Table 2 show the experimental results of our proposed method at different frequencies and compare them with the methods proposed in [11], [17], [18]. Global BA is performed after SLAM is executed.…”
Section: Experimental Data and Results
confidence: 99%
“…Robotic grasping has made great progress in recent years [5]. The development of low-cost red-green-blue and depth (RGB-D) sensors has led to more and more applications of depth information in robotics [6], [7], [8]. However, most advanced grasp detection algorithms rely heavily on depth information from these sensors, so they cannot be applied to transparent and reflective objects, for which RGB-D sensors cannot construct a complete depth image.…”
Section: Introduction
confidence: 99%