2022
DOI: 10.1109/lra.2022.3150844

STEP: State Estimator for Legged Robots Using a Preintegrated Foot Velocity Factor

Cited by 21 publications (11 citation statements)
References 24 publications

Citation statements
“…Two robots collected datasets in several outdoor environments while traveling over 1.5 km with an average velocity of 0.5 m/s. Note that our robots move at a much faster speed than prior works (for example, [9] is 0.125 m/s and [4] is 0.25 m/s). In each dataset, the robot moves in a large loop and we evaluate the final position estimation drift after the robot returns to the starting point.…”
Section: B. Outdoor Experiments (mentioning)
confidence: 99%
“…In each dataset, the robot moves in a large loop and we evaluate the final position estimation drift after the robot returns to the starting point. We also note that 1% drift is equivalent to 0.1m of the 10M Relative Translation Error (RTE) metric used in [4] and [9]. Details of datasets can be found in the open-source code base.…”
Section: B. Outdoor Experiments (mentioning)
confidence: 99%
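The two drift metrics quoted above convert directly into one another. A minimal sketch in Python of the arithmetic implied by the passage (function names are hypothetical, introduced only for illustration):

```python
import numpy as np

# Final-loop drift: position error after the robot returns to its
# starting point, as a percentage of the total distance traveled.
def loop_drift_percent(final_xyz, start_xyz, path_length_m):
    error_m = np.linalg.norm(np.asarray(final_xyz) - np.asarray(start_xyz))
    return 100.0 * error_m / path_length_m

# Equivalent error in the 10 m Relative Translation Error (RTE)
# window used by [4] and [9]: 1% drift = 0.1 m per 10 m traveled.
def drift_percent_to_rte_10m(drift_percent):
    return drift_percent / 100.0 * 10.0

print(drift_percent_to_rte_10m(1.0))  # 0.1
```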
“…State estimation from only leg odometry and IMU such as in [1], [8], [9], [10] has limitations in observability of state variables such as yaw rotation or absolute position in a world reference frame. To this end, several approaches combine proprioceptive and IMU measurements with exteroceptive sensors such as vision [11], [12], [13], [14], [15], LiDAR [16], or both [4], [17]. Vision sensors are particularly lightweight compared to LiDARs.…”
Section: Related Work (mentioning)
confidence: 99%
“…The approach uses visual odometry estimates as relative pose factors. Kim et al [15] tightly integrate visual keypoint depth estimation with inertial measurement and preintegrated leg velocity factors. Our approach integrates absolute yaw and position measurements by the VIO, while height drift of the VIO wrt.…”
Section: Related Work (mentioning)
confidence: 99%
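For context, the preintegrated leg (foot) velocity factor referenced in the last two statements accumulates contact-foot velocity measurements into a single relative translation constraint between keyframes. A minimal illustrative sketch, assuming simple Euler integration and hypothetical names (not the authors' implementation):

```python
import numpy as np

def preintegrate_foot_velocity(rotations, body_velocities, dt):
    """Accumulate Delta_p = sum_k R_k @ v_k * dt over one window.

    rotations:       body-to-world rotation matrices R_k (3x3),
                     e.g. from IMU orientation preintegration
    body_velocities: body-frame velocity estimates v_k (3,), inferred
                     from leg kinematics while a foot is in contact
    dt:              sampling interval in seconds
    """
    delta_p = np.zeros(3)
    for R_k, v_k in zip(rotations, body_velocities):
        delta_p += R_k @ v_k * dt
    # delta_p serves as a single relative position measurement
    # (factor) between the two states bounding the window.
    return delta_p
```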