2021
DOI: 10.1109/lra.2021.3066375
RigidFusion: Robot Localisation and Mapping in Environments With Large Dynamic Rigid Objects

Cited by 21 publications (25 citation statements)
References 23 publications
“…A similar effect occurs when multiple dynamic targets appear in the visual field at the same time, resulting in large areas of invalid visual features. Introducing an ego-motion prior is a promising approach to cope with such rigid-object occlusion problems [26].…”
Section: Discussion
confidence: 99%
“…Robot proprioception, such as IMU and wheel odometry, can be fused with visual sensors to improve the accuracy and robustness of localisation in dynamic environments [16], [17]. Kim et al. [16] use the camera motion prior from an IMU to compensate for the camera motion and select static keypoints based on motion vectors.…”
Section: Related Work
confidence: 99%
“…Kim et al. [16] use the camera motion prior from an IMU to compensate for the camera motion and select static keypoints based on motion vectors. RigidFusion (RF) [17] uses both camera and object motion priors to simultaneously reconstruct the static background and one rigid dynamic object when a major part of the camera view is occluded. However, both of these methods are unable to track multiple dynamic objects independently.…”
Section: Related Work
confidence: 99%
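The motion-compensation step described in the statement above (warping keypoints by an IMU-derived camera-motion prior and keeping those whose residual motion vector is small) can be sketched as follows. This is an illustrative reconstruction, not the implementation of [16]; the intrinsics, threshold, and function name are assumptions.

```python
import numpy as np

# Hypothetical pinhole intrinsics (illustrative values only).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

def classify_static_keypoints(pts_prev, depths_prev, pts_curr, R, t, thresh_px=3.0):
    """Flag keypoints whose observed motion matches the camera-motion prior.

    pts_prev, pts_curr: (N, 2) matched pixel coordinates at frames t-1 and t.
    depths_prev: (N,) depths of the previous keypoints.
    R, t: camera-motion prior (e.g. integrated from an IMU).
    Returns a boolean mask: True = consistent with ego-motion (likely static).
    """
    # Back-project previous keypoints to 3D camera coordinates.
    ones = np.ones((pts_prev.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([pts_prev, ones]).T).T
    X_prev = rays * depths_prev[:, None]
    # Transform by the motion prior and re-project into the current frame.
    X_pred = (R @ X_prev.T).T + t
    uv_pred = (K @ X_pred.T).T
    uv_pred = uv_pred[:, :2] / uv_pred[:, 2:3]
    # Keypoints whose residual motion vector is small are likely static;
    # large residuals indicate independently moving objects.
    residual = np.linalg.norm(uv_pred - pts_curr, axis=1)
    return residual < thresh_px
```

Keypoints rejected by this mask would then be excluded from (or down-weighted in) the camera-pose estimation.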
“…In a TSDF approach based on [21], MID-Fusion [15] combines the segmentation of [9] with motion cues to reconstruct multiple moving objects. Long et al [22] additionally include motion tracking to reconstruct a single large moving object.…”
Section: B. Object-Centric Mapping
confidence: 99%
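As a concrete reference for the TSDF fusion these object-centric systems build on, here is a minimal sketch of the standard weighted running-average signed-distance update (the Curless and Levoy scheme). The truncation distance, observation weight, and weight cap are illustrative assumptions, not values from MID-Fusion or [22].

```python
import numpy as np

def integrate_tsdf(tsdf, weight, sdf_obs, trunc=0.1, w_obs=1.0, max_weight=64.0):
    """One TSDF integration step over a set of voxels.

    tsdf, weight: current per-voxel truncated signed distances and weights.
    sdf_obs: new signed-distance observations for the same voxels.
    Observations are truncated to [-trunc, trunc], then fused as a
    weighted running average; weights are capped so old geometry can
    still be overwritten by new observations.
    """
    d = np.clip(sdf_obs, -trunc, trunc)
    tsdf_new = (tsdf * weight + d * w_obs) / (weight + w_obs)
    weight_new = np.minimum(weight + w_obs, max_weight)
    return tsdf_new, weight_new
```

Per-object reconstruction, as in MID-Fusion, maintains one such volume per tracked object and integrates each depth frame into the volume of the object it was segmented into.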