2022
DOI: 10.1109/lra.2021.3119379

ROFT: Real-Time Optical Flow-Aided 6D Object Pose and Velocity Tracking

Abstract: 6D object pose tracking has been studied extensively in the robotics and computer vision communities. The most promising solutions, leveraging deep neural networks and/or filtering and optimization, exhibit notable performance on standard benchmarks. However, to the best of our knowledge, these have not been tested thoroughly against fast object motions. Tracking performance degrades significantly in this scenario, especially for methods that do not run in real time and introduce non-negligible delays…

Cited by 16 publications
(5 citation statements)
References 30 publications
“…In addition to those render-and-compare algorithms, PoseRBPF [14] uses a Rao-Blackwellized particle filter and pose-representative latent codes [44]. Also, TP-AE [45] proposed a temporally primed framework with autoencoders, while ROFT [46] synchronizes low-framerate pose estimates with fast optical flow.…”
Section: Related Work
confidence: 99%
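The synchronization idea attributed to ROFT [46] above — propagating a slow, low-framerate pose estimate between frames using fast optical flow — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes a pinhole camera and uses the mean 2D flow over the object's pixels to shift the translation, with the function name and parameters being illustrative:

```python
import numpy as np

def propagate_translation(t, mean_flow, fx, fy):
    """Shift the object's 3D translation in the camera frame using the
    mean 2D optical flow of its pixels, back-projected through a pinhole
    camera model (illustrative sketch, not ROFT's exact update).

    t:         (3,) translation [x, y, z] in metres
    mean_flow: (du, dv) average pixel displacement over the object mask
    fx, fy:    focal lengths in pixels
    """
    du, dv = mean_flow
    z = t[2]
    # A pixel shift of du at depth z corresponds to du * z / fx metres in x.
    return t + np.array([du * z / fx, dv * z / fy, 0.0])

# Object 1 m away, mean flow of 600 px with fx = 600 px -> ~1 m lateral shift.
t_new = propagate_translation(np.array([0.0, 0.0, 1.0]), (600.0, 0.0), 600.0, 600.0)
```

Such a cheap per-frame update keeps the track alive until the next (delayed) network-based pose estimate arrives.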
“…[26] integrates off-the-shelf segmentation convolutional neural networks, multi-hypothesis point cloud registration and a Kalman filter to track the object pose and velocity. A follow-up work [29] leverages real-time optical flow to improve tracking performance under fast motion. In [27], for each RGB-D frame, the segmentation mask of the object of interest is combined with the depth frame to produce a partial point cloud of the object, which is further refined by outlier removal.…”
Section: Related Work
confidence: 99%
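The quote mentions a Kalman filter tracking both object pose and velocity. A generic constant-velocity Kalman filter over position and velocity — a common choice for such trackers, sketched here under that assumption rather than as the exact formulation of [26] — looks like:

```python
import numpy as np

class ConstVelKF:
    """Constant-velocity Kalman filter: state x = [position (3), velocity (3)].
    Illustrative sketch; noise levels q and r are arbitrary defaults."""

    def __init__(self, dt, q=1e-3, r=1e-2):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)      # position += velocity * dt
        self.Q = q * np.eye(6)                # process noise
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.R = r * np.eye(3)                # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = z - self.H @ self.x               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x

# Feed positions moving at 1 m/s along x; velocity is estimated implicitly.
kf = ConstVelKF(dt=0.1)
for k in range(1, 60):
    kf.predict()
    kf.update(np.array([0.1 * k, 0.0, 0.0]))
```

Because velocity is part of the state, the filter outputs a velocity estimate even though only positions are measured — the property that lets such trackers report object velocity alongside pose.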
“…[39] and, recently, Ref. [40] introduced variational and CNN-based methods for flow-aided pose estimation that rely on the brightness-constancy assumption being fulfilled. Nevertheless, an automatic and lighting-robust flow-based pose-estimation method that works correspondence-free, and takes geometrical, textural and coherent scene motion into account, has never been addressed before.…”
Section: Related Work
confidence: 99%
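The brightness-constancy assumption referenced in the quote underlies classical optical flow: each pixel's intensity is assumed unchanged between frames, giving the linearized constraint I_x·u + I_y·v + I_t = 0. A minimal single-window Lucas-Kanade solver — an illustrative sketch of that constraint, not the method of [39] or [40] — can be written as:

```python
import numpy as np

def lucas_kanade_window(I0, I1):
    """Single-window Lucas-Kanade: solve I_x*u + I_y*v = -I_t in the
    least-squares sense over every pixel of a patch. Assumes brightness
    constancy and a small, uniform displacement."""
    Iy, Ix = np.gradient(I0)          # spatial gradients (axis 0 = y, axis 1 = x)
    It = I1 - I0                      # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic smooth patch translated by (0.3, 0) pixels between frames.
ys, xs = np.mgrid[0:32, 0:32]
blob = lambda dx: np.exp(-((xs - 16 - dx) ** 2 + (ys - 16) ** 2) / 40.0)
u, v = lucas_kanade_window(blob(0.0), blob(0.3))
```

When lighting changes between frames, I_t no longer reflects motion alone and the recovered flow degrades — which is precisely the sensitivity the quoted passage criticizes.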