2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9340942
FlowControl: Optical Flow Based Visual Servoing

Cited by 12 publications (6 citation statements). References 30 publications.
“…Furthermore, unlike prior work [35] which iteratively minimizes flow with pick and place actions, or other work [36] which uses optical flow on tactile sensors, we use flow to derive continuous tool motions in 3D space from visual input. A recent work [37] estimates optical flow using RGBD images from the current frame to the demonstration and extracts a transformation to align them. In contrast, we do not use flow for aligning frames to demonstrations but for deriving the transformations the tool should follow.…”
Section: Related Work (mentioning, confidence: 99%)
“…Another set of methods includes approaches that design task-specific controllers (such as for pick and place [2] or cloth folding [11]) and can imitate variations within this task. Similar to our work, FlowControl [1] utilises optical flow for aligning to demonstration frames to complete a task. Unlike our method, however, it relies on manually supplied object segmentations, and its core flow computation does not work well when the deployment frames differ significantly from the demonstration frames, or when there are background flows, which would occur with the addition of distractor objects.…”
Section: Related Work (mentioning, confidence: 99%)
“…[32], [33] attempt to align the robot's current image observation with a goal image, but require task-specific controllers to be manually defined for object interaction, whereas our method can learn from just a demonstration. [34] use visual servoing to directly track a demonstration, but require close initial alignment and only evaluate on tasks with very limited object interaction, whereas our method generalises across a wide task space and can facilitate complex interaction trajectories.…”
Section: Related Work (mentioning, confidence: 99%)
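To connect the last two statements, a visual-servoing loop built on the transform estimate sketched earlier could look roughly as follows: each control step commands a damped fraction of the estimated offset as a relative tool motion, and the loop advances to the next demonstration frame once the residual alignment error is small. The robot and camera interfaces, gains, and thresholds below are hypothetical assumptions, not the API of any of the cited systems.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def servo_to_demonstration(robot, camera, flow_estimator, demo_frames, K,
                           object_mask, gain=0.5, pos_tol=0.005, rot_tol=0.05,
                           max_steps=200):
    """Hypothetical servoing loop: align the live view to each demonstration frame in turn.

    demo_frames: list of (rgb, depth) pairs recorded during the demonstration.
    object_mask: segmentation of the object in the live view, assumed supplied and
                 fixed here for simplicity.
    Uses flow_to_rigid_transform() from the earlier sketch.
    """
    for demo_rgb, demo_depth in demo_frames:
        for _ in range(max_steps):
            live_rgb, live_depth = camera.read()           # assumed RGB-D camera interface
            flow = flow_estimator(live_rgb, demo_rgb)      # dense flow, live -> demo
            R, t = flow_to_rigid_transform(flow, live_depth, demo_depth, K, object_mask)

            # Consider this demonstration frame reached once the residual offset is small.
            rotvec = Rotation.from_matrix(R).as_rotvec()
            if np.linalg.norm(t) < pos_tol and np.linalg.norm(rotvec) < rot_tol:
                break

            # Command a damped fraction of the remaining offset as a relative tool motion.
            robot.move_relative(translation=gain * t, rotation=gain * rotvec)
```

Note that R and t are expressed in the camera frame; turning them into end-effector commands additionally requires the hand-eye calibration, which is omitted here. The loop also depends on the flow being reliable at the first iteration, which is why such servoing needs a reasonably close initial alignment, as the last statement points out.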