Proceedings of IEEE International Conference on Robotics and Automation
DOI: 10.1109/robot.1996.506889
Automatic dismantling integrating optical flow into a machine vision-controlled robot system

Cited by 15 publications (9 citation statements)
References 10 publications
“…While the range of techniques proposed for these purposes is vast, not many of them provide fully automated systems; most of the approaches developed for these tasks rely on overly simple initialization steps [11], cumbersome manual pose initialization procedures [12], or do not consider the issue at all [9].…”
Section: Related Work
confidence: 99%
“…Many of the reported systems use manual pose initialization where the user establishes the correspondence between the model and object features [10,14]. Although there are systems for which this step is performed automatically [13,20,27], the proposed approaches are time-consuming and not appealing for real-time applications.…”
Section: Related Work
confidence: 99%
“…As can be seen, the pose of the target is used as the measurement rather than image features, as is common in the literature [9,13]. An approach similar to the one presented here was considered in Ref.…”
Section: Prediction and Update
confidence: 99%
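The statement above describes filtering directly on the estimated object pose rather than on raw image features. As an illustrative sketch only (not the cited authors' implementation — the motion model, noise values, and 1-D pose coordinate are all assumptions), a constant-velocity Kalman predict/update step with the pose as the measurement could look like this:

```python
import numpy as np

def kalman_pose_step(x, P, z, dt=1.0, q=1e-3, r=1e-2):
    """One predict/update cycle of a constant-velocity Kalman filter.

    The measurement z is the object's pose coordinate itself (as in the
    quoted approach), not an image feature.  Noise levels q, r and the
    scalar pose state are illustrative assumptions.
    """
    # State x = [pose, velocity]; constant-velocity motion model.
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    Q = q * np.eye(2)            # process noise (assumed)
    H = np.array([[1.0, 0.0]])   # the pose is measured directly
    R = np.array([[r]])          # measurement noise (assumed)

    # Prediction step.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update step with the pose measurement.
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a pose moving at a constant 0.5 units per frame.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 50):
    z = np.array([0.5 * k])          # synthetic pose "measurement"
    x, P = kalman_pose_step(x, P, z)
```

After a few dozen frames the state estimate settles onto the true pose and velocity; using the pose as measurement keeps the update linear, which is one practical attraction of the scheme the quote describes.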
“…Many similar systems use manual pose initialization where the correspondence between the model and object features is given by the user (Giordana et al 2000; Drummond and Cipolla 2000). Although there are systems where this step is performed automatically, the approaches are time-consuming and not appealing for real-time applications (Gengenbach et al 1996; Lowe 1985). One additional problem, in our case, is that the objects to be manipulated by the robot are highly textured (see Figure 9) and therefore not suited for matching approaches based on, for example, line features (Koller, Daniilidis, and Nagel 1993; Vincze, Ayromlou, and Kubinger 1999).…”
Section: Model-based Visual Servoing
confidence: 99%