Proceedings of the 2003 American Control Conference, 2003.
DOI: 10.1109/acc.2003.1244065

Range identification for perspective vision systems

Cited by 31 publications (47 citation statements: 1 supporting, 46 mentioning, 0 contrasting). References 9 publications.
“…2-5 indicate that the proposed observer can be used to identify the range and, hence, the Euclidean coordinates of an object feature moving with affine motion dynamics and a nonaffine PDS, provided the observability conditions are satisfied. These results are comparable to those obtained in [5], which applied the range identification observer to a planar imaging surface.…”
Section: Discussion (supporting)
confidence: 93%
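For context on the setup this excerpt refers to: in planar range identification, the perspective dynamic system takes Euclidean feature coordinates x with affine motion dynamics ẋ = A·x + b, while only the projections y1 = x1/x3 and y2 = x2/x3 are measurable and the range x3 must be recovered by the observer. The sketch below simulates such a system and checks, at each step, the usual observability (persistent-excitation) condition that the coefficient multiplying the unknown inverse depth in the output dynamics does not vanish. The matrix A, vector b, and initial state are illustrative values, not taken from the cited papers, and the check is a sketch of the standard condition rather than of the proposed observer.

```python
import numpy as np

# Affine motion dynamics of a feature point: x_dot = A @ x + b.
# A, b, and the initial state are illustrative values, not from the cited papers.
A = np.array([[0.0, 0.1, 0.0],
              [-0.1, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.array([0.2, 0.1, 0.05])
x = np.array([1.0, -0.5, 4.0])   # x[2] > 0 is the unknown range (depth)

dt, steps = 1e-3, 5000
for _ in range(steps):
    x = x + dt * (A @ x + b)          # forward-Euler integration of the motion

    # Measurable perspective outputs for a planar imaging surface.
    y1, y2 = x[0] / x[2], x[1] / x[2]

    # Usual observability (persistent-excitation) condition for range
    # identification: the coefficient multiplying the unknown inverse depth
    # in the output dynamics must stay away from zero.
    excitation = (b[0] - b[2] * y1) ** 2 + (b[1] - b[2] * y2) ** 2
    assert excitation > 1e-6, "observability condition violated"

print("final projections:", x[0] / x[2], x[1] / x[2], "| true range:", x[2])
```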
“…Remark 3: Once y_4(t) is identified, the complete 3D Euclidean coordinates of the object feature can be determined using (5) and (13). Provided the observability conditions given in (15) and (16) are satisfied, y_4(t) can be identified if ŷ(t) approaches y(t) as t → ∞ (i.e., ŷ_1(t), ŷ_2(t) and ŷ_3(t) approach y_1(t), y_2(t) and y_3(t) as t → ∞), since the parameters (i = 1, 2, 3) are assumed to be known and y_1(t), y_2(t) and y_3(t) are measurable.…”
Section: B. Estimator Design and Error System (mentioning)
confidence: 99%
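The remark above reflects the usual structure of these range observers: the measurable image coordinates fix the direction to the feature point, and the single identified unmeasurable state (the range, or its inverse) scales that direction back to the full 3D Euclidean coordinates. A minimal sketch of that final reconstruction step, assuming the planar relations y1 = x1/x3 and y2 = x2/x3 with an identified inverse depth; the helper name and notation are illustrative, not the cited paper's:

```python
import numpy as np

def euclidean_from_projection(y1, y2, inv_depth_hat):
    """Recover 3D feature coordinates from measured projections and an
    identified inverse depth (hypothetical helper; assumes y1 = x1/x3,
    y2 = x2/x3)."""
    x3 = 1.0 / inv_depth_hat                  # range (depth) of the feature
    return np.array([y1 * x3, y2 * x3, x3])   # full Euclidean coordinates

# Measured image coordinates and a converged observer estimate of 1/x3.
y1, y2, inv_depth_hat = 0.25, -0.125, 0.25    # 1/x3 = 0.25  ->  x3 = 4
print(euclidean_from_projection(y1, y2, inv_depth_hat))   # -> [ 1.  -0.5  4. ]
```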
“…Given the coordinates p_i of a set of feature points at different times, there are numerous methods to solve for the relative displacement of the camera to the points, the motion of the points over time, the motion of the camera over time, etc. These methods include multi-view (epipolar) geometry [19][20][21], Kalman filtering [22][23][24], and nonlinear estimation [25,26]. Using this data in the feedback loop of a control system constitutes vision-based control.…”
Section: Camera Projection Model (mentioning)
confidence: 99%
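The passage above starts from the projection of feature-point coordinates p_i before surveying epipolar, Kalman-filter, and nonlinear-estimation approaches. A minimal sketch of that pinhole projection model; the intrinsic matrix K and the feature points are illustrative values, not taken from the cited work:

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_3d, K):
    """Project N x 3 feature points (camera frame, z > 0) to pixel coordinates."""
    homog = (K @ points_3d.T).T          # homogeneous pixel coordinates
    return homog[:, :2] / homog[:, 2:3]  # divide by depth: perspective projection

feature_points = np.array([[1.0, -0.5, 4.0],
                           [0.2,  0.3, 2.5]])
print(project(feature_points, K))        # one pixel coordinate pair per point
```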