Robot Vision 2010
DOI: 10.5772/9309

A Visual Navigation Strategy Based on Inverse Perspective Transformation

Cited by 3 publications (3 citation statements). References 25 publications.
“…All frames were undistorted to correct the error in the image point position due to the distortion introduced by the lens. For a varied set of scenes differing in light conditions and/or floor texture, the optimum β had a coincident value of 20mm [2]. The window used to find edge pixels near an obstacle point and to track down the obstacle contours is longer in the vertical direction to overcome possible discontinuities in the obstacle vertical borders.…”
Section: Results (mentioning)
confidence: 99%
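As a sketch of the undistortion step described in the statement above, the following uses OpenCV's standard lens-distortion correction. The intrinsic matrix and distortion coefficients here are hypothetical placeholders; the cited work obtains them from an offline camera calibration whose values are not given in the text.

```python
import cv2
import numpy as np

# Hypothetical calibration values, for illustration only.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])        # intrinsic camera matrix
dist = np.array([-0.25, 0.10, 0.0, 0.0, 0.0])  # radial/tangential coefficients

frame = cv2.imread("frame.png")              # one frame from the sequence
undistorted = cv2.undistort(frame, K, dist)  # correct the lens-induced
                                             # shift of image point positions
```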
“…More specifically, once a feature is matched between two frames, we assume it lies on the floor and we backproject the corresponding image points: the resulting world coordinates from both frames must coincide when the hypothesis is true, but they must be different when the feature comes from an elevated scene point. A threshold (β) can be defined as the maximum difference admissible between the backprojections on the floor [2].…”
Section: Inverse Perspective Transformation Feature Detection and Fe… (mentioning)
confidence: 99%
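A minimal sketch of this floor-hypothesis test follows, assuming a pinhole camera with known intrinsics K and a known world-to-camera pose (R, t) for each frame. The function names and the pose inputs are illustrative; only the backprojection comparison against the threshold β (with the 20 mm optimum reported above) is described in the source.

```python
import numpy as np

def backproject_to_floor(uv, K, R, t):
    """Intersect the viewing ray of pixel uv with the floor plane z = 0.

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation.
    Returns the world point (x, y, 0) hit by the ray.
    """
    # Ray direction in world coordinates for homogeneous pixel (u, v, 1).
    d = R.T @ np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    c = -R.T @ t                 # camera centre in world coordinates
    s = -c[2] / d[2]             # scale at which the ray reaches z = 0
                                 # (d[2] ~ 0 would mean a horizon pixel)
    return c + s * d

def on_floor(uv1, uv2, cam1, cam2, beta=0.020):
    """Floor hypothesis: backprojections from both frames must coincide.

    cam1, cam2: (K, R, t) tuples for each frame, with t in metres.
    beta: maximum admissible distance between the two backprojected
    floor points (0.020 m = the 20 mm optimum from the cited work).
    """
    w1 = backproject_to_floor(uv1, *cam1)
    w2 = backproject_to_floor(uv2, *cam2)
    return np.linalg.norm(w1 - w2) <= beta
```

A feature from an elevated scene point violates the z = 0 assumption, so its two backprojections land at different floor coordinates and the test rejects it as an obstacle candidate.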
“…[2] and Ortiz in 2010 [3]. Let 1) I = (u, v) ∈ E² be a flat image in 2D space; 2) W = (x, y, z) ∈ E³ be a 3D image of the real road scene.…”
unclassified
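For reference, the inverse perspective transformation these definitions feed into can be written as follows under the usual ground-plane (z = 0) constraint. This is a generic pinhole-camera formulation consistent with the backprojection test above, not an equation quoted from [2] or [3].

```latex
% Pinhole projection of a world point W onto an image point I = (u, v):
%   \lambda (u, v, 1)^{\top} = K (R W + t).
% Inverting under the floor constraint W = (x, y, 0)^{\top}:
\begin{aligned}
  \mathbf{d} &= R^{\top} K^{-1} (u, v, 1)^{\top}, \qquad
  \mathbf{c} = -R^{\top} t,\\
  W &= \mathbf{c} - \frac{c_z}{d_z}\,\mathbf{d},
  \qquad \text{which satisfies } W_z = 0.
\end{aligned}
```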