IEEE Conference on Robotics, Automation and Mechatronics, 2004.
DOI: 10.1109/ramech.2004.1438907
Vision-based pirouettes using radial obstacle profile

Cited by 4 publications (5 citation statements) · References 14 publications
“…It is remotely controlled by a PC via a wireless modem. A wireless colour camera on top of the robot is used to locate obstacles in the surrounding environment [13]. By performing a pirouette (spinning around on the spot) the robot builds a 360° view of its surroundings.…”
Section: Methods (citation type: mentioning; confidence: 99%)
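The statement above describes the paper's core idea: the robot spins on the spot while the camera records, building a 360° radial obstacle profile (ROP). A minimal sketch of that data structure follows; the function name, the 10° heading step, and the `distance_at_heading` callable (standing in for the vision-based range estimate) are illustrative assumptions, not parameters taken from the paper.

```python
def build_radial_obstacle_profile(distance_at_heading, step_deg=10):
    """Sketch of a radial obstacle profile (ROP).

    The robot pirouettes in place and, at each sampled heading, records
    the estimated distance to the nearest obstacle in that direction.

    distance_at_heading: callable(heading_deg) -> distance in metres
        (a stand-in for the camera-based range estimate).
    step_deg: angular sampling step; 10 degrees here is an assumption.
    """
    # One range reading per sampled heading over a full rotation.
    return {h: distance_at_heading(h) for h in range(0, 360, step_deg)}
```

The resulting heading-to-distance map is a compact 360° summary of free space that a planner can scan for the widest open gap.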
“…The procedure for obtaining an ROP and creating a map has been described in our previous work [13]. Therefore step 1 above is not covered here.…”
Citation type: mentioning; confidence: 99%
“…A distance estimate is often done in one of two ways: based on positional information or size information. Referring to the findings of the research by Taylor et al [36], if an object is on the same plane as the camera, the position of the object in pixel coordinates can be converted to the real-world coordinates. The distance between the object and the camera can be estimated by a single image from a monocular camera, of which the FOV is known.…”
Section: Distance Estimation Using Position Information (citation type: mentioning; confidence: 99%)
“…In the case of the Palmbot, a point to note is how the 3-dimensional perspective can be taken into account in the state formation -in Figure 5 the near state ranges across the bottom 4 grid squares, compared to the far state which only ranges across the top 2 grid squares. This is because a 2-dimensional image of a 3-dimensional space has more horizontal distance in that space associated with the grid spaces higher in the image (which are closer to the horizon) [19]. This is easily accounted for in the distance abstraction membership functions of near, medium and far (shown in Figure 6 corresponding to the right side of Figure 5).…”
Section: Using Computer Vision (citation type: mentioning; confidence: 99%)
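The near/medium/far distance abstraction mentioned in that quote is typically built from overlapping fuzzy membership functions over estimated distance, so each reading can partially belong to two neighbouring states. A minimal sketch with trapezoidal shoulders follows; the breakpoint distances (0.2 m, 0.6 m, 1.2 m, 2.0 m) are invented for illustration and are not the values from Figure 6 of the citing paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises on [a, b], falls on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical breakpoints in metres, chosen only for illustration.
def near(d):
    # Full membership up close, tapering off towards 0.6 m.
    return 1.0 if d <= 0.2 else triangular(d, 0.0, 0.2, 0.6)

def medium(d):
    # Peaks at 0.6 m, overlapping both neighbours.
    return triangular(d, 0.2, 0.6, 1.2)

def far(d):
    # Ramps up from 0.6 m and saturates beyond 1.2 m.
    return 1.0 if d >= 1.2 else triangular(d, 0.6, 1.2, 2.0)
```

Because adjacent functions overlap, a reading of 0.4 m is half "near" and half "medium", which gives the controller smooth transitions between distance states instead of hard thresholds.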