2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2008.4563136

Autonomous navigation and mapping using monocular low-resolution grayscale vision

Cited by 13 publications (9 citation statements)
References 53 publications (35 reference statements)
“…After describing these individual measures, we present an integrated system that is able to autonomously explore an unknown indoor environment, recovering from difficult situations like corners, blank walls, and initial heading toward a wall. This approach extends our previous work [29], in which reactive behavior was demonstrated with a subset of measures, but without explicit estimation of orientation or distance to the end. All of this behavior is accomplished at a rate of 1000 Hz on a standard computer using only 0.02% of the pixels available from a standard 30 Hz color VGA (640 × 480) video camera, discarding 99.98% of the information.…”
Section: Introduction (supporting)
confidence: 50%
“…The only input to the system consisted of the 32 × 24 downsampled grayscale images from the 30 Hz camera. In our previous work we estimated orientation using only the median of bright pixels, and distance to the end of the corridor was largely determined by entropy [29,27]. In this work we show that a linear combination (weighted average) of five (orientation) and three (distance to the end) complementary measures is more effective for achieving success in multiple environments.…”
Section: Exploration In An Unknown Environment (mentioning)
confidence: 93%
“…Recent self-guided vehicles used in the DARPA LAGR programme have led to significant advances in robotic perception systems (Hadsell et al., 2009; Sofman et al., 2006; Kim et al., 2007); however, the multiple sensors and complexity of these systems do not address the needs of low-cost autonomous robots (Katramados et al., 2009; Murali and Birchfield, 2008). A commonly available web camera presents a desirable alternative that will be used in this work, and is motivated by the human ability to interpret 2D low-resolution images (Murali and Birchfield, 2008).…”
Section: Introduction (mentioning)
confidence: 99%
“…Similarly, sensory-motor learning has been used to map visual inputs to turning commands, but the resulting algorithms have been too computationally demanding for real-time performance (Giovannangeli et al., 2006). Other researchers have developed mapless algorithms for low-level functionality like corridor following or obstacle avoidance (Nelson & Aloimonos, 1988; Santos-Victor et al., 1995; Barrows et al., 2002; LeCun et al., 2005; Michels et al., 2005; Murali & Birchfield, 2008), but these techniques are not applicable to following a specific arbitrary path.…” [Figure caption residue from the citing paper, with its symbols lost in extraction: the open circle coincides with both the camera focal point and the robot position, the arrow indicates the heading direction, and the remaining labels mark the image plane and the angle between the optical axis and the projection ray from the landmark.]
(mentioning)
confidence: 99%