Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems: Innovative Robotics for Real-World Applications
DOI: 10.1109/iros.1997.649086

Visually-guided obstacle avoidance in unstructured environments

Abstract: This paper presents an autonomous vision-based obstacle avoidance system. The system consists of three independent vision modules for obstacle detection, each of which is computationally simple and uses a different criterion for detection purposes. These criteria are based on brightness gradients, RGB (red, green, blue) color, and HSV (hue, saturation, value) color, respectively. Selection of which modules are used to command the robot proceeds exclusively from the outputs of the modules themselves. The system is i…
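The three-criterion architecture described in the abstract can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the paper's actual algorithm: the names (`module_scores`, `detect_obstacle_columns`), the fixed threshold `THRESH`, the "bottom rows are safe ground" heuristic, and the majority-vote fusion are all assumptions made for the sketch.

```python
import numpy as np

THRESH = 0.15  # hypothetical per-module decision threshold

def module_scores(feature, ref_rows=8):
    """Score each image column by its deviation from a 'safe ground'
    reference taken from the bottom rows, which are assumed obstacle-free
    (a common monocular ground-plane heuristic)."""
    ref = feature[-ref_rows:, :].mean()
    return np.abs(feature[:-ref_rows, :] - ref).mean(axis=0)

def detect_obstacle_columns(rgb):
    """Three independent per-column detectors (brightness gradient, RGB
    chromaticity, HSV saturation), fused here by a simple majority vote."""
    img = rgb.astype(np.float64) / 255.0
    gray = img.mean(axis=-1)

    # Module 1: brightness gradients (vertical intensity edges).
    grad = np.abs(np.diff(gray, axis=0))
    grad = np.vstack([grad, grad[-1:]])  # pad back to full image height

    # Module 2: RGB color ratio (red chromaticity, illumination-normalized).
    red_chroma = img[..., 0] / (img.sum(axis=-1) + 1e-12)

    # Module 3: HSV saturation (shadows lower V but barely change S).
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    saturation = (mx - mn) / (mx + 1e-12)

    votes = sum((module_scores(f) > THRESH).astype(int)
                for f in (grad, red_chroma, saturation))
    return votes >= 2  # boolean flag per column: obstacle ahead
```

Note that the paper's selection mechanism is driven by the modules' own outputs rather than a fixed vote; the majority vote above is only a stand-in for that fusion step.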

Cited by 102 publications (84 citation statements)
References 10 publications
“…39) [157], [158], [12], [81], [99]. Sometimes in these sorts of applications, the robot is supposed to simply wander around, exploring its vicinity without a clear-cut goal [91], [136]. However, in other applications, the task to be performed requires that the robot follow a specific path to a goal position.…”
Section: Unstructured Outdoor Navigation
confidence: 99%
“…Some of the earliest systems to use color to distinguish shadows from obstacles are by Thorpe et al [143] and Turk et al [151]. A more recent example is [91], where the authors address the problem of vision-guided obstacle avoidance using three redundant vision modules, one for intensity (BW), one for RGB, and one for HSV. The goal is to determine the positions of obstacles in the scene while the robot wanders and explores the environment.…”
Section: Illumination and Outdoor Navigation
confidence: 99%
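The shadow-versus-obstacle distinction mentioned above rests on a simple observation: a shadow scales all three RGB channels by roughly the same factor, so value (V) drops while hue and saturation stay nearly constant, whereas a differently colored obstacle shifts hue or saturation as well. A minimal, self-contained illustration (the surface colors below are made up for the example):

```python
def rgb_to_hsv_pixel(rgb):
    """HSV of a single RGB pixel with channels in [0, 1]; h is in [0, 1)."""
    r, g, b = rgb
    mx, mn = max(rgb), min(rgb)
    d = mx - mn
    if d == 0:
        h = 0.0
    elif mx == r:
        h = ((g - b) / d % 6) / 6
    elif mx == g:
        h = ((b - r) / d + 2) / 6
    else:
        h = ((r - g) / d + 4) / 6
    s = 0.0 if mx == 0 else d / mx
    return h, s, mx

ground = (0.45, 0.40, 0.30)               # hypothetical sandy ground
shadow = tuple(0.5 * c for c in ground)   # same surface at half illumination
rock   = (0.25, 0.25, 0.30)               # bluish-gray obstacle

h_g, s_g, v_g = rgb_to_hsv_pixel(ground)
h_s, s_s, v_s = rgb_to_hsv_pixel(shadow)
h_r, s_r, v_r = rgb_to_hsv_pixel(rock)

# A brightness-only detector would flag both the shadow and the rock,
# since both have lower V; hue and saturation separate them:
assert abs(h_g - h_s) < 0.02 and abs(s_g - s_s) < 0.02  # shadow matches ground
assert abs(h_g - h_r) > 0.1                             # rock does not
```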
“…Similarly, the ALVINN neural network controlled the autonomous CMU Navlab using just 30 × 32 images as input [33]. Robust obstacle avoidance was achieved by Lorigo et al [23] using 64 × 48 images. In contrast to this historical work, our approach is driven not by hardware limitations but rather inspired by the limits of possibility, as in Torralba et al [40,41] and in Basu and Li [3], who argue that different resolutions should be used for different robotic tasks.…”
Section: Previous Work
confidence: 99%
“…When a robot is running in an unstructured environment such as a natural environment [8] or the surface of Mars [9], terrain classification and obstacle avoidance become the primary challenges [10]. In such cases, advanced sensors such as stereo cameras, Laser RADAR (LADAR), and appropriate sensor fusion techniques are necessary to deal with the complex environment [11]–[13].…”
Section: Related Work
confidence: 99%
“…Due to the inherent difficulties in understanding natural objects and changing environments, autonomous driving is still in its infancy. However, existing results such as motion planning with 3D vision and the use of multiple classifiers [10], [14] shed light on a different class of problems, where roads do not disappear completely.…”
Section: Related Work
confidence: 99%