In this paper we investigate SURF features for visual terrain classification for outdoor mobile robots. The image is divided into a grid, and SURF descriptors are computed at the intersections of this grid. These features are then used to train a classifier that differentiates between terrain classes. Images of five different terrain types were captured with a single camera mounted on a mobile outdoor robot. We further introduce a second descriptor, a modified form of the dense Daisy descriptor. A random forest is trained on each descriptor type, and the classification results of the SURF and Daisy descriptors are compared with those of traditional texture descriptors such as LBP, LTP, and LATP. We show that SURF features outperform the other descriptors at higher resolutions. Daisy features, although not as accurate as SURF features, also outperform the three texture descriptors at high resolution.
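The grid sampling described above can be sketched as follows. This is a minimal illustration in Python/NumPy under assumed parameters (image size, grid step, and the function name `grid_keypoints` are all hypothetical); the SURF descriptor computation at each location is omitted, since SURF itself typically requires OpenCV's non-free contrib module:

```python
import numpy as np

def grid_keypoints(height, width, step):
    """Return (row, col) coordinates of a regular grid over an image.

    In a dense sampling scheme like the one described in the text, a
    descriptor (SURF in the paper) is computed at every one of these
    fixed grid intersections instead of at detected interest points.
    """
    # Keep one step of margin so descriptor windows stay inside the image.
    rows = np.arange(step, height - step + 1, step)
    cols = np.arange(step, width - step + 1, step)
    return [(int(r), int(c)) for r in rows for c in cols]

# Example: a 480x640 image sampled every 32 pixels.
points = grid_keypoints(480, 640, 32)
```

Each point in `points` would then be passed to the descriptor extractor; the concatenated or histogrammed responses form the feature vector for the classifier.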
Abstract. Terrain classification is a fundamental task in outdoor robot navigation for detecting and avoiding impassable terrain. Camera-based approaches are well studied and provide good results. A drawback of these approaches, however, is that the quality of the classification varies with the prevailing lighting conditions. 3D laser scanners, on the other hand, are largely illumination-invariant. In this work we present easy-to-compute features for 3D point clouds based on range and intensity values. We compare the classification results obtained using only the laser-based features with those of camera-based classification and study the influence of different lighting conditions.
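The abstract only states that the laser features are simple functions of range and intensity; the exact feature set is not specified. As a hedged sketch, per-segment statistics such as the following could serve as such easy-to-compute features (the function name and the particular statistics are assumptions, not the paper's definition):

```python
import numpy as np

def cell_features(ranges, intensities):
    """Illustrative per-cell statistics for a segment of a 3-D scan.

    Mean/std of range act as a rough local-geometry (roughness) proxy;
    mean/std of intensity summarize surface reflectance. This is an
    assumed feature set, not the one defined in the paper.
    """
    ranges = np.asarray(ranges, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    return np.array([
        ranges.mean(), ranges.std(),
        intensities.mean(), intensities.std(),
    ])
```

A classifier would then be trained on one such feature vector per scan cell.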
Abstract. In this paper we present a comparison of several approaches to visual terrain classification for outdoor mobile robots based on local features. We compare traditional texture-classification approaches, namely Local Binary Patterns, Local Ternary Patterns, and their newer extension Local Adaptive Ternary Patterns, and also modify and test three non-traditional approaches: SURF, DAISY, and CCH. We drove our robot under different weather and ground conditions and captured images of five different terrain types for our experiments. We did not filter out images blurred by robot motion or corrupted by other artifacts caused by rain and similar conditions. We used Random Forests for classification and cross-validation to verify our results. The results show that most of the approaches work well for terrain classification on a fast-moving mobile robot, despite image blur and other artifacts induced by highly variable weather conditions.
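Of the texture descriptors compared above, Local Binary Patterns are the simplest to state concretely. As a minimal sketch (a basic 8-neighbor LBP in plain NumPy; the function name and the bit ordering are illustrative, and real implementations add rotation-invariant and uniform-pattern variants):

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 8-neighbor Local Binary Pattern for a 2-D grayscale array.

    Each interior pixel gets an 8-bit code: one bit per neighbor, set
    when that neighbor's intensity is >= the center pixel's intensity.
    """
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbor offsets, clockwise from the top-left neighbor.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code

# A terrain patch is then described by the histogram of its LBP codes,
# which serves as the feature vector for the classifier.
patch = np.random.randint(0, 256, size=(32, 32))
hist, _ = np.histogram(lbp_8neighbors(patch), bins=256, range=(0, 256))
```

LTP and LATP extend this scheme with a (fixed or locally adaptive) threshold band around the center value, producing ternary rather than binary comparisons.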