2014
DOI: 10.3390/s140610753

Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

Abstract: Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene the irrelevant regions which do not affect the robot's movement. In the second step, regions of …
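The first step described in the abstract — discarding scene regions that cannot affect the robot's movement — can be illustrated with a minimal sketch. This is not the authors' exact algorithm (the abstract is truncated before the details); it assumes a ToF point cloud with z pointing up and uses a simple height-clearance filter, with the function name, thresholds, and sample points all hypothetical:

```python
import numpy as np

def remove_irrelevant_regions(points, floor_z=0.0, robot_height=1.2):
    """Keep only points that lie within the robot's vertical clearance.

    points: (N, 3) array of (x, y, z) coordinates in metres, z up.
    Returns the (M, 3) array of candidate-obstacle points; floor points
    and points above the robot's top are treated as irrelevant.
    """
    z = points[:, 2]
    # 5 cm margin above the floor plane avoids keeping ground returns.
    mask = (z > floor_z + 0.05) & (z < floor_z + robot_height)
    return points[mask]

# Toy per-pixel 3D samples as a ToF camera might provide them:
cloud = np.array([
    [0.5, 1.0, 0.00],   # floor point        -> removed
    [0.6, 1.2, 0.40],   # obstacle in path   -> kept
    [0.7, 1.4, 2.50],   # overhead clutter   -> removed
])
obstacles = remove_irrelevant_regions(cloud)
```

In practice the paper's method would operate on the full depth image rather than three points, but the principle is the same: per-pixel 3D coordinates let the scene be pruned before any classification work is done.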

Cited by 24 publications (13 citation statements)
References 53 publications (69 reference statements)
“…Infrared rays are easily affected by sunlight [ 44 ]. The Kinect sensor depends on emitted infrared rays to generate a depth map, so the Kinect sensor has some hardware limitations.…”
Section: Results
confidence: 99%
“…An enhanced CHT [ 9 ] is employed for estimating the trajectory of a spherical target in three dimensions to improve tracking accuracy. For robotic navigation in unstructured environments, a method based on ToF cameras [ 10 ] was provided for 3D obstacle detection and classification. Meanwhile, in the area of industrial and daily application [ 11 ], the robot mechanism presents a promising prospect.…”
Section: Related Work
confidence: 99%
“…In general, environmental understanding is the essential prerequisite for ensuring the stable and robust operations of autonomous robots [ 7 ], and many methods have been proposed up to now [ 8 , 9 ]. Optical camera-based methods have become very popular [ 10 , 11 ]. However, camera-based methods have several limitations such as the lack of geospatial and reflectivity intensity information, as well as image distortions and illumination variations.…”
Section: Introduction
confidence: 99%