2014
DOI: 10.3389/fnins.2013.00275
Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor

Abstract: Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser st…
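The abstract describes filtering DVS output so that only events synchronized with the pulsed laser survive. A minimal sketch of that idea, under assumptions not taken from the paper (events as `(x, y, t_us, polarity)` tuples in microseconds, a fixed `pulse_hz` laser rate, and a simple per-pixel interval test rather than the paper's adaptive filter):

```python
# Hedged sketch: keep DVS pixels whose ON events recur at the laser pulse
# period, discarding background activity. The event format, pulse_hz, and
# tolerance are illustrative assumptions, not the paper's actual method.
from collections import defaultdict

def extract_stripe(events, pulse_hz=500.0, tol=0.2):
    """Return pixels whose ON-event intervals match the laser pulse period."""
    period_us = 1e6 / pulse_hz
    last_on = {}                 # (x, y) -> timestamp of the previous ON event
    hits = defaultdict(int)      # (x, y) -> count of period-matching intervals
    for x, y, t_us, polarity in events:
        if polarity != 1:        # only ON events mark the laser's rising edge
            continue
        key = (x, y)
        if key in last_on:
            dt = t_us - last_on[key]
            if abs(dt - period_us) <= tol * period_us:
                hits[key] += 1
        last_on[key] = t_us
    return {p for p, n in hits.items() if n >= 1}
```

A pixel hit by the stripe produces ON events once per pulse, so its inter-event interval clusters around `period_us`; scene motion and noise events fail the interval test and are suppressed.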

Cited by 47 publications (34 citation statements) · References 19 publications
“…The event timing from the DVS is used to determine scan angle, establishing projector-camera correspondence for each pixel. The DVS was used previously for SL scanning by Brandli et al [5] in a pushbroom setup that sweeps an affixed camera-projector module across the scene. This technique is useful for large area terrain mapping, but ineffective for 3D scanning of dynamic scenes.…”
Section: Introduction
confidence: 99%
“…Such devices require low bandwidth and power, making them ideal for robot embodiment. The DVS has been used for classification tasks including recognising hand gestures [4], classifying the DVS version of the MNIST postcode data set [5] and extracting pulsed laser line for terrain reconstruction [6]. DVS data have also been used in tracking tasks, such as tracking people [7], particles [8], optic flow [9], ball and car trajectories [10,11] and LED markers with the camera mounted on a robot [12].…”
Section: Introduction
confidence: 99%
“…There are a variety of ways to handle time and space in existing event-based sensory data studies (for a detailed review see Section 2.3). Some studies group events into frames or time slices imposing fixed timescales [6][7][8][9][10][11][12][13][14][15]. Other studies process the data stream on an event-by-event basis [16][17][18][19][20][21][22][23][24][25][26][27][28][29], with the timing of an event determining its relevance to an internal model.…”
Section: Processing the Temporal and Spatial Information in Event-Bas…
confidence: 99%
“…
• Hand gesture recognition [6], different data sets with stereo cameras [32]
• MNIST post code digit classification [8,29,33]
• Classifying feature maps generated by spike input events [41]
• Recognising rotating propellers and symbols on flipping cards [24,31,42]
• Classifying if a motion performed by a human is a fall [43]
• Object (cup, ball and box) recognition in dark environment [44]
• Full body gesture recognition with stereo cameras [14]
• Moving and non-moving object (balls and books) recognition with stereo cameras embedded with the iCub robot [21]
• Recognising patterns of cars passing under a bridge over a freeway [45,46]
• Detecting the proximity of hands and touch screens and different types of light source recognition [7]
• Recognition of pulsed laser line extraction from terrain construction [11]
• Classifying dynamic objects with a moving camera [39]
…”
Section: Classification and Recognition
confidence: 99%