2012 IEEE International Symposium on Circuits and Systems
DOI: 10.1109/iscas.2012.6272137

Live demonstration: On the distance estimation of moving targets with a Stereo-Vision AER system

Abstract: Distance calculation is one of the most important goals of a digital stereoscopic vision system. It is just as important in an AER system, but there it cannot yet be computed as accurately as we would like. This demonstration shows a first approximation in this field, using a disparity algorithm between the two retinas. The system produces a distance estimate for a moving object; more specifically, a qualitative estimation. Taking into account the stereo vision system features, the previous retina positi…
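
The abstract only outlines the method, so the following is a minimal, hypothetical Python sketch of the underlying idea: match the event clusters seen by the two retinas, take their horizontal disparity, and bin the resulting pinhole-model depth into coarse qualitative classes. The Event layout, the centroid-based matching, and the focal_px/baseline_m parameters and thresholds are all illustrative assumptions, not the authors' FPGA implementation.

    # Hypothetical sketch: qualitative distance from AER stereo disparity.
    # All names, parameters, and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Event:
        x: int    # pixel column on a 128x128 retina
        y: int    # pixel row
        t: float  # timestamp (us)

    def centroid_x(events):
        """Mean x-coordinate of the events in one time window."""
        return sum(e.x for e in events) / len(events)

    def qualitative_distance(left, right, focal_px=100.0, baseline_m=0.10):
        """Classify a moving object's distance from the horizontal disparity
        between the event centroids of the two retinas (depth = f*B/d)."""
        disparity = centroid_x(left) - centroid_x(right)
        if disparity <= 0:
            return "far"  # no usable disparity: treat as distant/unmatched
        depth_m = focal_px * baseline_m / disparity
        if depth_m < 0.5:
            return "near"
        if depth_m < 2.0:
            return "medium"
        return "far"

    # Example: a 20-pixel disparity gives depth 100 * 0.10 / 20 = 0.5 m.
    left = [Event(70, 64, float(t)) for t in range(5)]
    right = [Event(50, 64, float(t)) for t in range(5)]
    print(qualitative_distance(left, right))  # -> "medium"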

Cited by 8 publications (6 citation statements) | References 13 publications

“…A VGA-sized DVS would generate about 18 times more data than the 128 × 128 sensor used for this paper if the objects filled a proportionally larger number of pixels, but even then the processing of the estimated 400 keps from the sensor would barely load a present-day microprocessor and would be within the capabilities of modestly-powered embedded processors. As demonstrated by this work and other implementations (Linares-Barranco et al, 2007; Conradt et al, 2009a; Domínguez-Morales et al, 2012; Ni et al, 2013), the use of event-driven sensors can enable faster and lower-power robots of the future.…”
Section: Results (mentioning)
confidence: 61%
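
The "about 18 times" figure follows directly from the pixel counts; a quick check, assuming as the quote does that the event rate scales with the number of stimulated pixels:

    # Sanity check of the quoted scaling; only the sensor resolutions are
    # from the quote, the rest is arithmetic.
    vga_pixels = 640 * 480            # 307,200 pixels in a VGA-sized DVS
    dvs_pixels = 128 * 128            # 16,384 pixels in the sensor used
    scale = vga_pixels / dvs_pixels   # 18.75 -> the "about 18 times" figure
    implied_keps = 400 / scale        # ~21.3 keps implied for the 128x128 sensor
    print(f"{scale:.2f}x, implied 128x128 rate ~{implied_keps:.1f} keps")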
“…Other related work that has integrated an event-based neuromorphic vision sensor in a robot includes CAVIAR, a completely spike-hardware-based visual tracking system (Serrano-Gotarredona et al, 2009); a pencil-balancing robot using a pair of embedded-processor DVS cameras (Conradt et al, 2009a), which was first prototyped using two DVS cameras interfaced by USB (Conradt et al, 2009b); a demonstration of real-time stereo distance estimation computed on an FPGA with 2 DVS cameras (Domínguez-Morales et al, 2012); an embedded FPGA-based visual feedback system using a DVS (Linares-Barranco et al, 2007); and a micro-gripper haptic feedback system (Ni et al, 2013), which uses a DVS as one of the two input sensors.…”
Section: Introduction (mentioning)
confidence: 99%
“…In 2012 two projects were completed: VULCANO (ultra-fast frame-less vision by events; application to automotion and anthropomorphic cognitive robotics) [198], which began in 2010, and SAMANTA I and II (Multi-chip address-event-representation vision system for robotics platforms I & II) [199], which began in 2003. Currently, the group is working on the BIOSENSE project (Bioinspired event-based system for sensory fusion and neurocortical processing) [200], which aims to create a robotic platform based on modular AER technology.…”
Section: Other Projects (mentioning)
confidence: 99%
“…The circuit can also be designed for low power consumption by combining an active continuous-time front-end logarithmic photoreceptor [10]. In algorithmic methods, the visual information (such as an image) can be computed through the address-event representation [11] or by constructing stereo vision with cameras [12]. However, the devices for matching are more expensive than consumer cameras.…”
Section: Introduction (mentioning)
confidence: 99%