2017
DOI: 10.1038/srep40703

A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

Abstract: Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing …

Citations: cited by 76 publications (88 citation statements)
References: 36 publications

“…We are currently observing that event-based vision is becoming increasingly interesting for research communities rooted in classical computer vision and robotics. Advantages of using event-based sensors have been demonstrated for diverse applications such as tracking (Mueggler et al, 2014; Lagorce et al, 2015; Gallego et al, 2018), stereo vision (Rogister et al, 2012; Osswald et al, 2017; Martel et al, 2018), optical flow estimation (Benosman et al, 2014; Bardow et al, 2016), gesture recognition (Lee et al, 2014; Amir et al, 2017), scene reconstruction (Carneiro et al, 2013; Kim et al, 2016; Rebecq et al, 2017), or SLAM (Weikersdorfer et al, 2014; Vidal et al, 2018). All of these applications benefit from the high speed and the high dynamic range of spike-based sensors to solve tasks, such as high-speed localization and navigation, that are very hard with conventional vision sensors.…”
Section: Applications (mentioning)
confidence: 99%
“…In addition, the time-domain input provides valuable information that frame-driven approaches lose, since there the sensor imposes an artificial time step. This enables efficient computation of features such as optical flow (Benosman et al, 2014) or stereo disparity (Osswald et al, 2017), and, in combination with learning rules sensitive to spike timing, leads to more data-efficient training (Panda et al, 2017).…”
Section: Introduction (mentioning)
confidence: 99%
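To make the efficiency argument for event-driven stereo concrete, here is a minimal sketch of disparity estimation by temporal coincidence of left/right events. The data layout, thresholds, and function name are illustrative assumptions, not the spiking network of Osswald et al. (2017).

# Toy sketch: stereo disparity from temporal coincidence of events.
# Data layout, thresholds, and names are illustrative assumptions,
# not the model of Osswald et al. (2017).
import numpy as np

def coincidence_disparity(left_events, right_events, max_disparity=32, dt=1e-3):
    """Vote for a disparity whenever a left and a right event of the same
    polarity on the same row occur within dt seconds of each other.
    Events are (t, x, y, polarity) tuples; disparity = x_left - x_right."""
    votes = np.zeros(max_disparity + 1, dtype=int)
    by_row = {}                                   # index right events by (row, polarity)
    for t, x, y, p in right_events:
        by_row.setdefault((y, p), []).append((t, x))
    for t, x, y, p in left_events:
        for tr, xr in by_row.get((y, p), []):
            d = x - xr
            if 0 <= d <= max_disparity and abs(t - tr) < dt:
                votes[d] += 1                     # temporal coincidence -> disparity vote
    return votes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic vertical edge seen at x=40 (left) and x=35 (right): true disparity 5.
    ts = np.sort(rng.uniform(0.0, 0.1, 200))
    left = [(t, 40, int(rng.integers(0, 64)), 1) for t in ts]
    right = [(t + rng.normal(0.0, 2e-4), 35, y, 1) for (t, _, y, _) in left]
    noise = [(rng.uniform(0.0, 0.1), int(rng.integers(0, 64)),
              int(rng.integers(0, 64)), 1) for _ in range(100)]
    votes = coincidence_disparity(left, right + noise)
    print("winning disparity:", int(np.argmax(votes)))

With event timestamps as the matching cue, only coincident spikes generate work, which is the property the quoted statements attribute to event-driven stereo; the spiking network in the cited paper realizes a related coincidence-detection idea with neurons rather than an explicit vote histogram.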
“…In this work, we build upon the motion compensation framework [35] and extend it to include twenty more loss functions for applications such as ego-motion, depth and …

Table 1: List of objective functions considered.
  Variance (4)                   [33, 35]   Statistical   No   max
  Mean Square (9)                [33, 36]   Statistical   No   max
  Mean Absolute Deviation (10)              Statistical   No   max
  Mean Absolute Value (11)                  Statistical   No   max
  Entropy (12)                              Statistical   No   max
  Image Area (8)                            Statistical   No   min
  Image Range (13)                          Statistical   No   max
  Local Variance (14)            [37]       Statistical   No   min …”
Section: Introduction (mentioning)
confidence: 99%
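Most entries in the excerpted table are statistical focus measures computed on an image of warped events (IWE). Below is a minimal sketch of the first of them, the variance loss, assuming a simple global horizontal-flow warp; the function and parameter names are illustrative, not the cited implementation.

# Sketch of a variance-based focus loss on an image of warped events (IWE).
# Assumes a global horizontal flow model; illustrative only.
import numpy as np

def iwe_variance(events, flow_x, height=64, width=64, t_ref=0.0):
    """Warp events (t, x, y) to time t_ref using horizontal flow flow_x
    (pixels per second), accumulate them into an image, and return the
    variance of that image."""
    img = np.zeros((height, width))
    for t, x, y in events:
        xw = int(round(x - flow_x * (t - t_ref)))    # warped x coordinate
        if 0 <= xw < width and 0 <= y < height:
            img[y, xw] += 1.0                        # event count per pixel
    return img.var()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Events produced by an edge starting at x=20 and moving at 100 px/s.
    events = [(t, 20.0 + 100.0 * t + rng.normal(0.0, 0.3), int(rng.integers(0, 64)))
              for t in rng.uniform(0.0, 0.05, 500)]
    for f in (0.0, 50.0, 100.0):
        print(f"flow {f:6.1f} px/s  ->  IWE variance {iwe_variance(events, f):.3f}")

At the correct flow the events collapse onto a sharp edge and the variance peaks, which is why the "max" entries of the table are maximized, whereas a measure such as the image area of the event support shrinks at the correct warp and is therefore minimized.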
“…Loss Function: Local Variance. The derivative of the aggregated local variance of the IWE is, by the chain rule on (14),…”
mentioning
confidence: 99%
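The chain-rule step the excerpt refers to can be sketched for the plain (global) variance of the IWE $I(\mathbf{x};\boldsymbol{\theta})$; this is a reconstruction under the assumption that the aggregated local variance of the cited equation (14) applies the same pattern patch-wise, not the cited derivation itself:

\[
\operatorname{Var}(I) = \frac{1}{|\Omega|}\int_{\Omega}\bigl(I(\mathbf{x};\boldsymbol{\theta})-\mu_I\bigr)^{2}\,d\mathbf{x},
\qquad
\mu_I = \frac{1}{|\Omega|}\int_{\Omega} I(\mathbf{x};\boldsymbol{\theta})\,d\mathbf{x},
\]
\[
\frac{\partial \operatorname{Var}(I)}{\partial \boldsymbol{\theta}}
= \frac{2}{|\Omega|}\int_{\Omega}\bigl(I(\mathbf{x};\boldsymbol{\theta})-\mu_I\bigr)\,
\frac{\partial I(\mathbf{x};\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\,d\mathbf{x}.
\]

The term involving \(\partial\mu_I/\partial\boldsymbol{\theta}\) drops out because the centered image integrates to zero over \(\Omega\), and \(\partial I/\partial\boldsymbol{\theta}\) in turn follows from how the warp parameters move each event, which is where the chain rule enters.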
“…Such event-based sensing allows us to perform some vision tasks extremely efficiently, reducing the amount of required computation, transmitted data, and power consumption. Many event-based vision pipelines and architectures have been developed over the last decade [1,2], addressing vision tasks such as stereo vision [3], 3D pose estimation [4,5], or optical flow [6]. These event-based pipelines are typically implemented on conventional von Neumann computer architectures.…”
Section: Introduction (mentioning)
confidence: 99%