Proceedings of the British Machine Vision Conference 2016
DOI: 10.5244/c.30.9
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation

Abstract: Event cameras or neuromorphic cameras mimic the human perception system in that they measure per-pixel intensity changes rather than absolute intensity levels. In contrast to traditional cameras, such cameras capture new information about the scene at MHz rates in the form of sparse events. This high temporal resolution comes at the cost of losing the familiar per-pixel intensity information. In this work we propose a variational model that accurately models the behaviour of event cameras, enabling reconstru…
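The event-generation principle described in the abstract — a pixel fires only when its (log-)intensity changes by more than a contrast threshold — can be sketched in a few lines. This is a minimal illustrative simulation, not the paper's method; the function name, the frame-based setup, and the threshold value 0.15 are assumptions for illustration.

```python
import numpy as np

def events_from_frames(frames, c=0.15):
    """Illustrative event-camera model: a pixel emits an event whenever its
    log-intensity has changed by at least the contrast threshold c since the
    last event at that pixel. Returns a list of (t, y, x, polarity) tuples."""
    ref = np.log(frames[0].astype(np.float64) + 1e-6)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= c)  # pixels crossing the threshold
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, y, x, pol))
            ref[y, x] += pol * c  # reference advances by one threshold step
    return events
```

A brightening scene then produces a sparse stream of positive-polarity events rather than full frames, which is exactly the information a reconstruction method must invert.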

Cited by 51 publications (72 citation statements)
References 14 publications
“…An animated version can be found here: https://youtu.be/LauQ6LWTkxM. [Matsuda et al, 2015], optical flow estimation [Rueckauer and Delbruck, 2016, Bardow et al, 2016], high dynamic range (HDR) image reconstruction [Cook et al, 2011, Reinbacher et al, 2016], mosaicing [Kim et al, 2014] and video compression [Brandli et al, 2014a]. In ego-motion estimation, event cameras have been used for pose tracking [Weikersdorfer and Conradt, 2012, Mueggler et al, 2014], and visual odometry and Simultaneous Localization and Mapping (SLAM) [Weikersdorfer et al, 2013, Censi and Scaramuzza, 2014, Kueng et al, 2016, Kim et al, 2016].…”
Section: Event Cameras and Applications
confidence: 99%
“…[Figure 7: Edge cases for different reconstruction methods; panels (a) DAVIS frame, (b) MR [22], (c) HF [24], (d) E2VID [50], compared on sharpness, memory, HDR and low light.] First row: initialization, all methods but E2VID fail.…”
Section: Fast Motion
confidence: 99%
“…Since their introduction, event cameras have spawned a flurry of research. They have been used in feature detection and tracking [3][4][5][6], depth estimation [7][8][9][10], stereo [11][12][13][14], optical flow [15][16][17][18], image reconstruction [19][20][21][22][23][24][25], localization [26][27][28][29], SLAM [30][31][32], visual-inertial odometry [33][34][35][36], pattern recognition [37][38][39][40], and more. In response to the growing needs of the community, several important event-based vision datasets have been released, directed at popular topics such as SLAM [28], optical flow [41,42] and recognition [37,43].…”
Section: Introduction
confidence: 99%
“…where δ(t) is a Dirac delta function and δ_{p_i}(p) is a Kronecker delta function indexed by the pixel coordinates p_i and p; that is, δ_{p_i}(p) = 1 when p = p_i and zero otherwise. In this paper we use the common assumption that the contrast threshold c is constant [11], [18], [19], although in practice it varies somewhat with intensity, event rate and other factors [14]. The integral of events is…”
Section: A Mathematical Representation and Notation
confidence: 99%
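Under the constant-threshold assumption quoted above, the "integral of events" per pixel is just the signed event count scaled by c, which recovers the accumulated log-intensity change. A minimal sketch of that summation (function name, event-tuple layout, and c value are illustrative assumptions, not from the cited paper):

```python
import numpy as np

def integrate_events(events, shape, c=0.15):
    """Sum signed events per pixel: with a constant contrast threshold c,
    each event contributes +c or -c of log-intensity change at its pixel,
    so the running sum approximates the total log-intensity change."""
    acc = np.zeros(shape, dtype=np.float64)
    for t, y, x, pol in events:  # events as (timestamp, row, col, polarity)
        acc[y, x] += pol * c
    return acc
```

This simple accumulation is what makes intensity reconstruction possible in principle; in practice the threshold's variation with intensity and event rate (noted in the excerpt) is why regularised variational models are used instead of raw integration.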