2018
DOI: 10.1007/s11263-018-1106-2

Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation

Abstract: Event cameras or neuromorphic cameras mimic the human perception system as they measure the per-pixel intensity change rather than the actual intensity level. In contrast to traditional cameras, such cameras capture new information about the scene at MHz frequency in the form of sparse events. The high temporal resolution comes at the cost of losing the familiar per-pixel intensity information. In this work we propose a variational model that accurately models the behaviour of event cameras, enabling reconstru…
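
The abstract describes recovering intensity from per-pixel change events. As a rough, hypothetical illustration of the underlying idea only (not the paper's variational model or its manifold regularisation), the sketch below accumulates events into a per-pixel log-intensity change map; the (x, y, polarity, timestamp) event format and the contrast threshold value are assumptions made for this example.

# Minimal sketch, assuming a simple (x, y, polarity, timestamp) event stream.
# Naive integration only recovers intensity *changes*: pixels that never fire
# stay at zero, which is why plain event integration yields "edge"-like images
# unless a prior/regulariser (such as the paper's manifold regularisation) is added.
import numpy as np

def integrate_events(events, height, width, contrast=0.1):
    """Accumulate polarity-weighted events into a log-intensity change map."""
    log_change = np.zeros((height, width), dtype=np.float32)
    for x, y, polarity, _t in events:          # polarity is +1 or -1
        log_change[y, x] += polarity * contrast
    return log_change

# Usage with a few synthetic events: the exponentiated map is 1.0 everywhere
# except at pixels that received events, i.e. an edge image, not true intensity.
events = [(10, 20, +1, 0.001), (10, 21, -1, 0.002), (11, 20, +1, 0.003)]
reconstruction = np.exp(integrate_events(events, height=64, width=64))
print(reconstruction[20, 10])  # about exp(0.1), roughly 1.105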

Cited by 121 publications (94 citation statements). References 30 publications.

“…This figure shows image reconstructions from each method, 0.5 seconds after the sensor was started. HF [5] and MR [4], which are based on event integration, cannot recover the intensity correctly, resulting in "edge" images (first and second row) or severe "ghosting" effects (third row, where the trace of the dartboard is clearly visible). In contrast, our network successfully reconstructs most of the scene accurately, even with a low number of events.…”
Section: Results (citation type: mentioning, confidence: 99%)
“…19. Qualitative comparison of our reconstruction method with two recent competing approaches, MR [4] and HF [5], on sequences from [38], which contain ground truth frames from a DAVIS240C sensor. Our method successfully reconstructs fine details (textures in the second and third row) compared to other methods, while avoiding ghosting effects (particularly visible in the shapes sequence on the fourth row).…”
Section: D1 Results on Synthetic Event Data (citation type: mentioning, confidence: 99%)
“…Since their introduction, event cameras have spawned a flurry of research. They have been used in feature detection and tracking [3][4][5][6], depth estimation [7][8][9][10], stereo [11][12][13][14], optical flow [15][16][17][18], image reconstruction [19][20][21][22][23][24][25], localization [26][27][28][29], SLAM [30][31][32], visual-inertial odometry [33][34][35][36], pattern recognition [37][38][39][40], and more. In response to the growing needs of the community, several important event-based vision datasets have been released, directed at popular topics such as SLAM [28], optical flow [41,42] and recognition [37,43].…”
Section: Introduction (citation type: mentioning, confidence: 99%)