2016
DOI: 10.48550/arxiv.1607.06283
Preprint
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation

Cited by 3 publications (4 citation statements)
References 0 publications
“…Bardow et al (Bardow, Davison, and Leutenegger 2016) employed a primal-dual algorithm to simultaneously estimate optical flow and light intensity. Other works (Reinbacher, Graber, and Pock 2016; Scheerlinck, Barnes, and Mahony 2018) reconstructed images by direct event integration. Recently, many works (Scheerlinck et al 2020; Rebecq et al 2019a,b; Stoffregen et al 2020; Wang et al 2019a; Ahmed et al 2021; Choi, Yoon et al 2020; Jiang et al 2020) explored the use of deep convolutional networks for event-camera reconstruction.…”
Section: Related Work
confidence: 99%
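The direct event integration mentioned in the quote above can be sketched as follows. This is an illustrative toy model, not the cited implementations; the function name, the `(x, y, polarity)` event format, and the 10% default threshold are assumptions:

```python
import numpy as np

def integrate_events(events, resolution, contrast_threshold=0.1):
    """Direct event integration: accumulate signed contrast steps per pixel.

    events: iterable of (x, y, polarity) tuples, polarity in {+1, -1}.
    Returns a log-intensity image relative to the (unknown) initial frame.
    """
    log_image = np.zeros(resolution, dtype=np.float64)
    for x, y, polarity in events:
        # Each event signals a log-intensity change of +/- one contrast threshold.
        log_image[y, x] += polarity * contrast_threshold
    return log_image
```

For example, two positive events at pixel (0, 0) yield a relative log intensity of 0.2 there; integration drift and the unknown initial frame are exactly the weaknesses the regularised and learned methods cited above try to address.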
“…However, results show that naively composing the different color channels into an RGB image produces results that suffer from strong salt-and-pepper noise and poor color quality (see Figure 4, subfigure C in [29]). More sophisticated approaches to color interpolation across channels, such as those employed in [29][30][31] (classical algorithms or filters) and [32] (a neural-network solution), produce better results, especially in terms of noise, but still suffer from poor color quality (see the comparison between [29] and our method in the Results section).…”
Section: Related Work
confidence: 99%
“…Most of the previous works model the contrast threshold as being consistent across all pixels in the event camera and ignore biased pixels altogether [24,22,11,12]. This can be seen as a special case of our more general model in (5) and can be written as…”
Section: Special Pixels
confidence: 99%
“…In a DVS, the contrast threshold defines the logarithmic intensity change that will trigger an event. It is commonly assumed that the contrast threshold is a constant (typically 10%) for all pixels [24,22,11,12]. However, due to a number of factors, such as event generation speed, intensity change, circuit noise, and manufacturing imperfections, the actual contrast threshold at each pixel may differ from the desired contrast threshold.…”
Section: Introduction
confidence: 99%
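The constant-contrast-threshold assumption described in the quote above can be illustrated with a toy trigger model for a single idealised DVS pixel. All names and the 10% threshold are illustrative, not taken from the cited works; a per-pixel model would simply replace the scalar threshold with a per-pixel map:

```python
import math

def maybe_emit_event(intensity, reference_intensity, contrast_threshold=0.1):
    """Idealised DVS pixel: emit +1/-1 when the log-intensity change since
    the last event crosses the contrast threshold, else emit nothing (None)."""
    delta = math.log(intensity) - math.log(reference_intensity)
    if delta >= contrast_threshold:
        return 1   # ON event: brightness increased by at least the threshold
    if delta <= -contrast_threshold:
        return -1  # OFF event: brightness decreased by at least the threshold
    return None    # change below threshold; no event fires
```

Under this model, raising intensity from 1.0 to 1.2 (log change ≈ 0.18) fires an ON event, while 1.0 to 1.05 (log change ≈ 0.05) fires nothing; pixel-to-pixel variation in the threshold is what the quoted passage identifies as the gap in the constant-threshold assumption.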