2014 IEEE International Symposium on Circuits and Systems (ISCAS)
DOI: 10.1109/iscas.2014.6865228
Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor

Abstract: Dynamic and active pixel vision sensors (DAVISs) are a new type of sensor that combines a frame-based intensity readout with an event-based temporal contrast readout. This paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data. The algorithm performs an online optimization of the event decoding in real time. Example scenes were recorded by the 240x180 pixel sensor at sub-Hz frame rates and successfully dec…
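As a rough, non-authoritative sketch of the per-pixel decompression idea the abstract describes (not the authors' exact algorithm), the Python below reconstructs high-rate frames by applying the signed log-intensity steps carried by events on top of the most recent absolute (APS) frame. The function name, the event tuple layout, and the nominal contrast value are assumptions for illustration only.

```python
import numpy as np

def reconstruct_frames(base_frame, events, t_start, t_end, fps_out, contrast=0.2):
    """Sketch: synthesize high-rate frames from a low-rate DAVIS frame plus events.

    base_frame : 2D array of absolute intensities captured at t_start (APS frame)
    events     : iterable of (t, x, y, polarity) with polarity in {+1, -1}
    contrast   : nominal per-event log-intensity step (assumed constant here)
    """
    log_img = np.log(base_frame.astype(np.float64) + 1e-3)  # work in log-intensity
    frame_times = np.arange(t_start, t_end, 1.0 / fps_out)
    out, k = [], 0
    for t, x, y, p in sorted(events):  # process events in timestamp order
        # emit every output frame whose timestamp has already passed
        while k < len(frame_times) and frame_times[k] <= t:
            out.append(np.exp(log_img))
            k += 1
        log_img[y, x] += p * contrast  # each event encodes a +/- contrast step
    while k < len(frame_times):
        out.append(np.exp(log_img))
        k += 1
    return out
```

In the paper itself the event decoding is re-optimized online against each newly arriving frame; the fixed `contrast` here merely stands in for that step.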

Cited by 89 publications (86 citation statements)
References 9 publications
“…However, C is in reality neither constant nor uniform across the image plane. Rather, it strongly varies depending on factors such as the sign of the brightness change [12], the event rate (because of limited pixel bandwidth) [20], and the temperature [21]. Consequently, events cannot be directly integrated to recover accurate intensity images in practice.…”
Section: Video Reconstruction
confidence: 99%
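A minimal way to make the quoted caveat concrete (the notation below is assumed, not taken from the citing paper): ideal direct integration treats every event as a fixed step of size C in log intensity, so any spatial or temporal variation of C becomes an error that accumulates with the event count.

```latex
% Idealized direct-integration model; \sigma_k \in \{+1,-1\} is the polarity of event k.
\[
  \log \hat{I}(\mathbf{p},t) \;=\; \log I(\mathbf{p},0)
  \;+\; \sum_{k:\, t_k \le t} \sigma_k \, C(\mathbf{p}, t_k, \sigma_k)
\]
% Replacing C(\mathbf{p}, t_k, \sigma_k) by a single constant C is only exact if the
% threshold is uniform and stable; its dependence on polarity, event rate, and
% temperature makes the accumulated reconstruction error grow with every event.
```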
“…An animated version can be found here: https://youtu.be/LauQ6LWTkxM. [Matsuda et al, 2015], optical flow estimation [Rueckauer and Delbruck, 2016, Bardow et al, 2016], high dynamic range (HDR) image reconstruction [Cook et al, 2011, Reinbacher et al, 2016], mosaicing [Kim et al, 2014] and video compression [Brandli et al, 2014a]. In ego-motion estimation, event cameras have been used for pose tracking [Weikersdorfer and Conradt, 2012, Mueggler et al, 2014], and visual odometry and Simultaneous Localization and Mapping (SLAM) [Weikersdorfer et al, 2013, Censi and Scaramuzza, 2014, Kueng et al, 2016, Kim et al, 2016].…”
Section: Event Cameras and Applications
confidence: 99%
“…where $\delta(t)$ is a Dirac-delta function and $\delta_{\mathbf{p}_i}(\mathbf{p})$ is a Kronecker delta function with indices associated with the pixel coordinates of $\mathbf{p}_i$ and $\mathbf{p}$. That is, $\delta_{\mathbf{p}_i}(\mathbf{p}) = 1$ when $\mathbf{p} = \mathbf{p}_i$ and zero otherwise. In this paper we use the common assumption that the contrast threshold $c$ is constant [11], [18], [19], although in practice it does vary somewhat with intensity, event rate and other factors [14]. The integral of events is…”
Section: A Mathematical Representation and Notation
confidence: 99%
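The quoted sentence breaks off before the integral itself; as a hedged reconstruction (an assumption, not a quotation from the citing paper), the event field that the stated notation describes is commonly written as follows.

```latex
% Assumed standard form of the event field under a constant contrast threshold c;
% event i has pixel \mathbf{p}_i, timestamp t_i, and polarity \sigma_i \in \{+1,-1\}.
\[
  e(\mathbf{p}, t) \;=\; \sum_{i} \sigma_i \, c \;\delta_{\mathbf{p}_i}(\mathbf{p}) \,\delta(t - t_i)
\]
```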
“…It is possible to compute the direct integral (6) using a similar approach to the direct integration schemes of [11], [14]. The drawback of this approach is integration of sensor noise, which results in drift and undermines low temporal-frequency components of the estimate $L_K(\mathbf{p},t)$ over time.…”
Section: Continuous-time Filter for Convolved Events
confidence: 99%
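To illustrate the drift issue the passage raises, here is a small Python sketch of per-pixel direct integration with an optional leak term; it is not the cited continuous-time filter, and the function name, event layout, and constant threshold are assumptions.

```python
import numpy as np

def integrate_events(shape, events, contrast=0.2, leak=0.0):
    """Sketch of per-pixel direct integration of events (not the cited filter).

    shape    : (H, W) of the sensor
    events   : iterable of (t, x, y, polarity), polarity in {+1, -1}, sorted by t
    contrast : assumed constant threshold c
    leak     : optional exponential decay rate; leak=0 is pure integration,
               which accumulates sensor noise and drifts over time.
    """
    log_img = np.zeros(shape, dtype=np.float64)
    t_prev = None
    for t, x, y, p in events:
        if leak > 0.0 and t_prev is not None:
            log_img *= np.exp(-leak * (t - t_prev))  # damp low-frequency drift
        log_img[y, x] += p * contrast
        t_prev = t
    return log_img
```

With `leak=0` the running sum accumulates sensor noise indefinitely, which is exactly the low-frequency drift described above; the cited work instead addresses this with a continuous-time filter.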