2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00532

Event-Driven Video Frame Synthesis

Abstract: Temporal Video Frame Synthesis (TVFS) aims at synthesizing novel frames at timestamps different from existing frames, which has wide applications in video coding, editing, and analysis. In this paper, we propose a high-framerate TVFS framework which takes hybrid input data from a low-speed frame-based sensor and a high-speed event-based sensor. Compared to frame-based sensors, event-based sensors report brightness changes at very high speed, which may well provide useful spatio-temporal information for high framer…

Cited by 31 publications (10 citation statements). References 43 publications.
“…Pan et al [10], [28] devised the event double integral (EDI) relation between events and a blurry image, along with an optimisation approach to estimate contrast thresholds to reconstruct high-speed de-blurred video from events and frames. Reconstructed video can also be obtained by warping still images according to motion computed via events [47], [48], or by letting a neural network learn how to combine frames and events [15], [32], [49], [50], [51], [52]. Recognising the limited spatial resolution and noise of the DAVIS, some researchers built custom systems with separate event-frame sensors.…”
Section: Event-frame Reconstructionmentioning
confidence: 99%
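The EDI relation mentioned above can be sketched compactly: a blurry frame is the temporal average of latent sharp frames over the exposure, and each latent frame relates to a reference frame through the exponential of the integrated event stream. The following is a minimal illustrative sketch of that relation, not the authors' implementation; the function name `edi_deblur`, the per-bin event representation, and a known contrast threshold `c` (which Pan et al estimate via optimisation) are all assumptions made here for clarity.

```python
import numpy as np

def edi_deblur(blurry, events, c):
    """Sketch of the event double integral (EDI) relation.

    blurry : (H, W) blurry image, the average intensity over the exposure
    events : (n_bins, H, W) signed event counts per time bin of the exposure
    c      : contrast threshold (assumed known here)

    Returns an estimate of the latent sharp frame at exposure start.
    """
    # E(t): cumulative signed event sum from exposure start to each bin
    E = np.cumsum(events, axis=0)
    # Double integral: average of exp(c * E(t)) over the exposure window,
    # so that  blurry = latent * J  under the EDI model
    J = np.exp(c * E).mean(axis=0)
    # Invert the relation to recover the latent frame (guard the division)
    return blurry / np.maximum(J, 1e-8)
```

With no events during the exposure, `J` is 1 everywhere and the blurry frame is returned unchanged, which matches the model: no brightness change means no motion blur to remove.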
“…While we believe we are the first to try image restoration in the context of removing blocking artifacts, there have been many publications that address aspects of the problem we face. Wang et al [1], for example, used non-distorted video feeds along with events to interpolate video frames. Barua et al [20] and Rebecq et al [21] were early adopters of using learning methods to reconstruct intensity images from only events.…”
Section: Related Workmentioning
confidence: 99%
“…These benefits open up new paths in solving various vision problems. Event cameras have brought new solutions to many classical as well as novel problems in computer vision and robotics, including high frame-rate video reconstruction [1], [2], [3], with HDR [4], [5] and high resolution [6], [7], [8], and 3D reconstruction of human motion [9] and scenes [10], [11], as well as odometry [12], [13] and tracking [14], [15].…”
Section: Introductionmentioning
confidence: 99%
“…Analysis of video components along the time dimension is the typical approach to segregating temporally consistent and inconsistent objects [20], [21]. Numerous approaches to enhancing the quality of video sequences have been proposed, including editing [22], synthesizing [23], [24], resampling [25], [26], and removing certain events and popping artifacts. Sunkavalli et al [27] introduced an editing approach that separates video components into their reflectance, illumination, and geometry factors.…”
Section: Related Workmentioning
confidence: 99%