2022
DOI: 10.48550/arxiv.2201.10943
Preprint
Event-based Video Reconstruction via Potential-assisted Spiking Neural Network

Abstract: The neuromorphic vision sensor is a new bio-inspired imaging paradigm that reports asynchronous, continuous per-pixel brightness changes called 'events' with high temporal resolution and high dynamic range. So far, event-based image reconstruction methods have relied on artificial neural networks (ANNs) or hand-crafted spatiotemporal smoothing techniques. In this paper, we are the first to implement image reconstruction with a fully spiking neural network (SNN) architecture. As bio-inspired neural networks, SNNs o…
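To ground the "potential-assisted" idea in the title, the sketch below shows a generic leaky integrate-and-fire (LIF) update, in which each spiking neuron accumulates input into a membrane potential and emits a binary spike when a threshold is crossed. This is a minimal PyTorch illustration under assumed defaults (time constant, threshold, hard reset), not the specific architecture proposed in the paper.

    import torch

    def lif_step(v, x, tau=2.0, v_th=1.0):
        """One leaky integrate-and-fire (LIF) update.

        v    : membrane potential carried over from the previous time step
        x    : input at this step (e.g., a feature computed from an event frame)
        tau  : membrane time constant controlling the leak
        v_th : firing threshold
        """
        v = v + (x - v) / tau          # leaky integration of the input
        spike = (v >= v_th).float()    # binary spike where the threshold is crossed
        v = v * (1.0 - spike)          # hard reset of the potential after a spike
        return spike, v

    # Toy usage: drive four neurons with a short sequence of random inputs.
    v = torch.zeros(4)
    for t in range(5):
        spike, v = lif_step(v, torch.rand(4))

Because the membrane potential persists across time steps, it acts as an internal state that integrates the asynchronous event stream, which is why potential-based SNNs are a natural fit for event-based reconstruction.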

Cited by 1 publication (1 citation statement). References 39 publications (69 reference statements).
“…With improved training techniques such as specialized normalization, SG, and loss function design [57,62,34,11], directly trained SNNs are achieving competitive accuracy on hard benchmark classification tasks like ImageNet, with only a few simulation steps required for convergence. These advances have encouraged their application to other event-based vision tasks beyond classification, such as optical flow estimation [18], video reconstruction [67], and object detection [8]. Nevertheless, these works are still limited by relatively simple handcrafted architectures and are suboptimal in terms of accuracy compared to state-of-the-art ANNs.…”
Section: Deep SNNs for Vision Tasks
confidence: 99%
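The "SG" in the citation statement refers to surrogate gradients, which make the non-differentiable spike function trainable with standard backpropagation: the forward pass uses a hard threshold, while the backward pass substitutes a smooth derivative. Below is a minimal sketch of one common choice (a fast-sigmoid surrogate in PyTorch); the exact surrogate shape and scale factor vary across the cited works, so the value of alpha here is an assumption.

    import torch

    class SurrogateSpike(torch.autograd.Function):
        """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

        @staticmethod
        def forward(ctx, v_minus_th):
            ctx.save_for_backward(v_minus_th)
            return (v_minus_th >= 0).float()      # non-differentiable spike

        @staticmethod
        def backward(ctx, grad_output):
            (v_minus_th,) = ctx.saved_tensors
            alpha = 2.0                           # assumed surrogate sharpness
            # Derivative of a fast sigmoid, used in place of the Dirac delta.
            surrogate = alpha / (2.0 * (1.0 + alpha * v_minus_th.abs()) ** 2)
            return grad_output * surrogate

    spike_fn = SurrogateSpike.apply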