2020
DOI: 10.1109/tip.2020.3023597

Graph-Based Spatio-Temporal Feature Learning for Neuromorphic Vision Sensing

Abstract: Neuromorphic vision sensing (NVS) devices represent visual information as sequences of asynchronous discrete events (a.k.a. "spikes") in response to changes in scene reflectance. Unlike conventional active pixel sensing (APS), NVS allows for significantly higher event sampling rates at substantially increased energy efficiency and robustness to illumination changes. However, feature representation for NVS is far behind its APS-based counterparts, resulting in lower performance in high-level computer vision ta…
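The asynchronous event streams the abstract describes are commonly stored in an address-event representation: one tuple per event with pixel coordinates, a microsecond timestamp, and a polarity bit. A minimal sketch of that representation, assuming the common (x, y, t, p) convention (the names below are illustrative, not from the paper):

```python
# Illustrative sketch of an NVS event stream in the common
# (x, y, t, p) address-event representation: pixel coordinates,
# microsecond timestamp, and polarity (+1 brightness increase,
# -1 decrease). Names here are assumptions, not from the paper.
from dataclasses import dataclass
from typing import List


@dataclass
class Event:
    x: int  # pixel column
    y: int  # pixel row
    t: int  # timestamp in microseconds
    p: int  # polarity: +1 (ON) or -1 (OFF)


def events_in_window(events: List[Event], t0: int, t1: int) -> List[Event]:
    """Select the asynchronous events whose timestamps fall in [t0, t1)."""
    return [e for e in events if t0 <= e.t < t1]


stream = [Event(10, 20, 5, 1), Event(11, 20, 900, -1), Event(10, 21, 1500, 1)]
window = events_in_window(stream, 0, 1000)  # events in the first millisecond
```

Unlike an APS frame, which samples every pixel at a fixed rate, only pixels whose log-intensity changed emit events, which is where the sparsity and energy efficiency come from.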


Cited by 94 publications (95 citation statements)
References 53 publications
“…Finally, the feature vectors constructed from BoVV are used to train a linear SVM. Graph-CNN-based methods have also been reported for NVS-domain object recognition, with case studies on HAR in [42].…”
Section: Related Work
confidence: 99%
“…Huang et al. [299] utilized timestamp-image encoding to encode the event-data sequence into frame-based representations for HAR. Bi et al. [300] proposed a compact graph representation for end-to-end learning with a Residual-Graph CNN (RG-CNN). Event cameras have also been used for gesture recognition in [301], achieving promising results.…”
Section: Event Stream Modality
confidence: 99%
“…While traditional cameras provide very rich visual information at the cost of slow and often redundant updates, event-based cameras are asynchronous, spatially sparse, and capable of microsecond temporal resolution. Event-based systems range from designs that exploit and maintain event-camera sparsity during computation [4,85,107] to algorithms that combine events with standard cameras [7,35,46,78,99], exploiting the complementarity of the two. With the goal of achieving minimum-delay computing, research has also focused on asynchronous designs, either by modifying regular CNNs [5,69] or by utilizing specific hardware solutions [2,21,29], often leveraging bio-inspired computing frameworks [68].…”
Section: Related Work
confidence: 99%