2019
DOI: 10.48550/arxiv.1910.01851
Preprint

EBBIOT: A Low-complexity Tracking Algorithm for Surveillance in IoVT Using Stationary Neuromorphic Vision Sensors

Abstract: In this paper, we present EBBIOT, a novel paradigm for object tracking using stationary neuromorphic vision sensors in low-power sensor nodes for the Internet of Video Things (IoVT). Different from fully event-based tracking or fully frame-based approaches, we propose a mixed approach in which we create event-based binary images (EBBI) that can use memory-efficient noise filtering algorithms. We exploit the motion-triggering aspect of neuromorphic sensors to generate region proposals based on event density counts …
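To make the EBBI idea in the abstract concrete, the sketch below accumulates sensor events over a fixed time window into a binary frame and derives a coarse region proposal from per-row and per-column event density counts. This is a minimal illustration under assumed details (event tuple format, window length, density threshold, single proposal); it is not the authors' implementation.

```python
import numpy as np

def events_to_ebbi(events, width, height, t_start, window_us):
    """Accumulate (x, y, t, p) events in [t_start, t_start + window_us) into a binary image."""
    ebbi = np.zeros((height, width), dtype=np.uint8)
    for x, y, t, _p in events:
        if t_start <= t < t_start + window_us:
            ebbi[y, x] = 1  # binary: pixel saw at least one event in this window
    return ebbi

def region_proposals(ebbi, thresh=3):
    """Coarse proposal: bounding box of rows/columns whose event counts exceed a threshold."""
    col_counts = ebbi.sum(axis=0)   # event density per column
    row_counts = ebbi.sum(axis=1)   # event density per row
    cols = np.flatnonzero(col_counts >= thresh)
    rows = np.flatnonzero(row_counts >= thresh)
    if cols.size == 0 or rows.size == 0:
        return []                   # no sufficiently dense region in this window
    return [(cols[0], rows[0], cols[-1], rows[-1])]  # (x_min, y_min, x_max, y_max)
```

Because the frame is binary and the proposal logic only needs row/column sums, both steps fit easily in the memory and compute budget of a low-power sensor node, which is the point the abstract emphasizes.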

Cited by 1 publication (1 citation statement)
References 14 publications
“…There are two types of paradigms that can be used to find salient information from events: those based on individual events and those based on groups of events. A single event does not provide much information for classification problems; hence, most research works explore a hybrid approach in which groups of events are accumulated into a frame using either a fixed number of events [16], [17] or a fixed time interval [18]. These frames are passed through a deep learning framework during training.…”
Section: Introduction
confidence: 99%
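The two accumulation strategies mentioned in the citation statement (a fixed number of events versus a fixed time interval) can be sketched as follows. Function names, the event tuple format, and the default parameters are illustrative assumptions, not code from the cited works.

```python
import numpy as np

def frames_by_count(events, width, height, n_per_frame=5000):
    """Group a chronologically sorted (x, y, t, p) event stream into frames of a fixed event count."""
    frames = []
    for start in range(0, len(events) - n_per_frame + 1, n_per_frame):
        frame = np.zeros((height, width), dtype=np.uint16)
        for x, y, _t, _p in events[start:start + n_per_frame]:
            frame[y, x] += 1          # accumulate per-pixel event counts
        frames.append(frame)
    return frames

def frames_by_interval(events, width, height, dt_us=33_000):
    """Group the same stream into frames covering fixed time intervals (~30 fps here)."""
    frames, frame, t_next = [], np.zeros((height, width), dtype=np.uint16), None
    for x, y, t, _p in events:
        if t_next is None:
            t_next = t + dt_us
        while t >= t_next:            # close the current interval, start a new one
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.uint16)
            t_next += dt_us
        frame[y, x] += 1
    frames.append(frame)
    return frames
```

Count-based accumulation yields frames with a roughly constant information content but variable duration, while interval-based accumulation yields a constant frame rate with variable event counts; the trade-off between the two is what the citing work is pointing at.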