2020
DOI: 10.1007/978-3-030-58526-6_22
Spike-FlowNet: Event-Based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks

Cited by 118 publications (128 citation statements)
References 28 publications
“…The methods above each offer solutions with both advantages and disadvantages, and recent research optimizes by combining the complementary strengths of each category. Novel architectures tailored to high-rate, variable-length, and nonuniform event streams have been proposed to balance the trade-off between accuracy and efficiency while preserving spatio-temporal sparsity [4,43,44]. Generally speaking, with high-capacity neural networks, data-driven approaches learn features automatically by optimizing the corresponding objective function, dispensing with handcrafted feature descriptors.…”
Section: Data-driven Approaches (mentioning)
confidence: 99%
“…However, SNNs for processing temporal or spatiotemporal data still rely primarily on recurrent connections (DePasquale et al., 2016; Bellec et al., 2018) and supervised training (Stromatias et al., 2017; Wu et al., 2018), leading to high network complexity for spatiotemporal data and a high demand for labeled data. Recently, SNNs have also been explored for optical flow applications, such as Spike-FlowNet (Lee et al., 2020), which is based on self-supervision; with STDP-based learning it is possible to implement an SNN for optical flow (Paredes-Vallés et al., 2020), but learning is then limited to short-term temporal patterns.…”
Section: Introduction (mentioning)
confidence: 99%
“…Our key innovation is to use feedforward connections of spiking neurons with different dynamics to represent memory at different time-scales and learn temporal patterns. This eliminates the need for the recurrent connections present in state-of-the-art SNNs used for temporal learning (DePasquale et al., 2016; Bellec et al., 2018; Wu et al., 2018; Lee et al., 2020). Moreover, we adapt spiking convolution modules (Kheradpisheh et al., 2018) to the network architecture.…”
Section: Introduction (mentioning)
confidence: 99%
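The multi-timescale idea in the statement above can be illustrated with leaky integrate-and-fire (LIF) neurons whose membrane decay differs per layer: slow-decaying neurons integrate inputs over longer windows than fast-decaying ones, so a purely feedforward stack can span several time-scales without recurrence. This is a minimal sketch under assumed decay and threshold values, not the cited authors' implementation:

```python
import numpy as np

def lif_layer(spike_train, decay, threshold=1.0):
    """Simulate one feedforward layer of leaky integrate-and-fire neurons.

    spike_train: (T, N) binary input spikes over T time steps.
    decay: membrane leak factor in (0, 1); larger decay = longer memory.
    Returns the (T, N) output spike train.
    """
    T, N = spike_train.shape
    v = np.zeros(N)                        # membrane potentials
    out = np.zeros((T, N))
    for t in range(T):
        v = decay * v + spike_train[t]     # leaky integration of inputs
        fired = v >= threshold
        out[t] = fired.astype(float)
        v[fired] = 0.0                     # hard reset after a spike
    return out

# Two input spikes four steps apart: only the slowly-decaying neuron
# retains enough of the first spike for the pair to cross threshold.
inputs = np.zeros((10, 1))
inputs[[0, 4], 0] = 1.0
fast = lif_layer(inputs, decay=0.2, threshold=1.5)  # short memory
slow = lif_layer(inputs, decay=0.9, threshold=1.5)  # long memory
```

With these (assumed) parameters the fast neuron never fires, while the slow neuron fires on the second spike — the decay constant alone determines which temporal pattern the neuron responds to.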
“…Owing to their distinct advantages, recent years have seen a growing trend of integrating ANNs and SNNs into hybrid neural networks (HNNs) as a step toward artificial general intelligence (Marblestone et al., 2016; Zhang et al., 2016; Ullman, 2019). For example, in some event-driven tasks (Srinivasan and Roy, 2019; Lee et al., 2020), researchers use SNN modules to abstract sparse temporal information and then combine ANN modules to improve classification performance. Similarly, in some static image processing tasks (Kheradpisheh et al., 2018; Chancán et al., 2020), researchers use ANN modules to extract edge contrasts in images and then process them with SNN modules for low power consumption.…”
Section: Introduction (mentioning)
confidence: 99%
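The SNN-front-end / ANN-back-end division of labor described above can be sketched as a toy pipeline: a spiking encoder integrates a sparse event stream into spike counts, and a conventional dense softmax head classifies the result. All names, shapes, and parameter values here are illustrative assumptions, not any cited paper's architecture:

```python
import numpy as np

def snn_encoder(events, decay=0.8, threshold=1.0):
    """SNN-style front end: leaky integration of a sparse (T, N) event
    stream, emitting per-channel spike counts as a feature vector."""
    T, N = events.shape
    v = np.zeros(N)
    counts = np.zeros(N)
    for t in range(T):
        v = decay * v + events[t]          # integrate incoming events
        fired = v >= threshold
        counts += fired                    # accumulate output spikes
        v[fired] = 0.0                     # reset after spiking
    return counts

def ann_head(features, weights, bias):
    """ANN back end: a single dense layer with a softmax over classes."""
    logits = features @ weights + bias
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

# Toy usage: 20 time steps of sparse 8-channel events, 3 output classes.
rng = np.random.default_rng(0)
events = (rng.random((20, 8)) < 0.15).astype(float)
W = rng.standard_normal((8, 3)) * 0.1
b = np.zeros(3)
probs = ann_head(snn_encoder(events), W, b)
```

The encoder only does work when events arrive, which is the sparsity argument for the SNN stage; the dense head then gives the familiar trainable classification layer.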
“…Hybrid neural networks offer a promising path toward artificial general intelligence. So far, however, these models have mainly been studied and implemented on general-purpose platforms (i.e., CPU or GPU) (Kheradpisheh et al., 2018; Srinivasan and Roy, 2019; Chancán et al., 2020; Lee et al., 2020). On the other hand, HNNs retain the basic properties of neural networks, making them promising for high-efficiency implementation on domain-specific hardware platforms (Sze et al., 2017).…”
Section: Introduction (mentioning)
confidence: 99%