2021
DOI: 10.48550/arxiv.2106.00050
Preprint
Continual 3D Convolutional Neural Networks for Real-time Processing of Videos


Cited by 3 publications (10 citation statements) · References 0 publications
“…Recently, a modification to the spatio-temporal 3D convolution was proposed [10], which enables existing 3D CNNs to operate efficiently during continual inference by transferring their weights to a Continual 3D CNN. Importantly, Co3D CNNs produce output identical to that of regular 3D CNNs during regular step-wise inference, and the learned weights are directly transferable between regular 3D CNNs and Co3D CNNs.…”
Section: Definition (Continual Inference Network)
confidence: 99%
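The property quoted above — identical step-wise outputs with directly transferable weights — can be illustrated with a minimal NumPy sketch. A 1-D temporal convolution stands in for the full 3-D case, and the names `batch_temporal_conv` / `ContinualConv` are illustrative, not the API of the cited implementation:

```python
import numpy as np

def batch_temporal_conv(x, w):
    """Regular temporal convolution (valid padding) over a full clip."""
    k = len(w)
    return np.array([x[t:t + k] @ w for t in range(len(x) - k + 1)])

class ContinualConv:
    """Continual (step-wise) variant: caches the last k input steps and
    emits one output per step once the receptive field is filled."""
    def __init__(self, w):
        self.w = np.asarray(w)
        self.buffer = []

    def step(self, x_t):
        self.buffer.append(x_t)
        if len(self.buffer) < len(self.w):
            return None                      # receptive field not filled yet
        if len(self.buffer) > len(self.w):
            self.buffer.pop(0)               # slide the window by one step
        return float(np.dot(self.buffer, self.w))

w = np.array([0.5, -1.0, 2.0])               # weights shared by both variants
x = np.arange(8, dtype=float)                # a toy "video" of 8 frames

batch_out = batch_temporal_conv(x, w)
conv = ContinualConv(w)                      # same ("transferred") weights
stream_out = [y for y in (conv.step(x_t) for x_t in x) if y is not None]
assert np.allclose(batch_out, stream_out)    # identical step-wise outputs
```

The continual variant avoids recomputing overlapping temporal windows: each frame is processed once and cached, rather than re-convolving the whole clip at every step.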
“…Unlike these efforts, our approach produces exactly the same computational outputs for temporal sequences as the original Multi-Head Attention module and retains full weight compatibility. With this work, we also extend the family of Continual Inference Networks [10] to include Transformers with our proposed Continual Retroactive Attention and Continual Single-output Attention. Notably, our attention formulations reduce the per-step cost of the Scaled Dot-Product Attention from time complexity O(n²d) to O(nd) and memory complexity O(n²) to O(nd) while producing results identical to those of the original formulation.…”
Section: Introduction
confidence: 99%
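The complexity reduction claimed above can be sketched for the single-output case: caching the last n (key, value) pairs and attending only with the newest query costs O(nd) time and memory per step, yet matches the last row of full Scaled Dot-Product Attention over the same window. A minimal NumPy sketch, assuming a fixed window of n steps (class and function names are illustrative, not the paper's API):

```python
import numpy as np

def full_attention(Q, K, V):
    """Regular Scaled Dot-Product Attention over the whole sequence:
    O(n^2 d) time and an O(n^2) attention matrix."""
    d = Q.shape[-1]
    A = np.exp(Q @ K.T / np.sqrt(d))
    A /= A.sum(axis=-1, keepdims=True)       # row-wise softmax
    return A @ V

class ContinualSingleOutputAttention:
    """Continual variant: cache the last n (key, value) pairs and let only
    the newest query attend over them -- O(n d) time and memory per step."""
    def __init__(self, n):
        self.n, self.K, self.V = n, [], []

    def step(self, q, k, v):
        self.K.append(k); self.V.append(v)
        if len(self.K) > self.n:             # evict the oldest step
            self.K.pop(0); self.V.pop(0)
        K, V = np.stack(self.K), np.stack(self.V)
        a = np.exp(K @ q / np.sqrt(len(q)))  # scores for the newest query only
        a /= a.sum()                         # softmax over cached steps
        return a @ V

rng = np.random.default_rng(0)
n, d = 4, 3
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
att = ContinualSingleOutputAttention(n)
for q, k, v in zip(Q, K, V):                 # one attention output per step
    out = att.step(q, k, v)
assert np.allclose(out, full_attention(Q, K, V)[-1])
```

The retroactive variant cited above additionally updates the outputs of earlier queries as new steps arrive; this sketch covers only the single-output case.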
“…First introduced in [34] and subsequently formalized in [35], Continual Inference Networks are Deep Neural Networks that can operate efficiently on both fixed-size (spatio-)temporal batches of data, where the whole temporal sequence is known up front, as well as on continual data, where new input steps are collected continually and inference needs to be performed efficiently in an online manner for each received frame.…”
Section: Continual Inference Network
confidence: 99%
“…Recently, Continual 3D CNNs were made possible through the proposal of Continual 3D Convolutions [34]. Likewise, shallow Continual Transformers based on Continual Dot-product Attentions were introduced in [35].…”
Section: Definition (Continual Inference Network)
confidence: 99%