2020
DOI: 10.1007/s11760-020-01671-x
Temporal capsule networks for video motion estimation and error concealment

Cited by 4 publications (1 citation statement)
References 9 publications
“…The model is non-blind and requires knowledge of the location of the packet loss and the macroblocks above and below. The use of temporal capsule networks to encode video-related attributes is examined in [83]. It operates on three co-located “patches”, which are pixel regions extracted from a video sequence.…”
Section: Learning-based Transmission (mentioning)
confidence: 99%
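The citing paper describes the model as operating on three co-located “patches”, that is, the same pixel region taken from successive frames of a video sequence. A minimal illustrative sketch of that patch extraction step is shown below; the function name, patch size, and use of NumPy are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def extract_colocated_patches(frames, top_left, patch_size):
    """Extract the same pixel region (a "co-located patch") from each frame.

    frames     : sequence of frames, each shaped (H, W) or (H, W, C)
    top_left   : (row, col) of the patch's upper-left corner
    patch_size : (height, width) of the patch
    """
    r, c = top_left
    h, w = patch_size
    # Stack the patches along a leading temporal axis.
    return np.stack([f[r:r + h, c:c + w] for f in frames])

# Three consecutive 64x64 grayscale frames (random stand-ins for real video data).
frames = [np.random.rand(64, 64).astype(np.float32) for _ in range(3)]

# Three co-located 16x16 patches, stacked temporally -> shape (3, 16, 16).
patches = extract_colocated_patches(frames, top_left=(24, 24), patch_size=(16, 16))
print(patches.shape)
```

In the cited approach such a temporal stack of patches would serve as the input to the temporal capsule network; the network architecture itself is not reproduced here.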