ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp43922.2022.9747610
VCD: View-Constraint Disentanglement for Action Recognition

Cited by 4 publications (2 citation statements)
References 19 publications
“…The results in Fig. 5 show that the proposed biological features can outperform those deep features extracted from the CNN models in temporal quality perception, which motivate us to address the related vision problems, such as image restoration [21,22], action recognition [62,63], and video compression [52,53] considering the characteristics of the HVS.…”
Section: Ablation Study
confidence: 92%
“…These methods show that the structural prior effectively helps to improve the quality of the completed images. In spite of this, they still struggle in cases when holes are … To seek more prior information to help solve this type of ill-posed problems, reference images with similar textures and structures are introduced in some vision tasks, such as image super-resolution [15,16], image compression [17] and action recognition [23]. Their motivation is to utilize rich textures from the references to compensate for the lost details in the input images and thereby produce more detailed and realistic content.…”
Section: Introduction
confidence: 99%