2015 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2015.17
Dynamic Texture Recognition via Orthogonal Tensor Dictionary Learning

Cited by 56 publications (54 citation statements)
References 40 publications
“…- 9-class and 8-class: the original 50 classes of sequences are divided into 9 semantic categories [29,6] consisting of "boiling water" (8), "fire" (8), "flowers" (12), "fountains" (20), "plants" (108), "sea" (12), "smoke" (4), "water" (12), and "waterfall" (16), where the numbers in parentheses give the number of sequences in each category. The "plants" category is removed from the 9-class scheme to form the more challenging 8-class scheme [29,6].…”
Section: Datasets and Experimental Protocols (mentioning)
confidence: 99%
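
The excerpt above summarizes the 9-class/8-class evaluation protocol on UCLA. As a rough illustration (not taken from the cited papers), the regrouping can be expressed as a simple relabelling step; the `semantic_map` assignment below is a hypothetical placeholder, since the excerpt only gives the category names and per-category sequence counts, not the mapping from the original 50 classes.

```python
# Hypothetical sketch of the UCLA 9-class / 8-class regrouping described above.
# The mapping from original class names to semantic categories is a placeholder.
semantic_map = {
    # "original_class_name": "semantic_category", e.g.
    # "boiling-a": "boiling water",
    # "fire-b": "fire",
}

def to_semantic_labels(sequences, labels, eight_class=False):
    """Relabel UCLA sequences into the 9 semantic categories; optionally drop
    the dominant "plants" category to obtain the 8-class scheme."""
    out_seqs, out_labels = [], []
    for seq, cls in zip(sequences, labels):
        category = semantic_map[cls]
        if eight_class and category == "plants":
            continue
        out_seqs.append(seq)
        out_labels.append(category)
    return out_seqs, out_labels
```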
“…Fifth, owing to their strong results, learning-based methods have recently attracted researchers, with promising techniques building on recent advances in deep learning: Transferred ConvNet Features (TCoF) [9], PCA convolutional network (PCANet-TOP) [10], and Dynamic Texture Convolutional Neural Network (DT-CNN) [9]. Lately, dictionary-learning-based methods [11,12], in which local DT features are computed by kernel sparse coding, have also become more popular. Sixth, local-feature-based methods have also been considered, with various LBP-based variants valued for their simplicity and efficiency, since Zhao et al. [13] proposed two LBP-based variants for DT description: Volume LBP (VLBP) and LBP on three orthogonal planes (LBP-TOP).…”
Section: Introduction (mentioning)
confidence: 99%
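
The LBP-TOP descriptor mentioned in the excerpt above computes local binary patterns on the three orthogonal planes (XY, XT, YT) of a video volume and concatenates the resulting histograms. The numpy sketch below illustrates that basic idea only (radius 1, 8 neighbours, no uniform-pattern mapping); it is not the implementation of Zhao et al. [13], and the function names are ours.

```python
import numpy as np

def lbp2d(img):
    """8-neighbour LBP codes (radius 1) for a 2-D array, valid region only."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]          # clockwise neighbours
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_top(video):
    """Concatenate normalised LBP histograms from the XY, XT and YT planes of
    a (T, H, W) grey-level video into a 3 x 256 dimensional descriptor."""
    hists = []
    for axis in (0, 1, 2):                    # slice along T, then H, then W
        codes = [lbp2d(np.take(video, i, axis=axis))
                 for i in range(video.shape[axis])]
        h = np.bincount(np.concatenate([c.ravel() for c in codes]),
                        minlength=256).astype(float)
        hists.append(h / h.sum())
    return np.concatenate(hists)
```

Slicing along the temporal axis yields the XY planes, while slicing along the two spatial axes yields the XT and YT planes, so each plane orientation contributes one 256-bin histogram to the descriptor.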
“…In the best configurations formed for comparison, the rate is 98.29% (see Table III). The dictionary learning approach (Orthogonal Tensor DL) [12] achieved a slightly higher rate than ours (by 0.43%), but it performed less effectively on the DynTex variants (i.e., Alpha, Beta, Gamma), on which our method mostly performs better …”
Section: Recognition on DynTex Dataset (mentioning)
confidence: 62%
“…Arashloo et al. [10] built a multilayer convolutional architecture (PCANet-TOP) for spatio-temporal texture description and classification, in which a PCA network (PCANet) is applied on each of the three orthogonal planes of a DT sequence to learn filters. Other promising methods based on dictionary learning [11], [12] extract local DT features via kernel sparse coding, which exhibits strong discriminative ability for classification in computer vision. Fourth, filter-based approaches [2], [13] have also been utilized for DT recognition.…”
Section: Introduction (mentioning)
confidence: 99%
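
The dictionary-learning methods cited above encode local DT features with (kernel) sparse coding. As a simplified, purely linear illustration (not the kernel formulation of [11], [12]), the sketch below densely samples space-time patches from a sequence, encodes each patch against a given dictionary `D` by keeping its largest projections, and max-pools the codes into a sequence-level descriptor; all names and parameter choices are ours.

```python
import numpy as np

def extract_patches(video, size=8, stride=4):
    """Densely sample size^3 space-time patches from a (T, H, W) grey-level
    video and return them as columns of a matrix."""
    T, H, W = video.shape
    patches = []
    for t in range(0, T - size + 1, stride):
        for y in range(0, H - size + 1, stride):
            for x in range(0, W - size + 1, stride):
                patches.append(video[t:t + size, y:y + size, x:x + size].reshape(-1))
    return np.stack(patches, axis=1).astype(float)       # (size**3, n_patches)

def video_descriptor(video, D, sparsity=5):
    """Encode each patch against dictionary D (atoms in columns) by keeping its
    `sparsity` largest projections, then max-pool absolute codes over the video."""
    X = extract_patches(video)
    X -= X.mean(axis=0, keepdims=True)                    # remove per-patch mean
    A = D.T @ X                                           # projections onto atoms
    kth = -np.sort(-np.abs(A), axis=0)[sparsity - 1]      # k-th largest per patch
    A[np.abs(A) < kth] = 0.0                              # keep top-k coefficients
    return np.abs(A).max(axis=1)                          # pooled descriptor
```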
“…A related approach is multi-scale binarised statistical image features (MBSIF-TOP), introduced by Arashloo and Kittler [17], which captures local image statistics by means of filters learned from data by independent component analysis. Orthogonal tensor dictionary learning (OTD) (Qu et al. [18]) is instead a sparse-coding-based approach for learning a dictionary for local space-time structure. Previous approaches using non-binary joint histograms for image analysis include Schiele and Crowley [19] and Linde and Lindeberg [6], but many later methods have used either marginal histograms or relative feature strength to capture image statistics.…”
Section: Related Work (mentioning)
confidence: 99%
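
The OTD approach of Qu et al. [18] learns dictionaries with orthogonal structure, which makes the sparse-coding step closed-form. Below is a minimal numpy sketch of the matrix (non-tensor) special case of this idea, alternating hard-thresholded coding with an orthogonal Procrustes dictionary update; the tensor factorisation and other details of the actual method are omitted, and the function is our own illustration, not the authors' code.

```python
import numpy as np

def learn_orthogonal_dictionary(X, sparsity, n_iter=30, seed=0):
    """Fit a square orthogonal dictionary D (D.T @ D = I) to the columns of X
    by alternating closed-form sparse coding with a Procrustes update."""
    d = X.shape[0]
    rng = np.random.default_rng(seed)
    D, _ = np.linalg.qr(rng.standard_normal((d, d)))      # orthogonal init
    for _ in range(n_iter):
        # Coding step: with an orthogonal D, keeping the top-k projections per
        # column exactly minimises the reconstruction error at sparsity k.
        A = D.T @ X
        kth = -np.sort(-np.abs(A), axis=0)[sparsity - 1]
        A[np.abs(A) < kth] = 0.0
        # Dictionary step: orthogonal Procrustes problem min ||X - D A||_F.
        U, _, Vt = np.linalg.svd(X @ A.T)
        D = U @ Vt
    return D
```

In the tensor formulation, the dictionary is structured across the spatial and temporal modes of the space-time patches rather than being a single flat matrix, but the alternation between a closed-form coding step and an orthogonality-constrained dictionary update follows the same pattern.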