2019
DOI: 10.1117/1.jei.28.2.023012

RGB-D action recognition based on discriminative common structure learning model

Cited by 8 publications (9 citation statements). References: 0 publications.
“…Domestic scholars have also made many contributions to image aesthetics assessment. Liu et al. [17] extracted low-level visual features, high-level aesthetic features, and visual area features from the overall area and the visually critical areas of an image, and established an image aesthetic classifier and an aesthetic score assessment model.…”
Section: Research Status (mentioning)
confidence: 99%
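A minimal sketch of the pipeline that excerpt describes (global and region features feeding a classifier and a score model). The feature extractor, the SVM/SVR choices, and the synthetic data are assumptions made for demonstration, not the cited paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC, SVR

def extract_features(image, salient_region):
    # Hypothetical stand-in for the low-level, high-level aesthetic, and
    # visual-area features in the excerpt: mean colour of the whole image
    # and of the visually critical region.
    return np.concatenate([image.mean(axis=(0, 1)), salient_region.mean(axis=(0, 1))])

rng = np.random.default_rng(0)
images = rng.random((20, 64, 64, 3))        # synthetic images
regions = rng.random((20, 32, 32, 3))       # synthetic "visually critical" crops
X = np.stack([extract_features(im, rg) for im, rg in zip(images, regions)])
y_cls = np.tile([0, 1], 10)                 # high/low aesthetic labels (synthetic)
y_score = rng.random(20)                    # aesthetic scores (synthetic)

classifier = SVC().fit(X, y_cls)            # image aesthetic classifier
score_model = SVR().fit(X, y_score)         # aesthetic score assessment model
```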
“…Most of the existing RGB-D-based methods only stitch the original heterogeneous features together and do not uncover the potential relationships between different modalities. Liu et al. 20 extracted deep-learning-based features and hand-crafted features from multimodal data (skeleton, depth, and RGB) and used generalized ensemble matrix decomposition to learn shared features between the different modalities.…”
Section: Related Work (mentioning)
confidence: 99%
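The excerpt names generalized ensemble matrix decomposition without giving its formulation, so the following is only a generic sketch of the underlying idea: learning one shared latent representation by jointly factorizing per-modality feature matrices with alternating least squares. The regularization, latent dimension, and synthetic data are assumptions.

```python
import numpy as np

def shared_factorization(modalities, k=10, iters=100, lam=1e-3):
    # modalities: list of (n_samples, d_m) feature matrices, one per modality.
    # Approximates each X_m as H @ W_m.T with a single shared H across modalities.
    n = modalities[0].shape[0]
    rng = np.random.default_rng(0)
    H = rng.random((n, k))                   # shared features, one row per sample
    I = lam * np.eye(k)
    for _ in range(iters):
        # With H fixed, solve a ridge-regularized least-squares problem for each W_m.
        Ws = [np.linalg.solve(H.T @ H + I, H.T @ X).T for X in modalities]
        # With all W_m fixed, update the shared representation H.
        A = sum(W.T @ W for W in Ws) + I
        B = sum(X @ W for X, W in zip(modalities, Ws))
        H = np.linalg.solve(A, B.T).T
    return H, Ws

rng = np.random.default_rng(1)
skeleton = rng.random((50, 75))   # synthetic skeleton features
depth = rng.random((50, 128))     # synthetic depth-map features
rgb = rng.random((50, 256))       # synthetic RGB features
H, Ws = shared_factorization([skeleton, depth, rgb], k=10)
print(H.shape)  # (50, 10): shared features used in place of simple concatenation
```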
“…To increase the stability of the CAE KNN, we propose a novel algorithm, namely attCAE KNN, which is the first to apply an attention strategy to a CAE. Attention strategy (Xu et al. 2015; Gregor et al. 2015) … kNN Module … The first part is the encoder, which consists of the channel attention block and the spatial attention block (Liu et al. 2019); it differs from the classical encoder in inserting CBAM, as shown in Fig. 4.…”
Section: The Attention-CAE-KNN-Based Approach (mentioning)
confidence: 99%
“…makes the attCAE KNN focus on 'what' is meaningful for the given astronomical images, so that attCAE KNN can ignore the background noise. We build attCAE KNN by adopting a convolutional block attention module (CBAM; Liu et al. 2019). Its architecture is shown in Fig. 3, including the encoder, decoder, and KNN module. The decoder and KNN module have been described in Sect.…”
(mentioning)
confidence: 99%
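Both excerpts describe inserting a channel attention block followed by a spatial attention block (CBAM) into an encoder. The sketch below shows such a module in PyTorch; the reduction ratio, kernel size, and the surrounding convolutional layer are illustrative assumptions rather than the cited architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        # Pool over spatial dimensions, score each channel, rescale the input.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool over channels, score each spatial location, rescale the input.
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# CBAM inserted after a convolutional layer of an encoder, as the excerpt describes.
block = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), CBAM(16))
out = block(torch.randn(2, 1, 64, 64))   # -> torch.Size([2, 16, 64, 64])
```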