2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00312
One-shot action recognition in challenging therapy scenarios

Cited by 24 publications (11 citation statements) | References 18 publications
“…III-C (the last group of approaches in Table I). The results are further validated through three-shot experiments. One-shot recognition accuracy (%): [39] 42.1; AP† [38] 42.9; APSR [26] 45.3; TCN-OneShot [55] 46.3; SL-DML [15] 50.9; Skeleton-DML [40] 54.… [Figure residue: plot comparing ProFormer and SL-DML.]…”
Section: B. Experiments Results (mentioning)
confidence: 73%
“…We perform comprehensive studies for one-shot human activity recognition from 3D body poses on three challenging datasets: NTU-60 [54], NTU-120 [26], and Toyota Smarthome [30]. To compare with competitive state-of-the-art methods for data-efficient recognition, we select NTU-120 as our primary test bed, as it is a well-established benchmark for one-shot recognition from 3D body poses [15], [26], [38], [39], [55]. Additionally, we adapt the evaluation protocols of Toyota Smarthome and NTU-60 to suit our data-scarce representation learning task.…”
Section: Experiments, A. Datasets and Implementation Details (mentioning)
confidence: 99%
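The one-shot protocol this excerpt refers to represents each novel class by a single labeled exemplar and assigns a query the class of the nearest exemplar in a learned embedding space. Below is a minimal sketch of that matching step, assuming a pre-computed skeleton encoder output and cosine similarity; both are assumptions here (SL-DML and Skeleton-DML, for instance, learn the embedding with deep metric learning), so this illustrates the evaluation procedure, not any specific paper's implementation.

```python
import numpy as np

def one_shot_classify(query_emb, exemplar_embs, exemplar_labels):
    """Assign each query the label of its nearest class exemplar.

    query_emb:       (Q, D) array of query embeddings
    exemplar_embs:   (C, D) array, one embedding per novel class
    exemplar_labels: (C,)   array of class ids
    """
    # Cosine similarity between every query and every exemplar
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    e = exemplar_embs / np.linalg.norm(exemplar_embs, axis=1, keepdims=True)
    sims = q @ e.T                       # (Q, C) similarity matrix
    return exemplar_labels[sims.argmax(axis=1)]

# Toy usage: 20 novel classes (as in the NTU-120 one-shot split), 64-d embeddings
rng = np.random.default_rng(0)
exemplars = rng.normal(size=(20, 64))
labels = np.arange(20)
queries = exemplars + 0.1 * rng.normal(size=(20, 64))  # noisy copies of exemplars
acc = (one_shot_classify(queries, exemplars, labels) == labels).mean()
print(f"one-shot accuracy on toy data: {acc:.2f}")
```

Reported one-shot accuracy is then simply the fraction of query sequences matched to the correct class exemplar.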
“…After the GCN module extracts movement-representation features for all nodes, we deploy a TCN module to learn features in the temporal domain [40]. Before entering the TCN module, we need to partition the whole sequence into multiple segments according to the temporal resolution for the anomaly-state localization task.…”
Section: A. Behavior Feature Extracting (mentioning)
confidence: 99%
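As a minimal PyTorch sketch of the pipeline this excerpt describes: a graph convolution over skeleton joints, pooling across joints, partitioning the frame sequence into fixed-length segments, and a temporal convolution (TCN) over the segment features. The single-layer GCN, identity adjacency matrix, segment length, and mean-pooling here are illustrative assumptions, not the cited paper's exact design.

```python
import torch
import torch.nn as nn

class GCNTCN(nn.Module):
    """Per-joint GCN features, then a TCN over temporal segments."""
    def __init__(self, adj, in_ch=3, gcn_ch=64, tcn_ch=128, seg_len=20):
        super().__init__()
        self.register_buffer("adj", adj)        # (J, J) normalized adjacency
        self.gcn = nn.Linear(in_ch, gcn_ch)     # one-layer GCN: A X W
        self.seg_len = seg_len
        self.tcn = nn.Conv1d(gcn_ch, tcn_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (B, T, J, C) -- batch, frames, joints, coordinates
        B, T, J, C = x.shape
        h = torch.relu(self.adj @ self.gcn(x))   # spatial message passing -> (B, T, J, gcn_ch)
        h = h.mean(dim=2)                        # pool over joints -> (B, T, gcn_ch)
        # Partition the sequence into fixed-length segments for localization
        n_seg = T // self.seg_len
        h = h[:, : n_seg * self.seg_len].reshape(B, n_seg, self.seg_len, -1)
        h = h.mean(dim=2)                        # one feature per segment -> (B, n_seg, gcn_ch)
        z = self.tcn(h.transpose(1, 2))          # temporal conv over segment features
        return z.transpose(1, 2)                 # (B, n_seg, tcn_ch)

# Toy usage: 2 clips, 100 frames, 25 joints (NTU skeleton layout), 3D coordinates
J = 25
adj = torch.eye(J)                               # identity in place of a real skeleton graph
model = GCNTCN(adj)
out = model(torch.randn(2, 100, J, 3))
print(out.shape)                                 # torch.Size([2, 5, 128])
```

Segmenting before the TCN yields one feature vector per time window, which is what lets the anomaly state be located within the sequence rather than only classified per clip.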
“…The capability to understand human intentions and goals allows a robot to discern when its assistance is most needed, thereby minimizing disruptions to human activities. A robot equipped with a human action recognition system can also be used to monitor the condition of patients to provide better daily-life assistance for their recovery [6], [7], assess the safety of its surroundings, issue warnings, and detect gestures for help in rescue missions. Challenges of image- or video-based …”
Section: Introduction (mentioning)
confidence: 99%