2023
DOI: 10.3390/jpm13050874

Using EfficientNet-B7 (CNN), Variational Auto Encoder (VAE) and Siamese Twins’ Networks to Evaluate Human Exercises as Super Objects in a TSSCI Images

Abstract: In this article, we introduce a new approach to human movement by defining the movement as a static super object represented by a single two-dimensional image. The described method is applicable in remote healthcare applications, such as physiotherapeutic exercises. It allows researchers to label and describe the entire exercise as a standalone object, isolated from the reference video. This approach allows us to perform various tasks, including detecting similar movements in a video, measuring and comparing m…
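The abstract describes encoding a whole movement as one static two-dimensional image (a TSSCI image) rather than a video. The paper's exact construction is not spelled out in the excerpt, but a common way to realize this idea is to stack per-frame skeleton joint coordinates as image rows, so that time runs along one axis and joints along the other. The sketch below illustrates that idea only; the function name `skeleton_to_image` and the min-max normalization scheme are assumptions, not the authors' method.

```python
import numpy as np

def skeleton_to_image(seq: np.ndarray) -> np.ndarray:
    """Map a pose sequence of shape (frames, joints, 3) to a single
    2D RGB image: rows = frames, columns = joints, channels = x/y/z.
    Coordinates are min-max normalized per channel into [0, 255]."""
    lo = seq.min(axis=(0, 1), keepdims=True)
    hi = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - lo) / np.maximum(hi - lo, 1e-8)  # avoid divide-by-zero
    return (norm * 255).astype(np.uint8)

# A 60-frame clip of 17 joints becomes one 60x17 RGB image,
# which can then be fed to an image model such as EfficientNet-B7.
clip = np.random.rand(60, 17, 3)
img = skeleton_to_image(clip)
print(img.shape)  # (60, 17, 3)
```

Treating the exercise as such a "super object" is what lets standard image architectures (CNN, VAE, Siamese networks) compare whole movements in a single forward pass.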

Cited by 2 publications (2 citation statements)
References 26 publications
“…[9][10][11] However, there are applications in which the temporal context of salient motion is as essential as the motion itself, examples including video synopsis, 12,13 object tracking, 14 and action analysis. 15 In these applications, the temporal context of motion is an important semantic feature. Consequently, in this study, we proposed a new type of temporal context-aware motion saliency (TCAMS), including both motion and temporal-contextual information.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, Hobara et al analyzed publicly available internet broadcasts to determine the running characteristics of able-bodied and amputee sprinters in actual 100 m races at world championships [7]. Recently, advanced pose detection technologies have been used in various industries, such as sports [10], healthcare [11], and entertainment [12]. Therefore, we concluded that the combination of these cutting-edge technologies, namely, the analysis of publicly available fashion show video resources using pose detection technology and multivariate analysis techniques, can clarify the characteristics of the sophisticated walking styles of the world’s leading fashion models that are modified over time based on the demands of the industry.…”
Section: Introduction (mentioning)
confidence: 99%