2022
DOI: 10.1109/access.2022.3201894
Finger Joint Angle Estimation With Visual Attention for Rehabilitation Support: A Case Study of the Chopsticks Manipulation Test

Abstract: Most East Asian rehabilitation centers offer chopsticks manipulation tests (CMT). In addition to impaired hand function, approximately two-thirds of stroke survivors have visual impairment related to eye movement. This article investigates the significance of combining finger joint angle estimation and a visual attention measurement in CMT. We present a multiscopic framework that consists of microscopic, mesoscopic, and macroscopic levels. We develop a feature extraction technique to extract the kinematic fing…

Cited by 5 publications (3 citation statements)
References 65 publications (75 reference statements)
“…We used the YOLOv5 [24] model to extract the object's location from picture frames represented by the bounding box and labels. To improve this detection, the Simple Online Real-time Tracking (SORT) [29] technique was used.…”
Section: Microscopic Level: Feature Extraction Ability
Mentioning confidence: 99%
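The quoted pipeline pairs a per-frame detector (YOLOv5) with SORT to link detections across frames. The core of SORT's data association can be sketched as greedy IoU matching between existing tracks and new detections. This is a hypothetical simplification for illustration only: the actual SORT algorithm also uses a Kalman filter for motion prediction and the Hungarian algorithm for optimal assignment, both omitted here, and the function names (`iou`, `associate`) are not from the cited work.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match track boxes to detection boxes by IoU.

    tracks:     {track_id: box} from the previous frame
    detections: list of boxes from the current frame (e.g. YOLOv5 output)
    Returns {track_id: detection_index} for matches above the threshold.
    """
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_threshold
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# A track at [0, 0, 10, 10] overlaps the detection [1, 1, 11, 11]
# strongly enough to be matched; a far-away detection is left unmatched.
print(associate({0: [0, 0, 10, 10]}, [[1, 1, 11, 11]]))    # → {0: 0}
print(associate({0: [0, 0, 10, 10]}, [[50, 50, 60, 60]]))  # → {}
```

Unmatched detections would spawn new tracks and repeatedly unmatched tracks would be dropped; the threshold trades identity switches against lost tracks.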
“…From an egocentric standpoint, it is critical to support the current system. With a case study on the Chopsticks Manipulation Test, we examined the significance of combining finger joint angle estimation and a visual attention measurement in hand rehabilitation [24]. Our previous work used a multiscopic method to address dynamic locomotion in a legged robot [25] and simulation for human-robot interactions [26].…”
Mentioning confidence: 99%
“…Many efforts have been made to recognize human activities using images and videos with 2D and 3D visual information [5]- [8]. Semantics and the context of a situation are usually used for classification, which involves typical HOI [9]- [11]. However, most of their frameworks require massive labelled datasets and much training time to achieve a high level of accuracy.…”
Section: Introduction
Mentioning confidence: 99%