2019
DOI: 10.1186/s12984-019-0557-1

Egocentric video: a new tool for capturing hand use of individuals with spinal cord injury at home

Abstract: Background: Current upper extremity outcome measures for persons with cervical spinal cord injury (cSCI) lack the ability to directly collect quantitative information in home and community environments. A wearable first-person (egocentric) camera system is presented that aims to monitor functional hand use outside of clinical settings. Methods: The system is based on computer vision algorithms that detect the hand, segment the hand outline, distinguish the user’s left or …

Cited by 33 publications (45 citation statements)
References 40 publications
“…Lastly, pixel-level segmentation was achieved by backprojecting using an adaptively selected region in the colour space. In [30], the coarse segmentation obtained with a mixture of Gaussian skin model [20], [29] was refined by using a structured forest edge detection [31], specifically trained on available datasets [19], [32].…”
Section: Discriminating Hands From Objects and Background (mentioning)
Confidence: 99%
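
For readers unfamiliar with the backprojection step mentioned in the excerpt above, the following is a minimal Python/OpenCV sketch of colour-histogram backprojection for hand segmentation. It illustrates the general technique only, not the pipeline of [30]; the histogram bin counts, the reference hand patch, and the binarisation threshold are assumptions chosen for readability.

```python
import cv2

def segment_hand(frame_bgr, hand_patch_bgr, threshold=50):
    """Backproject the H-S colour histogram of a reference hand patch onto a frame."""
    hsv_frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hsv_patch = cv2.cvtColor(hand_patch_bgr, cv2.COLOR_BGR2HSV)

    # 2-D hue-saturation histogram of the reference patch, normalised to [0, 255].
    # Bin counts (30 x 32) are illustrative, not taken from the cited work.
    hist = cv2.calcHist([hsv_patch], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Each frame pixel receives the likelihood of matching the patch's colour model.
    backproj = cv2.calcBackProject([hsv_frame], [0, 1], hist, [0, 180, 0, 256], 1)

    # Smooth the likelihood map and threshold it into a binary hand mask.
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    cv2.filter2D(backproj, -1, disc, backproj)
    _, mask = cv2.threshold(backproj, threshold, 255, cv2.THRESH_BINARY)
    return mask
```

In practice the reference colour model would be selected adaptively per frame, as the excerpt describes, rather than taken from a fixed patch.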
“…CaffeNet [75] was used for classifying the proposals. Faster R-CNN was used in [30], [77], [78]. In particular, Likitlersuang et al [30] fine-tuned the network on videos from individuals with cSCI performing ADLs.…”
Section: Hand Detection As Object Detection (mentioning)
Confidence: 99%
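
As a companion to the excerpt above, here is a minimal sketch of hand detection framed as object detection, using torchvision's Faster R-CNN. The two-class (background/hand) head and the 0.5 score threshold are assumptions for illustration; the fine-tuning on egocentric videos of individuals with cSCI performing ADLs described by Likitlersuang et al. [30] is not reproduced here.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_hand_detector(num_classes=2):  # class 0 = background, class 1 = hand
    # Start from a COCO-pretrained Faster R-CNN and swap in a two-class box head,
    # which would then be fine-tuned on hand-annotated egocentric frames.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

@torch.no_grad()
def detect_hands(model, frame_tensor, score_threshold=0.5):
    """frame_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    model.eval()
    output = model([frame_tensor])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["scores"][keep]
```

Fine-tuning would proceed with the standard torchvision detection training loop on frames annotated with hand bounding boxes.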