2014 IEEE International Conference on Robotics and Automation (ICRA) 2014
DOI: 10.1109/icra.2014.6907310
Estimating finger grip force from an image of the hand using Convolutional Neural Networks and Gaussian processes

Abstract: Estimating human fingertip forces is required to understand force distribution in grasping and manipulation. Human grasping behavior can then be used to develop force- and impedance-based grasping and manipulation strategies for robotic hands. However, estimating human grip force naturally is only possible with instrumented objects or unnatural gloves, thus greatly limiting the types of objects used. In this paper we describe an approach which uses images of the human fingertip to reconstruct grip force a…
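The pipeline the abstract describes (fingertip image in, grip force out) is essentially a regression problem. Below is a minimal sketch, assuming PCA features computed from grayscale fingertip crops feeding a Gaussian-process regressor via scikit-learn; the feature step, array shapes, and kernel choice are illustrative assumptions, not the paper's CNN-based implementation.

```python
# Minimal sketch (not the paper's implementation): Gaussian-process regression
# from fingertip-image features to a scalar grip force. The feature step
# (PCA on flattened grayscale crops) is an assumption for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_force_gp(images, forces, n_components=20):
    """images: (N, H, W) grayscale fingertip crops; forces: (N,) measured grip force in newtons."""
    X = images.reshape(len(images), -1).astype(np.float64)
    pca = PCA(n_components=n_components).fit(X)
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(pca.transform(X), forces)
    return pca, gp

def predict_force(pca, gp, images):
    Z = pca.transform(images.reshape(len(images), -1).astype(np.float64))
    return gp.predict(Z, return_std=True)  # force estimate with predictive uncertainty

# Usage with synthetic data (stand-in for real fingertip images and force labels):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.random((100, 32, 32))
    f = rng.uniform(0.0, 10.0, size=100)
    pca, gp = fit_force_gp(imgs, f)
    mean, std = predict_force(pca, gp, imgs[:5])
    print(mean, std)
```

A GP is a natural fit here because it returns a predictive standard deviation alongside the force estimate, which is useful when the fingertip appearance is ambiguous.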

Cited by 15 publications (14 citation statements) · References 17 publications (17 reference statements)

Citation statements, ordered by relevance:
“…velocity) based on gross assumptions of linear regression, which overfit to a simple movement pattern or participant cohort [7], however, researchers have reported success deriving kinematics from these devices for movement classification [40,52]. To improve on these methods, a number of research teams have sought to leverage computer vision and data science techniques, and while initial results appear promising, to date they lack validation to ground truth data, or relevance to specific sporting related tasks [8,49,53]. For example, Fluit et al [22] and Yang et al [57] derive GRF/M from motion capture.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
“…Previous computer vision and data science researchers have attempted to estimate GRF/Ms, however studies suffer from poor validation to ground truth data, are not sports related [17], [18], [19], require a full body modeling protocol with multiple inputs [20], or as before, predict only unidirectional GRF components (e.g. vertical F_z) [21].…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
“…13 [4] estimates force and torque using Gaussian processes (GP) and neural networks. Given the high accuracy obtained in [4][5][6] we use Gaussian processes to estimate force from the aligned images in this paper. We also explore other methods for the estimation such as Neural Networks, Convolutional Neural Networks and Recurrent Neural Networks.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
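As a companion to the GP sketch above, the last excerpt's mention of CNN-based force estimation from aligned fingertip images can be illustrated with a small PyTorch regressor. The architecture, input size, and training step below are assumptions for illustration only, not the published model of either the cited or the citing work.

```python
# Illustrative sketch only: a small CNN that regresses a scalar grip force from
# an aligned grayscale fingertip image. Layer sizes are assumed, not published.
import torch
import torch.nn as nn

class ForceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # global pooling to a 32-d descriptor
        )
        self.head = nn.Linear(32, 1)       # scalar grip-force estimate

    def forward(self, x):                  # x: (B, 1, H, W) aligned crops
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(-1)

# One regression step on dummy data (stand-in for image/force training pairs):
model = ForceCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)
forces = torch.rand(8) * 10.0
loss = nn.functional.mse_loss(model(images), forces)
opt.zero_grad(); loss.backward(); opt.step()
```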