2015 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) 2015
DOI: 10.1109/mfi.2015.7295812
Human intention inference through interacting multiple model filtering

Cited by 20 publications (11 citation statements)
References 21 publications
“…This inferred both the target object a user was reaching for and the completion of the user's reaching motion. The end of the reaching motion was used to switch to the next assembly step [17]. A limitation of G-MMIE is that its definition of 'action' is limited to the reaching event alone, so it does not account for the action's environmental consequence (such as grasping versus not grasping an object, and therefore switching to the next procedural step versus remaining in the same step).…”
Section: Intent Prediction Over Multiple Steps
confidence: 99%
“…(3) the Bayesian model, in which the human is treated as a stochastic agent [138] and a Bayesian network [110,117] is associated with his decisions, i.e. the human's motion follows a probability distribution conditioned on the robot's behavior.…”
Section: Design Of the Knowledge
confidence: 99%
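The statement above describes treating the human as a stochastic agent whose motion follows a probability distribution, with intention recovered by Bayesian inference. A minimal sketch of this idea, not taken from any of the cited works: a discrete Bayes update over candidate reaching goals, where the likelihood of each goal rises when the observed hand velocity points toward it. All names (`likelihood`, `infer_goal_posterior`, the `sigma` parameter) are illustrative assumptions.

```python
import numpy as np

def likelihood(position, velocity, goal, sigma=0.5):
    """Likelihood that the observed motion is directed toward `goal`:
    a Gaussian penalty on the misalignment between the heading and
    the direction from the current position to the goal."""
    to_goal = goal - position
    dist = np.linalg.norm(to_goal)
    speed = np.linalg.norm(velocity)
    if dist < 1e-9 or speed < 1e-9:
        return 0.5  # uninformative when at the goal or not moving
    misalignment = 1.0 - float((velocity / speed) @ (to_goal / dist))
    return np.exp(-misalignment ** 2 / (2 * sigma ** 2))

def infer_goal_posterior(prior, position, velocity, goals):
    """One Bayes update of the belief over candidate goals."""
    post = np.array([p * likelihood(position, velocity, g)
                     for p, g in zip(prior, goals)])
    return post / post.sum()

# Two candidate goals; the hand moves rightward from the origin,
# so the posterior should shift toward the goal on the right.
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
prior = np.array([0.5, 0.5])
post = infer_goal_posterior(prior, np.array([0.0, 0.0]),
                            np.array([0.3, 0.0]), goals)
```

Repeating this update at each observation yields a running belief over intentions, which is the same spirit as the multiple-model filtering in the paper above, where each candidate intention corresponds to one motion model.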
“…Different methods are used to answer this question, such as neural networks [3], gradient computation [23], or probability. Gaze is often used as a prior for performing an intended task (e.g., our work with ProMPs), for detecting the object of interest (e.g., [12] with neural networks), or for predicting the goal location (e.g., [22] with dynamic models). The main difference between our study and [12,22] is that those works address human motion prediction, whereas we associate human gaze with the robot's motions.…”
Section: Related Work
confidence: 99%