2008
DOI: 10.1016/j.neunet.2008.03.011

Teleoperation for a ball-catching task with significant dynamics

Abstract: In this paper we present ongoing work on how to incorporate human motion models into the design of a high-performance teleoperation platform. A short description of human motion models used for ball-catching is followed by a more detailed study of a teleoperation platform on which to conduct experiments. Also, a pilot study using minimum jerk theory to explain user input behavior in teleoperated catching is presented.

Cited by 12 publications (13 citation statements)
References 43 publications
“…Our formalism and analysis build on machine learning, control theory, and human-robot interaction to provide insight into shared control. We suggest possible challenges, as well as opportunities that could arise from the tight interaction between the robot and the user: adaptation to the context and the user, predicting and expressing intent, and capitalizing on the user's reactions. [A comparison table of prior shared-control work (e.g. Rosenberg, 1993; Marayong et al., 2002, 2003; Debus et al., 2000; You and Hauser, 2011; Kim et al., 2011; Leeper et al., 2012; Demiris and Hayes, 2002; Fagg et al., 2004; Crandall and Goodrich, 2002; Aigner and McCarragher, 1997; Gerdes and Rossetter, 2001; Yu et al., 2005; Kofman et al., 2005; Shen et al., 2004; Smith et al., 2008; Anderson et al., 2010; Loizou and Kumar, 2007; Weber et al., 2009; Aarno et al., 2005; Vasquez et al., 2005; Ziebart et al., 2009) was flattened during extraction and is omitted here.] These challenges and opportunities are not only applicable to shared control, but conceivably to human-robot collaboration in general.…”
Section: Robot Prediction
confidence: 99%
“…[A comparison table of related work ([1]-[14], [18], [20]-[24]) was flattened during extraction and is omitted here.] The robot is able to generate a full policy for completing the task on its own, rather than an attractive/repulsive force or a constraint (e.g. [9], [20]).…”
Section: Methods
confidence: 99%
“…Aside from work that classifies which of a predefined set of paths or behaviors the user is currently engaging in [10], [11], most work assumes the robot has access to the user's intent, e.g. that it knows which object to grasp and how (except [22], which deals with time delays in ball catching by projecting the input forward in time using a minimum-jerk model). Predicting or recognizing intent has received much attention outside the teleoperation domain, dating back to high-level plan recognition [26].…”
Section: Methods
confidence: 99%
“…Thus, there is only one possible MJ trajectory for a given start and end, and it can be found by solving a simple system of linear equations. This is illustrated in Fig. 1, which shows a human reaching motion recorded in an experiment in [16] with the MJ trajectory superimposed.…”
Section: B. Minimum Jerk Motion Model
confidence: 99%
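The last statement above — that a minimum-jerk (MJ) trajectory is uniquely determined by its boundary conditions and found by solving a small linear system — can be sketched concretely. The MJ trajectory is a quintic polynomial, so its six coefficients are fixed by position, velocity, and acceleration at the start and end. The code below is a minimal illustration under that standard formulation, not an implementation from the paper; function names and the example numbers are our own.

```python
import numpy as np

def min_jerk_coeffs(x0, xf, T, v0=0.0, a0=0.0, vf=0.0, af=0.0):
    """Coefficients c of the quintic x(t) = sum_k c[k] * t**k minimizing jerk,
    given position/velocity/acceleration at t=0 and t=T (6 linear constraints)."""
    A = np.array([
        [1, 0, 0,      0,       0,       0],        # x(0)   = x0
        [0, 1, 0,      0,       0,       0],        # x'(0)  = v0
        [0, 0, 2,      0,       0,       0],        # x''(0) = a0
        [1, T, T**2,   T**3,    T**4,    T**5],     # x(T)   = xf
        [0, 1, 2*T,    3*T**2,  4*T**3,  5*T**4],   # x'(T)  = vf
        [0, 0, 2,      6*T,     12*T**2, 20*T**3],  # x''(T) = af
    ], dtype=float)
    b = np.array([x0, v0, a0, xf, vf, af], dtype=float)
    return np.linalg.solve(A, b)  # unique solution: one MJ trajectory

def min_jerk_pos(c, t):
    """Evaluate the polynomial at time t."""
    return sum(ck * t**k for k, ck in enumerate(c))

# Hypothetical rest-to-rest reach: 0 m to 0.3 m in 1 s.
c = min_jerk_coeffs(0.0, 0.3, 1.0)
# For rest-to-rest motion the MJ profile is symmetric, so x(T/2) is the midpoint.
```

For rest-to-rest boundary conditions the solution reduces to the familiar closed form x(t) = x0 + (xf − x0)(10τ³ − 15τ⁴ + 6τ⁵) with τ = t/T, which is what makes MJ models attractive for forward-projecting user input as in the ball-catching work discussed above.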