2020 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra40945.2020.9196658

Sample-Efficient Robot Motion Learning using Gaussian Process Latent Variable Models

Abstract: Robotic manipulators are reaching a state where we could see them in household environments within the next decade. Nevertheless, such robots need to be easy for lay people to instruct. This is why kinesthetic teaching, in which the robot is taught a motion encoded as a parametric function, usually a Movement Primitive (MP), has become very popular in recent years. This approach produces trajectories that are usually suboptimal, so the robot needs to be able to improve them through trial and error. Su…
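The abstract describes motions encoded as parametric functions, typically Movement Primitives, whose parameters are then refined through trial and error. As a rough illustration only (the abstract does not show the paper's exact MP formulation), the sketch below fits a demonstrated joint trajectory with a weighted sum of radial basis functions, so that the weight vector becomes the parameter vector a learner would later refine; the function names and the RBF parameterization are assumptions made here for illustration.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized radial basis features over time t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_mp_weights(t, y, n_basis=10):
    """Least-squares fit of one joint trajectory y(t) -> weight vector w."""
    phi = rbf_features(t, n_basis)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def reproduce(t, w):
    """Roll the primitive back out: y_hat(t) = Phi(t) @ w."""
    return rbf_features(t, n_basis=len(w)) @ w

# Hypothetical kinesthetic demonstration of a single joint angle.
t = np.linspace(0.0, 1.0, 200)
demo = 0.5 * np.sin(2 * np.pi * t) + 0.1 * t
w = fit_mp_weights(t, demo)   # the MP parameters a learner would adapt
print("weights:", np.round(w, 3))
print("reconstruction error:", np.abs(reproduce(t, w) - demo).max())
```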

Cited by 11 publications (10 citation statements); references 17 publications.
“…Additionally, it can be combined with vision algorithms to find relevant geometries in the environment, as done by Pumarola et al 2017 with an application to SLAM. Finally, the methodology can be applied with dimensionality reduction techniques, such as Gaussian Process Latent Variable Models (GPLVM) (Li and Chen 2016), in order to obtain a reduced-dimension feature space for policy learning, as in the works of Koganti et al 2017, Koganti et al 2019, and Delgado-Guerrero et al 2020…”
Section: Discussion
confidence: 99%
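The statement above uses GPLVM as the dimensionality-reduction step that yields a low-dimensional feature space for policy learning. The following is a minimal, self-contained GPLVM sketch (maximum-likelihood latent positions under an RBF kernel, optimized with SciPy), not the cited authors' implementation; the stack of MP weight vectors `W_demos` and the two-dimensional latent size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, lengthscale, variance):
    """Squared-exponential kernel matrix over latent points X (N x Q)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gplvm_nll(params, Y, Q, noise=1e-2):
    """Negative log marginal likelihood of a GPLVM: each column of Y ~ N(0, K)."""
    N, D = Y.shape
    X = params[:N * Q].reshape(N, Q)
    lengthscale, variance = np.exp(params[N * Q:])   # log-params stay positive
    K = rbf_kernel(X, lengthscale, variance) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, Y)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * D * logdet + 0.5 * np.sum(Y * alpha)

def fit_gplvm(Y, Q=2, seed=0):
    """Optimize latent positions and kernel hyperparameters jointly."""
    rng = np.random.default_rng(seed)
    N = Y.shape[0]
    x0 = np.concatenate([0.1 * rng.standard_normal(N * Q), np.zeros(2)])
    res = minimize(gplvm_nll, x0, args=(Y, Q), method="L-BFGS-B")
    return res.x[:N * Q].reshape(N, Q)

# Hypothetical stack of MP weight vectors from a handful of demonstrations.
rng = np.random.default_rng(1)
W_demos = rng.standard_normal((8, 20))   # 8 demos, 20 MP parameters each
Z = fit_gplvm(W_demos, Q=2)              # 2-D latent space for policy search
print("latent embedding shape:", Z.shape)
```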
“…As a future work, we intend to perform updates of c-GPLVM during the learning process, to consider task modulation according to time-varying context data, to create artificial data when needed by means of random variations of the context vector, to complete reward information with user ratings, and to use GPLVM extensions such as Bayesian GPLVM [32], [33]. Furthermore, this work, together with the one presented in [14], is planned to be extended and improved, including a more exhaustive evaluation of the algorithm, comparing it with several of the aforementioned state-of-the-art methods.…”
Section: Discussion
confidence: 99%
“…In this paper, we assume that tasks at hand can be modelled as MPs, physically executed and evaluated by means of a reward function considered as a black box, and prior information on parameters is available through initial demonstrations, but instead, we cannot model their dynamics. On this basis, the proposed solution continues along the research line presented in [14]. Therefore, to speed up convergence, we build a surrogate model of reward and apply Bayesian Optimization (BO) in the latent space arisen from a Dimensionality Reduction (DR) of the PS parameter space, since BO algorithms do not perform well with high-dimensional search spaces.…”
Section: Introduction
confidence: 99%
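The excerpt above motivates building a surrogate model of the reward and running Bayesian Optimization in the reduced latent space, since BO degrades in high-dimensional search spaces. Below is a generic expected-improvement BO loop over a hypothetical 2-D latent box using scikit-learn's GP regressor; the `reward` function, bounds, and trial budget are placeholders rather than the paper's experimental setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI acquisition for maximization, guarding against zero variance."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def reward(z):
    """Placeholder black-box reward over the 2-D latent space (assumed)."""
    return -np.sum((z - np.array([0.3, -0.2]))**2)

rng = np.random.default_rng(0)
bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])   # assumed latent-space box

# A few initial rollouts, e.g. the latent codes of the demonstrations.
Z = rng.uniform(bounds[:, 0], bounds[:, 1], size=(4, 2))
R = np.array([reward(z) for z in Z])

for it in range(20):                             # small trial budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(Z, R)                                 # surrogate model of reward
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    z_next = cand[np.argmax(expected_improvement(mu, sigma, R.max()))]
    Z = np.vstack([Z, z_next])                   # "execute" and record the outcome
    R = np.append(R, reward(z_next))

print("best latent point:", Z[np.argmax(R)], "reward:", R.max())
```

In a framework like the one quoted, each evaluated latent point would be decoded back to MP parameters and physically executed on the robot before its reward is recorded, rather than scored by a synthetic function as in this sketch.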
“…(GMM) [4], Kernelized Movement Primitives (KMP) [5] and Gaussian Process models (GP) [6]. In a recent work [7], we presented a GP-based LfD framework, which we adopt as a basis for this paper.…”
Section: Introduction
confidence: 99%