2022
DOI: 10.1177/02783649221143399
Supervised learning and reinforcement learning of feedback models for reactive behaviors: Tactile feedback testbed

Abstract: Robots need to be able to adapt to unexpected changes in the environment such that they can autonomously succeed in their tasks. However, hand-designing feedback models for adaptation is tedious, if at all possible, making data-driven methods a promising alternative. In this paper, we introduce a full framework for learning feedback models for reactive motion planning. Our pipeline starts by segmenting demonstrations of a complete task into motion primitives via a semi-automated segmentation algorithm. Then, g…

Cited by 2 publications (5 citation statements); references 67 publications (110 reference statements).
“…To incorporate movement-phase dependency into the feedback model, Sutanto et al [33] proposed phase-modulated neural networks (PMNNs), which can learn phase-dependent feedback models. Building upon this, Sutanto et al [1] presented a full framework for learning feedback models for reactive motion planning and used a sample-efficient RL algorithm to fine-tune these feedback models for novel tasks through a limited number of interactions with the real system. Notably, all of these sensory feedback models act on the skill model as a coupling term of the DMPs.…”
Section: Related Work
confidence: 99%
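The statement above describes the feedback model entering the DMP as a coupling term added to the primitive's acceleration. A minimal sketch of that coupling, with illustrative gains and hypothetical callable names (`forcing`, `coupling`) not taken from the cited papers:

```python
import numpy as np

def dmp_rollout(y0, goal, forcing, coupling, alpha=25.0, beta=6.25,
                alpha_x=8.0, dt=0.01, steps=1000):
    """Euler rollout of a 1-D dynamic movement primitive (DMP) whose
    acceleration includes a learned feedback coupling term.

    forcing(x) is the learned shape term and coupling(x) the feedback
    model's correction, both functions of the phase variable x.
    Gains and names here are illustrative, not the papers' values.
    """
    y, dy, x = y0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        # spring-damper toward the goal, plus shape and feedback terms
        ddy = alpha * (beta * (goal - y) - dy) + forcing(x) + coupling(x)
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x * dt   # canonical system: phase decays from 1 to 0
        traj.append(y)
    return np.array(traj)

# With zero forcing and zero feedback the primitive converges to the goal.
traj = dmp_rollout(0.0, 1.0, forcing=lambda x: 0.0, coupling=lambda x: 0.0)
```

Because the coupling term is additive, the feedback model can perturb the nominal motion online (e.g., from tactile deviations) without retraining the shape term.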
See 4 more Smart Citations
“…To incorporate the movement phase dependency into the feedback model, Sutanto et al [33] proposed phase-modulated neural networks (PMNNs), which could learn phase-dependent feedback models. Building upon this, Sutanto et al [1] presented a full framework for learning feedback models for reactive motion planning and used a sample-efficient RL algorithm to fine-tune these feedback models for novel tasks through a limited number of interactions with the real system. It is worth noting that all these sensor feedback models are involved in the tuning of the skill model as one term of the DMPs.…”
Section: Related Workmentioning
confidence: 99%
“…Moreover, to address the challenges of exploding and vanishing gradients, the gated recurrent unit (GRU) is used to learn time-series information, as it is more computationally efficient than the long short-term memory (LSTM) [34]. Furthermore, the action phase is incorporated into the network architecture so that the feedback model depends on the phase evolution [1], improving the scalability of the skill model in the time domain. Fig.…”
Section: Force Feedback Learning Model: Phase-Modulated Diagonal Recu...
confidence: 99%
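The phase-dependence described above can be sketched as a small feedforward network whose hidden units are gated by radial basis activations of the phase variable, loosely in the spirit of PMNNs. All weight shapes, names, and the gating scheme below are illustrative assumptions, not the cited architecture:

```python
import numpy as np

def phase_kernels(x, centers, width=50.0):
    """Normalized Gaussian basis activations over the phase variable x."""
    psi = np.exp(-width * (x - centers) ** 2)
    return psi / psi.sum()

def phase_modulated_feedback(sensor_delta, x, W_hid, W_out, centers):
    """Sketch of a phase-modulated feedback model: a hidden layer embeds
    the sensory deviation, phase kernels gate the hidden units, and a
    linear readout produces the DMP coupling term."""
    h = np.tanh(W_hid @ sensor_delta)   # ordinary hidden layer
    gate = phase_kernels(x, centers)    # phase-dependent modulation
    return W_out @ (h * gate)           # gated readout -> coupling term

rng = np.random.default_rng(0)
n_sensor, n_hidden, n_out = 6, 25, 3
W_hid = rng.normal(scale=0.5, size=(n_hidden, n_sensor))
W_out = rng.normal(scale=0.5, size=(n_out, n_hidden))
centers = np.linspace(0.0, 1.0, n_hidden)

delta = rng.normal(size=n_sensor)   # sensory deviation from expectation
out_early = phase_modulated_feedback(delta, 0.9, W_hid, W_out, centers)
out_late = phase_modulated_feedback(delta, 0.1, W_hid, W_out, centers)
```

The same sensory deviation yields a different correction at different phases, which is the property the gating is meant to provide; a recurrent (e.g., GRU) embedding of the sensor history could replace the feedforward layer without changing the gating.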
See 3 more Smart Citations