2019
DOI: 10.3390/robotics8040104
Accelerating Interactive Reinforcement Learning by Human Advice for an Assembly Task by a Cobot

Abstract: The assembly industry is shifting toward customizable products and small-batch assembly. This demands frequent reprogramming, which is expensive because a specialized engineer is needed. It would be an improvement if untrained workers could help a cobot learn an assembly sequence by giving advice. Learning an assembly sequence is a hard task for a cobot, because the solution space grows drastically as the complexity of the task increases. This work introduces a novel method whe…

Cited by 12 publications (6 citation statements)
References 22 publications (23 reference statements)
“…Applications of human-in-the-loop reinforcement learning to deal with real-world problems have been investigated in few papers. For example, in Shah et al [ 78 ], a human-in-the-loop reinforcement learning approach was presented for task-oriented dialogue management and in Winter et al [ 79 ], to program an assembly task, human-in-the-loop reinforcement learning approach was used.…”
Section: Results
confidence: 99%
“…In our previous work (De Winter et al, 2019), we applied an Interactive Reinforcement Learning (IRL) model to enable a cobot to assemble the Cranfield benchmark (Collins et al, 1985). The Cranfield benchmark comprises of 9 objects: base plate, top plate, square peg (×2), round peg (×2), pendulum, separator, and shaft (Figure 1).…”
Section: Methods
confidence: 99%
“…The challenge in (De Winter et al, 2019) was to teach the cobot the correct order of the objects to assemble the Cranfield benchmark correctly, which IRL was able to achieve. The applied IRL model generates a Q-table, which shows the appropriateness of selecting each object in different states.…”
Section: Methods
confidence: 99%
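The excerpt above describes an IRL model that fills a Q-table scoring which object to pick next in each assembly state. The idea can be sketched with a toy tabular Q-learning loop. Everything here — the object names, reward values, hyperparameters, and the `advice` dictionary — is an illustrative assumption, not the authors' implementation:

```python
import random
from collections import defaultdict

# Illustrative object set and target sequence (NOT the full 9-part Cranfield benchmark).
OBJECTS = ["pendulum", "base_plate", "top_plate", "shaft"]
CORRECT_ORDER = ["base_plate", "shaft", "pendulum", "top_plate"]

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, advice=None):
    """Tabular Q-learning over assembly states (= tuple of objects placed so far).

    `advice` is a hypothetical stand-in for human input: a dict mapping a state
    to the action the worker suggests, which the agent then follows directly.
    """
    Q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        placed, done = [], False
        while not done:
            state = tuple(placed)
            remaining = [o for o in OBJECTS if o not in placed]
            if advice and state in advice:
                action = advice[state]               # follow human advice
            elif random.random() < epsilon:
                action = random.choice(remaining)    # explore
            else:
                action = max(remaining, key=lambda a: Q[(state, a)])  # exploit
            if action == CORRECT_ORDER[len(placed)]:
                reward = 1.0                         # correct next object
                placed = placed + [action]
                done = len(placed) == len(OBJECTS)
            else:
                reward = -1.0                        # wrong object: episode ends
                done = True
            if done:
                best_next = 0.0
            else:
                ns = tuple(placed)
                best_next = max(Q[(ns, a)] for a in OBJECTS if a not in placed)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

def greedy_sequence(Q):
    """Read the learned assembly order out of the Q-table."""
    placed = []
    while len(placed) < len(OBJECTS):
        state = tuple(placed)
        remaining = [o for o in OBJECTS if o not in placed]
        placed.append(max(remaining, key=lambda a: Q[(state, a)]))
    return placed
```

Passing `advice={(): "base_plate"}` forces the advised action in the empty-table start state, so that state-action pair is reinforced immediately instead of being discovered by trial and error — a minimal illustration of how advice can shrink the search through the sequence space.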
“…The use of prior knowledge in the form of advice, feedback, and demonstrations has been proven to significantly increase the sample efficiency of the RL algorithms in single-agent settings e.g. [27,17,16,25,21,6]. But little has been done in the area in multi-agent settings e.g.…”
Section: Introduction
confidence: 99%