2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)
DOI: 10.1109/coase.2019.8843293
Multi-Task Hierarchical Imitation Learning for Home Automation

Cited by 25 publications (26 citation statements). References 22 publications.
“…A number of approaches have explored the learning of primitives or options from demonstrations, together with a high-level controller that is either learned from demonstrations [Kroemer et al., 2015, Krishnan et al., 2017, Ding et al., 2019, Lynch et al., 2020], learned from interactions with the environment [Manschitz et al., 2015, Kipf et al., 2019, Shankar et al., 2019], or hand-specified [Pastor et al., 2009, Fox et al., 2019]. AQuaDem can be loosely interpreted as a two-level procedure as well, where the primitives (the action discretization step) are learned fully offline; however, there is no concept of goals or temporally extended actions.…”
Section: Hierarchical Imitation Learning
confidence: 99%
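The two-level structure this excerpt describes can be summarized with a minimal sketch: a set of low-level primitives learned from demonstration segments and a high-level controller that picks among them at each step. The Python below is an illustrative assumption only; the names Primitive, HighLevelController, and rollout_step are hypothetical and do not correspond to the API or method of any cited work or of AQuaDem.

# Minimal sketch of a two-level hierarchical imitation policy, assuming the
# common decomposition described above. All names are hypothetical.
import numpy as np

class Primitive:
    """A low-level skill trained from demonstration segments (e.g. behavior cloning)."""
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # linear policy parameters, for illustration only

    def act(self, observation: np.ndarray) -> np.ndarray:
        # Map the observation to a continuous action; a real primitive would be
        # a learned network or a dynamic movement primitive.
        return self.weights @ observation

class HighLevelController:
    """Selects which primitive to run; could itself be cloned from demonstrations."""
    def __init__(self, primitives: list, scores: np.ndarray):
        self.primitives = primitives
        self.scores = scores  # per-primitive scoring weights, illustrative only

    def select(self, observation: np.ndarray) -> Primitive:
        logits = self.scores @ observation
        return self.primitives[int(np.argmax(logits))]

def rollout_step(controller: HighLevelController, observation: np.ndarray) -> np.ndarray:
    """One step of the two-level policy: pick a primitive, then query its action."""
    primitive = controller.select(observation)
    return primitive.act(observation)

# Example usage with a 4-dimensional observation and 2 primitives.
obs_dim, act_dim = 4, 2
prims = [Primitive(np.random.randn(act_dim, obs_dim)) for _ in range(2)]
ctrl = HighLevelController(prims, np.random.randn(len(prims), obs_dim))
action = rollout_step(ctrl, np.random.randn(obs_dim))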
“…We conducted a limited number of experiments: 4 sample scenes each for 7, 9 and 11 objects. We then ran both the HLP and the STO, producing a total of 24 robot runs. Sample results are shown in Fig.…”
Section: Real Robot Experiments
confidence: 99%
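The run count quoted in this excerpt follows from the stated setup, assuming one run per sampled scene per method (HLP and STO); the short calculation below is only a reading of the numbers in the excerpt, not additional data from the paper.

# 4 scenes for each of 3 object counts, run once with each of 2 methods.
scenes_per_object_count = 4
object_counts = [7, 9, 11]
methods = ["HLP", "STO"]
total_runs = scenes_per_object_count * len(object_counts) * len(methods)
print(total_runs)  # 24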
“…The evolution of manipulation tasks is another widely studied aspect that directly utilizes manipulation knowledge in robotics. Task evolution can be represented by structures such as semantic trees [6]-[9], state transition graphs [10], [11], or behavior trees [12], [13]. However, most of these evolution representations rely heavily on human annotations, and few of the aforementioned studies have discussed how to automatically acquire evolution representations on-scene.…”
Section: Related Work (A. Manipulation Knowledge in Robotics)
confidence: 99%
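One of the representations this excerpt mentions, a state transition graph, can be sketched briefly: nodes are symbolic scene states and edges are manipulation actions that transform one state into another. The Python below is a minimal illustrative assumption; the class name TaskTransitionGraph, the state and action strings, and the breadth-first plan routine are hypothetical and not taken from the cited works.

# Minimal sketch of a state transition graph for manipulation task evolution.
# All names and the example states are hypothetical illustrations.
from collections import defaultdict, deque

class TaskTransitionGraph:
    """Directed graph: nodes are symbolic scene states, edges are manipulation actions."""
    def __init__(self):
        self.edges = defaultdict(list)  # state -> list of (action, next_state)

    def add_transition(self, state: str, action: str, next_state: str) -> None:
        self.edges[state].append((action, next_state))

    def plan(self, start: str, goal: str) -> list:
        """Breadth-first search for a sequence of actions from start to goal."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, nxt in self.edges[state]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return []

# Example: a tiny pick-and-place evolution.
graph = TaskTransitionGraph()
graph.add_transition("cup_on_table", "grasp_cup", "cup_in_gripper")
graph.add_transition("cup_in_gripper", "place_in_sink", "cup_in_sink")
print(graph.plan("cup_on_table", "cup_in_sink"))  # ['grasp_cup', 'place_in_sink']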