2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020
DOI: 10.1109/iros45743.2020.9341470
Representation and Experience-Based Learning of Explainable Models for Robot Action Execution

Cited by 11 publications (24 citation statements) | References 23 publications
“…For both actions, we learn dedicated execution models (using guided learning as in [8]) for each of the following objects in order to have a reasonable variety of models: apple, chips can, sugar box, mug, and tennis ball. Each model is learned using 25 executions of the action; the remaining objects are used for testing the generalisation.…”
Section: Methods (mentioning; confidence: 99%)
“…To find the planar orientation of the object with respect to the robot, we use a RANSAC-like procedure [27] that finds the direction of 2D lines fitted to randomly selected points from the object's point cloud; the average of these line orientations is taken to represent the object's orientation. The relational model of the grasping action is extracted from the following relations (similar to [8]):…”
Section: Methods (mentioning; confidence: 99%)