2023
DOI: 10.1108/ria-09-2022-0226
Long-term robot manipulation task planning with scene graph and semantic knowledge

Abstract: Purpose Autonomous robots must be able to understand long-term manipulation tasks described by humans and perform task analysis and planning based on the current environment in a variety of scenes, such as daily manipulation and industrial assembly. However, both classical task and motion planning algorithms and single data-driven learning planning methods have limitations in practicability, generalization and interpretability. The purpose of this work is to overcome the limitations of the above methods and ac…

Cited by 10 publications (4 citation statements)
References 40 publications (48 reference statements)

“…The experimental tasks are defined with reference to previous work on robotic manipulation task planning [33]. Assuming a fixed scene and a given manipulation task, if prior manipulative knowledge regarding the performance or demonstration of the task is available in the knowledge base, we retrieve the template and instance modules from the knowledge base, call the corresponding action sequences and parameters, and transfer them to the robot for execution.…”
Section: Methods
confidence: 99%
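
The retrieval-and-reuse procedure described in the statement above (look up prior manipulative knowledge, fetch the template and instance modules, then hand the resulting action sequence to the robot) can be illustrated with a minimal sketch. The names below (KnowledgeBase, ActionStep, plan_or_reuse, robot.execute) are hypothetical placeholders under stated assumptions and do not reproduce the cited work's actual interfaces.

```python
# Minimal sketch of the retrieve-and-execute flow described above.
# KnowledgeBase, ActionStep and the robot interface are hypothetical
# placeholders; the cited work's data structures are not reproduced here.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ActionStep:
    name: str            # e.g. "grasp", "move", "place"
    parameters: dict     # action parameters stored with the instance module


class KnowledgeBase:
    """Stores template modules (task structure) and instance modules (parameters)."""

    def __init__(self):
        self._templates = {}   # task name -> ordered list of action names
        self._instances = {}   # (task name, scene id) -> parameters per action

    def lookup(self, task: str, scene: str) -> Optional[list[ActionStep]]:
        """Return the stored action sequence if prior knowledge exists, else None."""
        template = self._templates.get(task)
        instance = self._instances.get((task, scene))
        if template is None or instance is None:
            return None
        return [ActionStep(name, instance.get(name, {})) for name in template]


def plan_or_reuse(kb: KnowledgeBase, task: str, scene: str, robot) -> bool:
    """If manipulative knowledge for (task, scene) is available, transfer it to
    the robot for execution; otherwise report that fresh planning is needed."""
    steps = kb.lookup(task, scene)
    if steps is None:
        return False                                   # no prior knowledge: plan from scratch
    for step in steps:
        robot.execute(step.name, **step.parameters)    # assumed robot interface
    return True
```

Under these assumptions, a call such as plan_or_reuse(kb, "pour_water", "kitchen_scene", robot) returns False when no prior demonstration is stored, signalling that planning from scratch is required.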
“…With the advancements in computer vision and machine learning techniques, robots can now recognize, interpret and make decisions based on the perception information gathered (Qiao et al., 2022). This has resulted in the widespread use of robot perception in various applications, including the navigation of mobile robots, providing context-awareness for service robots (Miao et al., 2023), robot arm manipulation (Lin and Wang, 2021), manufacturing (Wan et al., 2022; Zeng et al., 2018), mobile robots (Qiu et al., 2019) and transportation guidance for logistics (Bloss, 2011).…”
Section: Introduction
confidence: 99%
“…Motion planning is a challenging task, since collision-free motion has to be achieved in a complex 3D environment, while also taking into account required contact with objects, including controlled intrusion (e.g., when cutting) [13,17]. In addition to motion planning, there must be efficient task planning that can divide the given recipe into individual tasks and orchestrate their execution [18,19].…”
Section: Introduction
confidence: 99%
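
The second sentence of the statement above, about task planning that divides a given recipe into individual tasks and orchestrates their execution, can be sketched as a simple dependency-ordered scheduler. The Task structure, the orchestrate function and the example recipe are illustrative assumptions, not the planners of the cited references [18,19].

```python
# Minimal sketch of recipe-level task orchestration as described above: a recipe
# is split into individual tasks whose execution order respects declared
# dependencies (a topological sort). All structures here are illustrative.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Task:
    name: str
    depends_on: list[str] = field(default_factory=list)


def orchestrate(recipe: list[Task]) -> list[str]:
    """Return an execution order over the recipe's tasks that respects dependencies."""
    remaining = {t.name: set(t.depends_on) for t in recipe}
    order: list[str] = []
    ready = deque(name for name, deps in remaining.items() if not deps)
    while ready:
        current = ready.popleft()
        order.append(current)
        for name, deps in remaining.items():
            if current in deps:
                deps.remove(current)
                if not deps and name not in order and name not in ready:
                    ready.append(name)
    if len(order) != len(recipe):
        raise ValueError("Recipe contains a cyclic dependency")
    return order


# Example: a simple cutting recipe decomposed into ordered tasks.
recipe = [
    Task("fetch_knife"),
    Task("fetch_vegetable"),
    Task("cut", depends_on=["fetch_knife", "fetch_vegetable"]),
    Task("transfer_to_bowl", depends_on=["cut"]),
]
print(orchestrate(recipe))  # ['fetch_knife', 'fetch_vegetable', 'cut', 'transfer_to_bowl']
```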