2014
DOI: 10.48550/arxiv.1412.0691
Preprint

RoboBrain: Large-Scale Knowledge Engine for Robots

Abstract: In this paper we introduce a knowledge engine, which learns and shares knowledge representations, for robots to carry out a variety of tasks. Building such an engine brings with it the challenge of dealing with multiple data modalities including symbols, natural language, haptic senses, robot trajectories, visual features and many others. The knowledge stored in the engine comes from multiple sources including physical interactions that robots have while performing tasks (perception, planning and control), kno…

Cited by 22 publications (30 citation statements)
References 45 publications
“…Prior work in open-world planning has developed techniques to make robots programmed by expert users more robust to environment variation through the use of commonsense or domain knowledge [19], [20], [21], [22], [23]. Open-world planners can achieve plan goals in environments with variations [19], under-specified plans [21], [24], and partially observed worlds [20] by coupling a knowledge base with declarative programming planners, such as PDDL [19], CRAM [21], or ASP [20].…”
Section: Related Work
confidence: 99%
“…Our domain knowledge representation uses an explicit model of world semantics in the form of a knowledge graph G composed of individual facts or triples (h, r, t) with h and t being the head and tail entities (respectively) for which the relation r holds, e.g. (cup, hasAction, fill) [26], [23], [22], [27].…”
Section: A Knowledge Representation
confidence: 99%
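The (h, r, t) triple representation quoted above can be sketched as a small store of facts with pattern queries. This is a minimal illustration, not any cited system's implementation; all entity and relation names beyond the text's (cup, hasAction, fill) example are hypothetical.

```python
# Knowledge graph G as a set of (head, relation, tail) triples for which
# the relation holds, as described in the quoted passage.
G = {
    ("cup", "hasAction", "fill"),      # example triple from the text
    ("cup", "isA", "container"),       # hypothetical additional facts
    ("kettle", "hasAction", "pour"),
}

def query(graph, head=None, relation=None, tail=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return sorted(
        (h, r, t)
        for (h, r, t) in graph
        if (head is None or h == head)
        and (relation is None or r == relation)
        and (tail is None or t == tail)
    )

# What actions does a cup afford?
actions = query(G, head="cup", relation="hasAction")
```

A planner coupled to such a store would look up triples like these to decide, e.g., that a cup can be filled.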
“…Modeling Semantic Knowledge in Robotics methods commonly use an explicit model of world semantics in the form of a knowledge graph G composed of individual facts or triples (h, r, t) with h and t being the head and tail entities (respectively) for which the relation r holds, e.g. (cup, hasAction, fill) [21], [22], [23], [24]. Recent work has modeled G using distributed representations because of their ability to approximate proximity of meaning from vector computations [1], [2], [3], [4], [5], [6].…”
Section: Related Work and Background
confidence: 99%
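The passage above also mentions modeling G with distributed representations, where vector computations approximate proximity of meaning. A toy sketch in that spirit (a TransE-style translation score; all vectors are made-up illustrative values, not learned embeddings):

```python
# Each entity/relation gets a vector; a triple (h, r, t) is scored by how
# close head + relation lands to tail: score = -||h + r - t||.
# Higher (closer to 0) means the triple is more plausible.
emb = {
    "cup":       [1.0, 0.0],
    "hasAction": [0.0, 1.0],
    "fill":      [1.0, 1.0],
    "sing":      [5.0, 5.0],   # hypothetical distractor entity
}

def score(h, r, t):
    """Negative Euclidean distance between (h + r) and t."""
    diff = [emb[h][i] + emb[r][i] - emb[t][i] for i in range(2)]
    return -sum(d * d for d in diff) ** 0.5

good = score("cup", "hasAction", "fill")   # (1,0)+(0,1) == (1,1): distance 0
bad = score("cup", "hasAction", "sing")    # far from (1,1): large negative
```

The point of such embeddings, per the citing papers, is that plausibility of unseen triples can be estimated from vector geometry rather than exact lookup.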
“…Thus, most robotic systems of the future will maintain associated databases storing past sensor data. The nascent interdisciplinary work between robotics and databases includes knowledge engines [25], failure provenance [22], and notably, Vroom [19], a project with similar ambitions. To handle robotic perception data, Vroom memoizes the incoming data using pre-existing classifiers, typically done onboard the robot.…”
Section: Introduction
confidence: 99%