2021
DOI: 10.48550/arxiv.2102.10556
Preprint

Inductive logic programming at 30

Andrew Cropper,
Sebastijan Dumančić,
Richard Evans
et al.

Abstract: Inductive logic programming (ILP) is a form of logic-based machine learning. The goal of ILP is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we survey recent work in the field. In this survey, we focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs that generalise from few examples, (iii) new approaches for predicate invention, and (iv) the use of different technologies, notably answer set prog…
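The ILP setting described in the abstract can be illustrated with a minimal sketch (a hypothetical example, not taken from the survey): background knowledge as `parent/2` facts from a small family tree, positive and negative examples of the target predicate `grandparent/2`, and one candidate hypothesis checked against the examples.

```python
# Background knowledge: parent(X, Y) facts from a small family tree.
parent = {("ann", "bob"), ("bob", "carl"), ("bob", "dana")}

# Training examples for the target predicate grandparent/2.
positives = {("ann", "carl"), ("ann", "dana")}
negatives = {("bob", "ann"), ("carl", "dana")}

def grandparent(x, z):
    """Candidate hypothesis: grandparent(X,Z) :- parent(X,Y), parent(Y,Z)."""
    people = {p for pair in parent for p in pair}
    return any((x, y) in parent and (y, z) in parent for y in people)

# An ILP system searches a space of such clauses for a hypothesis that
# covers every positive example and no negative example; here we merely
# verify this single candidate against the training data.
assert all(grandparent(x, z) for x, z in positives)
assert not any(grandparent(x, z) for x, z in negatives)
print("hypothesis covers all positives and no negatives")
```

In a real ILP system the rule body would not be hand-written: the learner composes it from the background predicates, which is why the quality of that background knowledge matters so much (a point the citing statement below takes up).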

Cited by 1 publication (1 citation statement); references 74 publications (185 reference statements).
“…In very simplified domains such as family tree reasoning (a common benchmark in the ILP community) or blocks world, it may be reasonable to write down an initial background knowledge theory that is sufficient to support induction of a complete set of predicates and rules. However, the background knowledge is "similar to features used in most forms of ML" (Cropper et al, 2021). Just as deep learning has substantially reduced the need to handcode feature functions for many problems, it is possible that the current reliance on handcoded background knowledge theories is a substantial limitation in learning logical representations for embodied intelligent systems.…”
Section: Limitations and Challenges
confidence: 99%