2002
DOI: 10.1007/3-540-36755-1_30
Macro-Operators in Multirelational Learning: A Search-Space Reduction Technique

Abstract: Refinement operators are frequently used in the area of multirelational learning (Inductive Logic Programming, ILP) in order to search systematically through a generality order on clauses for a correct theory. Only the clauses reachable by a finite number of applications of a refinement operator are considered by a learning system using this refinement operator; i.e., the refinement operator determines the search space of the system. For efficiency reasons, we would like a refinement operator to comput…
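The abstract's point that the refinement operator itself fixes the search space can be made concrete with a minimal sketch. Here clauses are modelled as frozensets of literal strings and the operator adds one literal at a time; all names and the tiny vocabulary are illustrative, not taken from the paper.

```python
# Minimal sketch of a downward refinement operator, assuming clauses
# are frozensets of literal strings (illustrative, not from the paper).

def refine(clause, vocabulary):
    """All one-literal specialisations of `clause`."""
    return [clause | {lit} for lit in vocabulary if lit not in clause]

def search_space(start, vocabulary, depth):
    """Clauses reachable from `start` in at most `depth` refinement steps.

    Only these clauses are ever considered by a learner using `refine`,
    i.e. the operator itself determines the search space.
    """
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {r for c in frontier for r in refine(c, vocabulary)}
        seen |= frontier
    return seen

space = search_space(frozenset(), ("p(X)", "q(X,Y)", "r(Y)"), 2)
# the empty clause, 3 one-literal clauses, and 3 two-literal clauses
```

A less permissive `refine` (e.g. one respecting variable modes) would shrink `space` accordingly, which is the efficiency concern the abstract raises.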


Cited by 7 publications (5 citation statements); references 10 publications.
“…As known by all ILPers, the predicate structure (arity, links) strongly influences learning performances. Better predicates can be introduced dynamically through refinement operators, as for instance in Lachiche and Flach (2003) and Peña-Castillo and Wrobel (2002). But the adequacy of a hypothesis language can also be experimentally assessed through systematic θ -subsumption tests, prior to learning.…”
Section: Discussion
confidence: 99%
“…Although propositionalisation approaches have been successfully applied to various problems, they are still considered ad hoc approaches. These approaches are studied in the larger context of macro-operators [45], which are approaches to improve the heuristic search in ILP systems and extract higher-level or meta-rules [46]. Pioneering work on the combination of neural networks and symbolic features has been done by d'Avila Garcez and Zaverucha [47] and extended in França et al [43,48].…”
Section: Related Work
confidence: 99%
“…Macros are automatically created based on a list of (user-declared or automatically determined) dependent providers. The macros construction algorithm we use differs from the one presented in (Peña Castillo & Wrobel, 2002) in the generation of macros with more than one dependent provider. By employing a macro-based refinement operator (ρ M ) in step 2(b)i of Algorithm 1, we obtain macro-based hill-climbing.…”
Section: Macro-operators
confidence: 99%
“…As proposed by Peña Castillo and Wrobel (2002), literals can be classified in providers and consumers. A literal p is a consumer of literal q if p has at least one input variable bound to an output argument value of q; conversely, q is a provider of p. Notice that these relations apply as well to the head literal.…”
Section: Reducing Hill-climbing's Myopia
confidence: 99%
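The provider/consumer relation quoted above is simple enough to state directly in code. The sketch below assumes each literal is modelled by its name plus sets of input and output variables; the class and the example clause are illustrative, not from the paper.

```python
# Sketch of the provider/consumer classification of literals,
# assuming a (hypothetical) Literal record with input/output variables.
from dataclasses import dataclass

@dataclass(frozen=True)
class Literal:
    name: str
    inputs: frozenset   # input variables of the literal
    outputs: frozenset  # output variables of the literal

def is_consumer_of(p, q):
    """True if p consumes q: at least one input variable of p is bound
    to an output argument of q (equivalently, q is a provider of p)."""
    return bool(p.inputs & q.outputs)

# Body literals of: path(X, Y) :- edge(X, Z), edge(Z, Y)
edge_xz = Literal("edge", frozenset({"X"}), frozenset({"Z"}))
edge_zy = Literal("edge", frozenset({"Z"}), frozenset({"Y"}))
# edge_zy consumes Z, which edge_xz provides; not vice versa.
```

As the citation notes, the same relation applies to the head literal, whose input variables act as outputs available to the body.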