2015
DOI: 10.1007/978-3-319-23708-4_8

Learning Prime Implicant Conditions from Interpretation Transition

Abstract: In a previous work we proposed a framework for learning normal logic programs from transitions of interpretations. Given a set of pairs of interpretations (I, J) such that J = TP (I), where TP is the immediate consequence operator, we infer the program P. Here we propose a new learning approach that is more efficient in terms of output quality. This new approach relies on specialization in place of generalization. It generates hypotheses by specialization from the most general clauses until no negative transit…
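To make the setting concrete, here is a minimal Python sketch of the immediate consequence operator TP referred to above; the three-rule toy program and all names are illustrative assumptions, not material from the paper.

```python
from itertools import product

# Toy normal logic program, purely illustrative (not from the paper).
# Each rule is (head, positive_body, negative_body).
PROGRAM = [
    ("p", {"q"}, set()),   # p <- q
    ("q", {"p"}, {"r"}),   # q <- p, not r
    ("r", set(), {"p"}),   # r <- not p
]

def tp(program, interpretation):
    """Immediate consequence operator:
    TP(I) = { h | (h <- B+, not B-) in P, B+ subset of I, B- disjoint from I }."""
    return {head for head, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)}

# Enumerate the state transitions (I, J) with J = TP(I) over {p, q, r};
# pairs of this kind are the input of the learning task described above.
atoms = ["p", "q", "r"]
for bits in product([False, True], repeat=len(atoms)):
    I = {a for a, b in zip(atoms, bits) if b}
    print(sorted(I), "->", sorted(tp(PROGRAM, I)))
```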

Cited by 18 publications (18 citation statements) | References 17 publications
“…A modeling of both synchronous, asynchronous and general semantics is also proposed. From this solid theory a rather straightforward extension of the LF1T algorithm of [13] is proposed. LF1T is restricted to learning synchronous deterministic Boolean systems.…”
Section: Discussion (mentioning)
confidence: 99%
“…The table also provides the number of transitions generated by the semantics for each benchmark. It is important to note that those systems are all synchronous deterministic, meaning that in the synchronous case, the input transitions of GULA are the same as the input transitions of LF1T in [13]. Here the number of transitions in the synchronous case is much lower than for the random experiment, which explains the difference in terms of run-time.…”
(mentioning)
confidence: 91%
“…To build a logic program with LF1T , we use a bottom-up method that generates hypotheses by specialization from the most general rules that are fact rules, until the logic program is consistent with all input state transitions. Learning by specialization ensures to output the most general valid hypothesis (Ribeiro and Inoue, 2014 ). Here, the notion of prime implicant is used to define minimality of logic programs.…”
Section: Introduction (mentioning)
confidence: 99%
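The citation statement above describes learning by specialization from the most general rules (fact rules). A minimal sketch of a single specialization step, under the assumption that rules are propositional and bodies are sets of signed literals, might look as follows; it illustrates the idea only and is not the authors' LF1T implementation.

```python
# Simplified sketch of one specialization step (illustrative assumptions only).
# A rule is (head, body); body is a set of (atom, sign) literals, where sign
# True means "atom" and sign False means "not atom".

def covers(body, state):
    """True when the state satisfies every literal of the body."""
    return all((atom in state) == sign for atom, sign in body)

def specializations(rule, state, atoms):
    """All minimal specializations of `rule` that no longer cover `state`:
    each adds a single literal contradicted by that (negative) state."""
    head, body = rule
    used = {atom for atom, _ in body}
    for atom in atoms:
        if atom in used:
            continue
        # choose the sign that disagrees with the state, so the
        # specialized rule excludes it
        lit = (atom, atom not in state)
        yield (head, body | {lit})

# Example: the fact rule "p <- ." (the most general hypothesis for p)
# covers the negative state {q}; specialize it so it no longer does.
rule = ("p", frozenset())
negative_state = {"q"}
for head, body in specializations(rule, negative_state, ["p", "q", "r"]):
    assert not covers(body, negative_state)
    print(head, "<-", sorted(body))
```

Each specialization adds exactly one literal contradicted by the negative state, keeping the revised rule as general as possible; this is where the prime implicant notion of minimality mentioned in the quoted passage comes into play.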