2013
DOI: 10.1007/s10994-013-5353-8
Learning from interpretation transition

Abstract: We propose a novel framework for learning normal logic programs from transitions of interpretations. Given a set of pairs of interpretations (I, J) such that J = T_P(I), where T_P is the immediate consequence operator, we infer the program P. The learning framework can be repeatedly applied for identifying Boolean networks from basins of attraction. Two algorithms have been implemented for this learning task, and are compared using examples from the biological literature. We also show how to incorporate backg…
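To make the learning setting concrete, here is a minimal sketch (not the authors' implementation) of the immediate consequence operator T_P for a propositional normal logic program; the rule representation, the toy program and the helper name t_p are assumptions made for this illustration.

```python
# Sketch of the immediate consequence operator T_P.
# A rule is (head, positive_body, negative_body); an interpretation is a set of atoms.

def t_p(program, interpretation):
    """Return T_P(I): heads of all rules whose body is satisfied by I."""
    next_state = set()
    for head, pos_body, neg_body in program:
        if pos_body <= interpretation and not (neg_body & interpretation):
            next_state.add(head)
    return next_state

# A tiny Boolean network viewed as a logic program:
#   p <- q.      q <- p, not r.      r <- not p.
program = [
    ("p", {"q"}, set()),
    ("q", {"p"}, {"r"}),
    ("r", set(), {"p"}),
]

# One interpretation transition (I, J) with J = T_P(I).
I = {"p", "q"}
J = t_p(program, I)
print(sorted(J))   # ['p', 'q']
```

A learner in this framework is given many such (I, J) pairs and must recover a program P whose T_P reproduces all of them.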

Cited by 58 publications (99 citation statements)
References 43 publications
“…Another framework, under the supported model semantics rather than the answer set semantics, is Learning from Interpretation Transitions (LFIT) [47]. In LFIT, the examples are pairs of interpretations (I, J) where J is the set of immediate consequences of I given B ∪ H.…”
Section: Other Learning Framework (mentioning)
confidence: 99%
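As an illustration of the example semantics described in this statement, the sketch below checks whether a candidate program B ∪ H reproduces a set of observed transitions, i.e. whether T_{B∪H}(I) = J for every example (I, J). The helper names covers and t_p, and the toy rules, are assumptions of this sketch and not part of any cited system.

```python
# Check a hypothesis against LFIT-style transition examples.

def t_p(program, interpretation):
    """Immediate consequences of an interpretation under a propositional program."""
    return {h for h, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)}

def covers(background, hypothesis, examples):
    """True iff T_{B∪H}(I) = J for every transition example (I, J)."""
    program = background + hypothesis
    return all(t_p(program, I) == J for I, J in examples)

background = [("r", set(), {"p"})]             # r <- not p.
hypothesis = [("p", {"q"}, set()),             # p <- q.
              ("q", {"p"}, {"r"})]             # q <- p, not r.
examples = [({"p", "q"}, {"p", "q"}),
            (set(), {"r"})]
print(covers(background, hypothesis, examples))   # True
```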
“…The idea of context-dependent examples has similarities with the concept of learning from interpretation transitions (LFIT) (Inoue et al. 2014), where examples are pairs of sets of atoms (I, J) such that B ∪ H must satisfy T_{B∪H}(I) = J (where T_P(I) is the set of immediate consequences of I with respect to the program P). LFIT technically learns under the supported model semantics and uses a far smaller language than that supported by ILP^context_LOAS (not supporting choice rules or hard or weak constraints), but can be simply represented in ILP^context_LOAS.…”
Section: Related Work (mentioning)
confidence: 99%
“…More recently, in [3], the authors question the kind of properties that may be preserved, whatever the semantics, while discussing the merits of the usual updating modes, including synchronous, fully asynchronous and generalized asynchronous updating. As a good choice of semantics is key to a sound analysis of a system, it is critical to be able to learn not only one kind of semantics, but to embrace a wide range of updating modes. So far, learning from interpretation transition (LFIT) [9] has been proposed to automatically construct a model of the dynamics of a system from the observation of its state transitions. Figure 1 shows this learning process.…”
mentioning
confidence: 99%
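To illustrate why the updating mode matters for learning dynamics, the following small example contrasts synchronous and fully asynchronous updating of the same Boolean network; the two-node network and the function names are invented for illustration and do not come from the cited papers.

```python
# Same network, different successor states depending on the updating mode.

# Local update functions of a two-node network: p' = not q, q' = p.
functions = {"p": lambda s: not s["q"], "q": lambda s: s["p"]}

def synchronous(state):
    """All variables are updated at once: a single successor state."""
    return [{v: f(state) for v, f in functions.items()}]

def asynchronous(state):
    """Exactly one variable is updated per step: up to one successor per variable."""
    successors = []
    for v, f in functions.items():
        s = dict(state)
        s[v] = f(state)
        if s not in successors:
            successors.append(s)
    return successors

state = {"p": True, "q": True}
print(synchronous(state))    # [{'p': False, 'q': True}]
print(asynchronous(state))   # [{'p': False, 'q': True}, {'p': True, 'q': True}]
```

Because the observed transitions differ between modes, a learner that assumes the wrong updating semantics can infer a different (and possibly incorrect) program from the same underlying system.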
“…The LFIT framework proposes several modeling and learning algorithms to tackle those different semantics. To date, the following systems have been tackled: memory-less consistent systems [9], systems with memory [14], non-consistent systems [12] …”
mentioning
confidence: 99%