2011
DOI: 10.1007/978-3-642-18275-4_19

Automata Learning with Automated Alphabet Abstraction Refinement

Abstract: Abstraction is the key when learning behavioral models of realistic systems, but also the cause of a major problem: the introduction of non-determinism. In this paper, we introduce a method for refining a given abstraction to automatically regain a deterministic behavior on-the-fly during the learning process. Thus the control over abstraction becomes part of the learning process, with the effect that detected nondeterminism does not lead to failure, but to a dynamic alphabet abstraction refinement. Like automa…
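As a rough illustration of the idea the abstract describes (not the paper's actual algorithm), the following Python sketch groups concrete inputs into abstract classes and, when two concrete inputs of the same class provoke different outputs after the same access sequence, splits the class instead of failing. All names here (`AlphabetAbstraction`, `check_and_refine`, `query`) are invented for illustration.

```python
# Minimal sketch, assuming a hypothetical learner/SUL interface: concrete
# inputs are grouped into abstract classes, and a detected output conflict
# within one class triggers a refinement rather than a failure.

class AlphabetAbstraction:
    """Maps concrete inputs to abstract classes; can be refined on demand."""

    def __init__(self, concrete_inputs):
        # Coarsest abstraction: every concrete input in one abstract class.
        self.class_of = {c: 0 for c in concrete_inputs}
        self.next_class = 1

    def abstract(self, concrete):
        return self.class_of[concrete]

    def refine(self, witness):
        """Move the conflicting concrete input into a fresh abstract class."""
        self.class_of[witness] = self.next_class
        self.next_class += 1


def check_and_refine(abstraction, access_seq, concrete_inputs, query):
    """Return True if nondeterminism under the abstraction forced a refinement.

    `query(access_seq, concrete)` stands for an output query to the system
    under learning after the given access sequence.
    """
    refined = False
    outputs = {}  # abstract class -> output observed so far
    for c in concrete_inputs:
        a = abstraction.abstract(c)
        out = query(access_seq, c)
        if a in outputs and outputs[a] != out:
            # Same abstract symbol, different concrete behavior:
            # refine the abstraction instead of reporting nondeterminism.
            abstraction.refine(c)
            refined = True
        outputs[abstraction.abstract(c)] = out
    return refined
```

In the setting described by the abstract, such a check is interleaved with the learning algorithm itself, so the abstraction is refined on-the-fly whenever the observations would otherwise become nondeterministic.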

Cited by 62 publications (29 citation statements)
References 22 publications
“…We concentrated on presenting an approach that adapts ideas using abstraction that have been successfully applied in formal verification. This approach has been used on some nontrivial examples [1,2], and techniques for revising abstractions by need have been developed [19]. However, it is clear that much work remains in order to make automata learning with data easily applicable to a wide class of systems.…”
Section: Discussion (mentioning)
confidence: 99%
“…This can be achieved by letting each possible guard correspond to a different abstract input symbol. For the case that the abstraction is not fine enough to distinguish between symbolic transitions that cause different output, a technique for refining the abstraction on-the-fly, during the learning process, has been developed by Howar, Steffen, and Merten [19]. -The abstraction should be unambiguous.…”
Section: Systematic Construction of Abstractions (mentioning)
confidence: 99%
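The statement above describes abstractions in which each guard over a data parameter becomes its own abstract input symbol. The following hedged Python sketch illustrates that mapping; the guards and labels are made up for illustration and are not taken from the cited work.

```python
# Hypothetical example of an abstraction in which each guard corresponds to
# one abstract input symbol. The guards are invented for illustration and
# are deliberately non-overlapping, so the abstraction stays unambiguous.

GUARDS = [
    ("x==0",    lambda x: x == 0),
    ("1<=x<=2", lambda x: 1 <= x <= 2),
    ("x>=3",    lambda x: x >= 3),
]


def abstract_input(action, x):
    """Map a concrete action with data value x to an abstract input symbol."""
    for label, guard in GUARDS:
        if guard(x):
            return f"{action}[{label}]"
    # No guard matched: the abstraction is too coarse here and would be a
    # candidate for the on-the-fly refinement developed in [19].
    return f"{action}[?]"


assert abstract_input("a", 0) == "a[x==0]"
assert abstract_input("a", 2) == "a[1<=x<=2]"
assert abstract_input("a", 5) == "a[x>=3]"
```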
“…Howar et al [22] also used the paradigm on the alphabet for inferring abstract automata with respect to given concrete behavior such that determinism is preserved. Our TL* algorithm may benefit from the abstraction refinement paradigm if the alphabet of the ERA to be learned can be smaller.…”
[Table residue omitted from the extracted quote: an observation table constructed by TL*_sg, with rows of guarded events such as (a, xa = k) and (a, j ≤ xa ≤ k), columns λ and (a, xa = 3), and 0/1 entries.]
Section: Related Work (mentioning)
confidence: 99%
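The table fragment referenced in the quote above is an L*-style observation table: access sequences as rows, distinguishing suffixes as columns, and 0/1 entries. The sketch below shows that generic data structure with a toy membership oracle; it is an assumption-laden illustration, not the cited paper's timed data or implementation.

```python
# Small illustrative observation table in the style of L*-family learners
# such as the TL* variant quoted above. The oracle and entries are
# placeholders, not data from the cited paper.

class ObservationTable:
    def __init__(self, membership_query):
        self.mq = membership_query          # word -> 0/1
        self.prefixes = [()]                # access sequences (rows)
        self.suffixes = [()]                # distinguishing suffixes (columns)
        self.entries = {}                   # (prefix, suffix) -> 0/1

    def fill(self):
        for p in self.prefixes:
            for s in self.suffixes:
                self.entries[(p, s)] = self.mq(p + s)

    def row(self, prefix):
        return tuple(self.entries[(prefix, s)] for s in self.suffixes)


# Example with a toy oracle: accept exactly the word ("a", "a").
table = ObservationTable(lambda w: int(w == ("a", "a")))
table.prefixes += [("a",), ("a", "a")]
table.suffixes += [("a",)]
table.fill()
print(table.row(("a",)))   # (0, 1): "a" is rejected, "aa" is accepted
```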
“…The more structured the intended learning output is, the more successful active learning will be, as the required structural constraints are a good guide for the active construction of examples [3]. It has been successfully used in practice for inferring computational models via testing [10,9].…”
Section: Machine Learning: A Brief Taxonomy (mentioning)
confidence: 99%