1996
DOI: 10.1016/0893-6080(95)00086-0

Extraction of rules from discrete-time recurrent neural networks

Abstract: The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial)

Cited by 158 publications (92 citation statements)
References 25 publications
“…LSTM learned to solve many previously unlearnable DL tasks involving: recognition of the temporal order of widely separated events in noisy input streams; robust storage of high-precision real numbers across extended time intervals; arithmetic operations on continuous input streams; extraction of information conveyed by the temporal distance between events; recognition of temporally extended patterns in noisy input sequences (Hochreiter and Schmidhuber, 1997b); stable generation of precisely timed rhythms, as well as smooth and non-smooth periodic trajectories. LSTM clearly outperformed previous RNNs on tasks that require learning the rules of regular languages describable by deterministic Finite State Automata (FSAs) (Watrous and Kuhn, 1992; Casey, 1996; Siegelmann, 1992; Blair and Pollack, 1997; Kalinke and Lehmann, 1998; Zeng et al., 1994; Manolios and Fanelli, 1994; Omlin and Giles, 1996; Vahed and Omlin, 2004), both in terms of reliability and speed.…”
Section: Supervised Recurrent Very Deep Learner (LSTM RNN)
confidence: 99%
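The regular-language benchmarks mentioned in the statement above can be made concrete with a small deterministic finite state automaton. The sketch below is a hypothetical two-state DFA for "binary strings with an even number of 1s" — an illustrative stand-in for the FSA-describable languages cited, not a reproduction of any benchmark from the cited papers. It shows the kind of crisp, exact target behaviour a trained recurrent network is expected to mimic.

```python
# Hypothetical DFA: accepts binary strings containing an even number of 1s.
# An illustrative stand-in for the regular-language benchmarks cited above.

TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
ACCEPTING = {"even"}

def accepts(string: str) -> bool:
    """Run the DFA over the input string and report acceptance."""
    state = "even"  # start state: zero 1s seen so far
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING
```

Because the automaton is deterministic, membership is decided in a single left-to-right pass; an RNN trained on labelled strings from such a language must implement an equivalent state machine in its hidden dynamics.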
“…This was verified empirically [57] through information extraction methods [46], [37], where recurrent networks trained on fuzzy strings develop a crisp internal representation of FFA, i.e., they represent FFA in the form of equivalent deterministic acceptors. Thus, our theoretical analysis correctly predicted the knowledge representation for such trained networks.…”
Section: B. Background
confidence: 99%
“…Neural networks that have been trained to behave like FFA do not necessarily share this property, i.e., their internal representation of states and transitions may become unstable for sufficiently long input sequences [37]. Finally, with the extraction of knowledge from trained neural networks, the methods discussed here could potentially be applied to incorporating and refining [38] fuzzy knowledge previously encoded into recurrent neural networks.…”
confidence: 99%
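The extraction methods these statements reference typically quantize the network's continuous hidden-state space and read off a deterministic automaton from the transitions between quantized states. The sketch below illustrates that idea on a toy scalar-state recurrent update with an equal-width binning quantizer; all weights, parameters, and function names are hypothetical, chosen only to make the quantize-and-explore procedure concrete.

```python
import math

def rnn_step(h, x, w_h=0.8, w_x=1.5, b=-0.7):
    # Toy recurrent update with a scalar hidden state (hypothetical weights).
    return math.tanh(w_h * h + w_x * x + b)

def quantize(h, bins=4):
    # Partition the hidden-state interval [-1, 1] into equal-width bins;
    # the bin index plays the role of a discrete automaton state.
    idx = int((h + 1.0) / 2.0 * bins)
    return min(max(idx, 0), bins - 1)

def extract_transitions(inputs=(0.0, 1.0), bins=4):
    # Explore the quantized state space from the initial hidden state,
    # keeping one representative hidden value per discrete state and
    # recording which discrete state each (state, input) pair leads to.
    start_h = 0.0
    seen = {quantize(start_h, bins)}
    frontier = [start_h]
    transitions = {}
    while frontier:
        h = frontier.pop()
        q = quantize(h, bins)
        for x in inputs:
            h_next = rnn_step(h, x)
            q_next = quantize(h_next, bins)
            if (q, x) not in transitions:
                transitions[(q, x)] = q_next
                if q_next not in seen:
                    seen.add(q_next)
                    frontier.append(h_next)
    # Each entry maps (discrete_state, input_symbol) -> next discrete_state.
    return transitions
```

The resulting transition table is a deterministic automaton; whether it faithfully reproduces the network's behaviour depends on the bin granularity, which is exactly the stability concern raised in the statement above about long input sequences.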
“…These two mechanisms could conceivably use two different representations: effectively a task-dependent and a task-independent representation, e.g. the learning mechanism could be a recurrent neural network, and knowledge could be extracted in the form of finite state automata [12]. People have also attempted to combine feed-forward neural networks with symbolic rules [13].…”
Section: Cumulative Learning
confidence: 99%