IJCNN-91-Seattle International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.1991.155294
Incremental learning with rule-based neural networks

Abstract: A classifier for discrete-valued variable classification problems is presented. The system utilizes an information-theoretic algorithm for constructing informative rules from example data. These rules are then used to construct a neural network to perform parallel inference and posterior probability estimation. The network can be 'grown' incrementally, so that new data can be incorporated without repeating the training on previous data. It is shown that this technique performs comparably with other techniques …

Cited by 9 publications (6 citation statements)
References 8 publications
“…The J-measure not only provides a useful and sound method for ranking rules, but also serves as a richer metric in rule mining, and it has been used in many studies to handle different rule-discovery problems, such as efficient rule discovery in a geo-spatial decision support system (Harms et al., 2002), learning fuzzy rule-based networks for function approximation (Higgins & Goodman, 1992), and incremental learning with rule-based neural networks (Higgins & Goodman, 1991).…”
Section: The J-measure
confidence: 99%
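The J-measure referred to above (due to Smyth and Goodman) scores a rule "if Y=y then X=x" by weighting the divergence between the rule's posterior p(x|y) and the prior p(x) with the probability p(y) that the rule fires. As a rough illustration, here is a minimal Python sketch; the function name and the example probabilities are assumptions for illustration, not values from the cited papers.

import math

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of a rule 'if Y=y then X=x' (Smyth & Goodman).

    p_y:         probability of the antecedent, p(y)
    p_x:         prior probability of the consequent, p(x)
    p_x_given_y: confidence of the rule, p(x|y)

    Returns p(y) * j(X; Y=y), the rule's average information
    content in bits.
    """
    def term(p, q):
        # Contribution p * log2(p/q); taken to be 0 when p == 0.
        return 0.0 if p == 0 else p * math.log2(p / q)

    # Divergence between the posterior (p(x|y), 1-p(x|y))
    # and the prior (p(x), 1-p(x)).
    j = term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x)
    return p_y * j

# Hypothetical example: a rule that fires on 30% of samples and
# raises the consequent's probability from 0.5 to 0.9.
print(j_measure(p_y=0.3, p_x=0.5, p_x_given_y=0.9))  # ~0.159 bits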
“…For example, in some cases, the phrase "incremental learning" has been used to refer to growing or pruning of classifier architectures [2]-[4] or to selection of the most informative training samples [5]. In other cases, some form of controlled modification of classifier weights has been suggested, typically by retraining with misclassified signals [6]-[12]. These algorithms are capable of learning new information; however, they do not simultaneously satisfy all of the above-mentioned criteria for incremental learning: they either require access to old data, forget prior knowledge along the way, or are unable to accommodate new classes.…”
Section: A. Incremental Learning
confidence: 99%
“…The distribution update rule constitutes the heart of the algorithm, as it allows Learn++ to learn incrementally:

w_{t+1}(i) = w_t(i) · B_t if H_t(x_i) = y_i, and w_{t+1}(i) = w_t(i) otherwise. (6)

According to this rule, if instance x_i is correctly classified by the composite hypothesis H_t, its weight is multiplied by a factor of B_t, which, by its definition, is less than 1. If x_i is misclassified, its distribution weight is kept unchanged.…”
Section: Ensemble Of Classifiers For Incremental Learning: Learn++
confidence: 99%
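To make the quoted update rule concrete, the following is a minimal NumPy sketch of a Learn++-style weight update in the spirit of Eq. (6); the function name, the derivation of B_t from the weighted composite error, and the final renormalization are illustrative assumptions rather than code from the cited paper.

import numpy as np

def update_distribution(weights, correct):
    """One Learn++-style distribution update (cf. Eq. (6)).

    weights: current instance weights w_t(i)
    correct: boolean mask, True where the composite hypothesis
             H_t classifies instance i correctly
    """
    # Composite error E_t: total (normalized) weight of the
    # misclassified instances.
    dist = weights / weights.sum()
    E_t = dist[~correct].sum()
    B_t = E_t / (1.0 - E_t)  # normalized error; < 1 when E_t < 1/2

    # Scale down correctly classified instances by B_t; leave
    # misclassified instances unchanged, shifting emphasis to them.
    new_weights = np.where(correct, weights * B_t, weights)
    return new_weights / new_weights.sum()  # renormalize (assumption)

# Hypothetical example: four instances, the last one misclassified.
w = np.array([0.25, 0.25, 0.25, 0.25])
correct = np.array([True, True, True, False])
print(update_distribution(w, correct))  # [1/6, 1/6, 1/6, 1/2]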
“…In this paper, we present a method for learning fuzzy rules from example data based upon information theory. This method of learning rules from data has been well documented for discrete data (see [1,2,3]) and can be modified straightforwardly for the learning of fuzzy rules.…”
Section: Introduction
confidence: 99%