1978
DOI: 10.1109/tit.1978.1055877
On the monotonicity of the performance of Bayesian classifiers (Corresp.)

Cited by 26 publications (6 citation statements)
References 5 publications
“…As the parameters are estimated from a limited number of instances, learning a separate multinet structure per class instead of one overall structure results in more unreliable parameter estimates and, hence, a higher probability of overgeneralization. This effect is closely related to the so-called peaking phenomenon, for a discussion see, e.g., [53].…”
Section: Multinet Bayesian Network Classifiers
confidence: 92%
“…However, most of these data sets were not highly informative, as judged by their ability to identify true-positive gene interactions with a low false-positive rate. Because it is known that many classifiers perform best when a subset of the features is used, 37,38 we used only four informative microarray coexpression data sets for classification, [39][40][41][42] each showing a minimal AUC of 0.59. In total, these data sets contained 461 microarray hybridizations.…”
Section: Construction of a Functional Gene Network
confidence: 99%
“…It is well known that trained classifiers suffer from the curse of dimensionality, which impedes generalization when the number of features becomes high. This so-called peaking phenomenon 51,52 implies an increasing difficulty in discerning discriminative from useless features as the dimensionality of the feature space increases. 53 The peaking phenomenon can prevent our scale selection algorithm from choosing the best set of scales.…”
Section: Discussion
confidence: 99%
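The peaking phenomenon the excerpts refer to can be illustrated with a small self-contained simulation. This is a sketch under assumed conditions, not code from the cited paper or the citing works: two spherical Gaussian classes differ only in the first feature, extra features are pure noise, and a plug-in classifier (nearest estimated class mean, i.e., the Bayes rule with estimated means) is trained on a small sample. Accuracy tends to rise at first and then fall as noise dimensions inflate estimation error. The function name `simulate_peaking` and all parameter values are illustrative choices, not from the source.

```python
import random

random.seed(0)

def simulate_peaking(n_train=10, n_test=500, dims=(1, 2, 5, 10, 50)):
    """Toy demonstration of the peaking phenomenon.

    Two unit-variance Gaussian classes are separated by 1.0 in the
    first feature only; all remaining features are noninformative
    noise. Class means are estimated from n_train samples per class,
    and test points are classified by nearest estimated mean (the
    plug-in Bayes rule for equal spherical covariances and priors).
    """
    accuracies = {}
    for d in dims:
        mu0 = [0.0] * d
        mu1 = [1.0] + [0.0] * (d - 1)

        def sample(mu):
            return [random.gauss(m, 1.0) for m in mu]

        # Estimate each class mean from a small training sample.
        est0 = [sum(col) / n_train
                for col in zip(*(sample(mu0) for _ in range(n_train)))]
        est1 = [sum(col) / n_train
                for col in zip(*(sample(mu1) for _ in range(n_train)))]

        # Evaluate on fresh test points.
        correct = 0
        for _ in range(n_test):
            y = random.randint(0, 1)
            x = sample(mu1 if y else mu0)
            d0 = sum((a - b) ** 2 for a, b in zip(x, est0))
            d1 = sum((a - b) ** 2 for a, b in zip(x, est1))
            correct += int((d1 < d0) == bool(y))
        accuracies[d] = correct / n_test
    return accuracies

acc = simulate_peaking()
for d, a in acc.items():
    print(f"d={d:3d}  accuracy={a:.2f}")
```

Because only the first dimension carries signal while every added dimension contributes mean-estimation noise of variance 1/n_train per class, the discriminant direction is progressively dominated by noise as d grows, which is the mechanism behind the peaking effect discussed above.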