1997
DOI: 10.1006/jcss.1997.1504

A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting

Abstract: In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. […]
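The multiplicative weight-update rule described in the abstract (often called Hedge) is easy to state concretely. The sketch below is a minimal Python/NumPy illustration under assumed names (hedge, beta, loss_matrix) and synthetic losses; it is not code from the paper.

```python
# A minimal sketch of the multiplicative weight-update ("Hedge") rule the
# abstract describes: weights over N options are multiplied by beta**loss
# each round, so options that incur loss are down-weighted. The function
# name, parameter names, and the synthetic loss matrix are illustrative.
import numpy as np

def hedge(loss_matrix, beta=0.9):
    """Run Hedge on a (T, N) matrix of per-round losses in [0, 1].

    Returns the algorithm's cumulative expected loss and the cumulative
    loss of the best single option in hindsight.
    """
    T, N = loss_matrix.shape
    weights = np.ones(N)                     # uniform initial weights
    algo_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()      # distribution over options
        algo_loss += probs @ loss_matrix[t]  # expected loss this round
        weights *= beta ** loss_matrix[t]    # multiplicative update
    best_option_loss = loss_matrix.sum(axis=0).min()
    return algo_loss, best_option_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = rng.random((1000, 5))           # synthetic losses for 5 options
    print(hedge(losses))
```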

Cited by 14,586 publications (8,979 citation statements); references 16 publications.

“…The process of building the model uses a technique known as boosting (the AdaBoost algorithm [28]) to enhance the predictive capability. Boosting combines several weighted individual predictors into one model, with each individual predictor composed of the LNTCP metric and a decision tree.…”
Section: Introduction
mentioning
confidence: 99%
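The pattern this excerpt describes, many weighted decision trees combined by AdaBoost into a single model, can be illustrated with a short scikit-learn sketch. This is a hedged example on synthetic data, not the cited authors' model (their individual predictors pair the LNTCP metric with a tree); the dataset and parameter choices are placeholders.

```python
# Hedged illustration only: boost decision trees with AdaBoost on
# synthetic data using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The default base learner is a depth-1 decision tree (a "stump"); each
# fitted tree receives a weight and the weighted votes form the model.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```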
“…It has been shown that, provided enough data for training, weak classifiers (performing only slightly better than random guessing) can be aggregated to form an arbitrarily good classifier satisfying any desired (classification or generalization) error tolerance. Schapire introduced the first polynomial-time boosting algorithm in 1990, and in 1995 Freund and Schapire (1996) introduced the AdaBoost algorithm for binary classification (see also Schapire, 1999), and, later, versions for multiclass classification (Allwein et al, 2000). Mason et al (1999) introduced a general framework for boosting design based on convex risk minimization with an arbitrary convex loss function.…”
mentioning
confidence: 99%
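For the mechanics behind this excerpt's claim that slightly-better-than-random learners can be aggregated into an accurate classifier, here is a compact from-scratch sketch of binary AdaBoost with decision stumps. The stump search and variable names are illustrative assumptions, not code from the cited papers.

```python
# A minimal from-scratch sketch of binary AdaBoost: weak learners
# (single-feature decision stumps) are fit to re-weighted data and
# combined with weights alpha_t. Labels y must be in {-1, +1}.
import numpy as np

def fit_stump(X, y, w):
    """Best single-feature threshold classifier under sample weights w."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                   # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-12)                 # guard against zero error
        alpha = 0.5 * np.log((1 - err) / err) # weak learner's vote weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)        # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```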
“…24. A classification model using the aforementioned features was constructed with three different state-of-the-art ensemble and meta classifiers: Random Forest (25), Bagging (26) and LogitBoost (27)(28)(29)(30). The performance of the model was evaluated using the promoter-prediction metrics suggested by (31): sensitivity (SN), positive predictive value (PPV), correlation coefficient (CC) and true-positive cost (TPC).…”
Section: Identification and Annotation of Pol-II Promoter Peaks
mentioning
confidence: 99%
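The evaluation metrics named in this excerpt can be computed directly from a confusion matrix. The sketch below uses the standard definitions of sensitivity and positive predictive value and assumes the correlation coefficient (CC) is the Matthews-style coefficient; true-positive cost (TPC) is omitted because its definition is not given in the excerpt.

```python
# Hedged sketch of confusion-matrix metrics: SN and PPV follow their
# standard definitions; CC is assumed to be the Matthews correlation
# coefficient. The example counts are arbitrary.
import math

def promoter_metrics(tp, fp, tn, fn):
    sn = tp / (tp + fn)                      # sensitivity (recall)
    ppv = tp / (tp + fp)                     # positive predictive value
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    cc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"SN": sn, "PPV": ppv, "CC": cc}

print(promoter_metrics(tp=80, fp=10, tn=95, fn=20))
```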