Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval 2008
DOI: 10.1145/1390334.1390442
Non-greedy active learning for text categorization using convex transductive experimental design

Cited by 40 publications (71 citation statements)
References 13 publications
“…The first one is a subset of the Newsgroups corpus [20], which consists of 3970 documents with TFIDF features of 8014 dimensions. Each document belongs to exactly one of four categories: autos, motorcycles, baseball and hockey.…”
Section: Methodsmentioning
confidence: 99%
“…Since our approach is motivated by recent progress in experimental design, we begin with a brief description of it. For details, please see [2,14,23,24].…”
Section: Related Workmentioning
confidence: 99%
“…The most popular active learning techniques include Support Vector Machine active learning (SVMactive) [19,20] and regression based active learning [2,14,23,24]. SVMactive asks the user to label those images which are closest to the SVM boundary.…”
Section: Introductionmentioning
confidence: 99%
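The snippet above describes SVMactive's selection rule: query the user on the unlabelled examples closest to the decision boundary. A minimal sketch of that margin-based selection, using a toy perceptron-trained linear classifier and made-up 2D data (both are illustrative assumptions, not the cited method's actual classifier or features):

```python
# Margin-based (SVMactive-style) sample selection, sketched with a toy
# perceptron in place of an SVM. Data and trainer are illustrative only.

def train_perceptron(X, y, epochs=20):
    """Fit a linear classifier w.x + b on labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:  # misclassified: perceptron update
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
    return w, b

def select_most_uncertain(w, b, pool):
    """Index of the pool point with the smallest |w.x + b| margin."""
    margins = [abs(sum(wj * xj for wj, xj in zip(w, x)) + b) for x in pool]
    return min(range(len(pool)), key=lambda i: margins[i])

labeled_X = [[2.0, 2.0], [3.0, 2.5], [-2.0, -2.0], [-3.0, -1.5]]
labeled_y = [1, 1, -1, -1]
pool = [[0.1, 0.0], [4.0, 4.0], [-4.0, -4.0]]

w, b = train_perceptron(labeled_X, labeled_y)
idx = select_most_uncertain(w, b, pool)  # the near-boundary point wins
```

The point near the origin sits closest to the learned boundary, so it is the one handed to the annotator; the confidently classified points far from the boundary are skipped.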
“…In such situations, once we have trained the classifier with the available training data, if we discover that its accuracy is insufficient we are left with the issue of how to further improve it, under the constraint that the amount of human effort available to perform additional labelling is limited. One solution is to apply active learning techniques (see, e.g., [Cohn et al 1994; Yu et al 2008]), which rank a set of unlabelled examples in terms of how useful they are expected to be, once manually labelled, for retraining a (hopefully) better classifier; this allows the human annotators to concentrate on the most promising examples. This article is a substantially revised and extended version of a paper presented at the 2nd International Conference on the Theory of Information Retrieval (ICTIR'09). The order in which the authors are listed is alphabetical; each author has given an equally important contribution to this work.…”
Section: Introductionmentioning
confidence: 99%
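The snippet above frames active learning as ranking unlabelled examples by their expected usefulness so annotators label the most promising ones first. One common instantiation of such a ranking is uncertainty sampling, sketched below; the logistic probability, the entropy criterion, and the raw scores are assumptions for illustration, not the cited authors' specific ranking function:

```python
# Hedged sketch of pool-based active learning: rank unlabelled examples
# by classifier uncertainty (binary entropy of a logistic probability)
# and return the top-k indices for annotation. Scores are assumed values.
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def rank_by_uncertainty(scores, k):
    """Rank examples by entropy of P(y=1|x); most uncertain first."""
    def entropy(s):
        p = min(max(sigmoid(s), 1e-12), 1.0 - 1e-12)
        return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    order = sorted(range(len(scores)), key=lambda i: -entropy(scores[i]))
    return order[:k]

# Raw classifier scores w.x + b for five unlabelled documents (toy values).
pool_scores = [3.2, -0.1, 0.8, -2.5, 0.05]
top2 = rank_by_uncertainty(pool_scores, 2)  # scores nearest 0 rank first
```

Entropy peaks where the score is nearest zero, so the two documents the classifier is least sure about are surfaced for labelling, matching the "concentrate on the most promising examples" idea in the quote.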