2013
DOI: 10.1016/j.jbi.2013.08.007

Learning classification models from multiple experts

Abstract: Building classification models from clinical data using machine learning methods often relies on labeling of patient examples by human experts. The standard machine learning framework assumes the labels are assigned by a homogeneous process. However, in reality the labels may come from multiple experts and it may be difficult to obtain a set of class labels everybody agrees on; it is not uncommon that different experts have different subjective opinions on how a specific patient example should be classified. In th…


Cited by 44 publications (26 citation statements)
References 14 publications (20 reference statements)
“…The clinical data consists of 50 patient state features essential for detection of HIT. The datasets consist of 579, 571, and 573 labeled patient state instances for Expert 1, 2 and 3 (see [30]), respectively. The labels include Likert-scale labels on four levels indicating the agreement, weak agreement, weak disagreement, and disagreement of the expert with the HIT alert.…”
Section: Experiments and Results
confidence: 99%
“…was a procedure that searches for possible correlations between the labels of different experts. The self‐consistency of the labels provided by an expert allows Valizadegan et al. to identify labelers that introduce random noise, the least damaging kind of noise.…”
Section: Related Problems
confidence: 99%
“…The main contribution of Wiebe et al. [15] was a procedure that searches for possible correlations between the labels of different experts. The self-consistency of the labels provided by an expert allows Valizadegan et al. [14] to identify labelers that introduce random noise, the least damaging kind of noise. The opposite strategy consists of learning a classifier from the annotations of each expert and, finally, combining all the models.…”
Section: Introduction
confidence: 99%
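The "opposite strategy" quoted above — learn one classifier per expert, then combine the models — can be sketched in a few lines. The toy 1-D threshold learner, the synthetic labels, and the majority-vote combiner below are illustrative assumptions, not the method of any cited paper.

```python
def train_threshold(examples):
    """Learn a 1-D threshold classifier (predict 1 when x >= threshold)
    by picking the candidate threshold with minimum training error."""
    xs = sorted(x for x, _ in examples)
    candidates = [xs[0] - 1.0] + xs
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum((x >= t) != bool(y) for x, y in examples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def majority_vote(thresholds, x):
    """Combine the per-expert models: predict the class most of them vote for."""
    votes = sum(x >= t for t in thresholds)
    return int(votes > len(thresholds) / 2)

# Three experts label the same instances but disagree near the boundary.
expert_labels = [
    [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)],  # expert 1
    [(0.1, 0), (0.4, 1), (0.6, 1), (0.9, 1)],  # expert 2 (labels 1 more readily)
    [(0.1, 0), (0.4, 0), (0.6, 0), (0.9, 1)],  # expert 3 (labels 1 less readily)
]
models = [train_threshold(labels) for labels in expert_labels]
print(majority_vote(models, 0.7))
```

Each expert's model reflects that expert's own decision boundary; the vote smooths out individual idiosyncrasies without forcing a single consensus label set up front.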
“…It aims to sequentially select examples to be labeled next by evaluating the possible impact of the examples on the solution. Active learning has been successfully applied in domains as diverse as computer vision, natural language processing and bio-medical data mining [8,14,15]. …”
Section: Introduction
confidence: 99%
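A minimal uncertainty-sampling sketch of the active-learning loop described above: at each step, query the unlabeled example the current model is least certain about. The fixed 1-D threshold model and the distance-to-boundary uncertainty score are illustrative assumptions, not the selection criterion of any cited paper.

```python
def pick_query(threshold, pool):
    """Return the unlabeled point closest to the decision threshold,
    i.e. the one whose predicted label is least certain."""
    return min(pool, key=lambda x: abs(x - threshold))

pool = [0.05, 0.48, 0.9, 0.7]
queries = []
while pool:
    x = pick_query(0.5, pool)   # most uncertain point in the remaining pool
    pool.remove(x)              # "send it to the expert" and drop it
    queries.append(x)
print(queries)                  # points ordered from least to most certain
```

Real active-learning criteria replace the distance-to-boundary score with, e.g., prediction entropy or expected model change, but the select-label-update loop has this same shape.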