2013
DOI: 10.1016/j.datak.2012.06.003
Combining multiple classifiers using vote based classifier ensemble technique for named entity recognition

Cited by 103 publications (51 citation statements)
References 13 publications
“…This is an intermediate result. In our second setting we combine all the solutions of the final Pareto optimal front using a MOO based classifier ensemble technique [24]. This is done to further improve the performance as well as to compare with the last two baselines (i.e.…”
Section: Selection of a Single Solution from Pareto Optimal Front (mentioning)
Confidence: 99%
“…Thus it is very difficult to select a unique solution from the best population. Hence rather than selecting a solution we combine all the outputs of the classifiers using a MOO based classifier ensemble technique [24].…”
Section: Combining Solutions of the Final Population (mentioning)
Confidence: 99%
“…Our motivations are two-fold. First, [13] showed that MOO strategies evidence improved results when compared to single objective solutions and state-of-the-art baselines. Second, MOO techniques propose a set of performing solutions rather than a single one.…”
Section: Learning Framework (mentioning)
Confidence: 99%
“…Indeed, due to the small size of training data (only 100 queries distributed equally by class), classification results are weak if a single classifier is used in a traditional supervised way. To overcome this situation, we follow the ensemble learning paradigm defined as a multi-objective optimization problem in a similar way as [13].…”
Section: Related Work (mentioning)
Confidence: 99%