2006
DOI: 10.1007/11683568_5

Recognizing Biomedical Named Entities Using SVMs: Improving Recognition Performance with a Minimal Set of Features

Cited by 5 publications (16 citation statements)
References 6 publications
“…Four forward-parsed classifiers (e5, e6, e9, and e33) and five backward-parsed classifiers (e4, e7, e32, e34, and e35) are selected. This behavior agrees with the discussion in [4] that training SVMs with different parse directions produces systems that make errors at different boundaries. The forward-parsed classifiers are selected even though their full-object F-scores are lower than those of many backward-parsed classifiers that are not included in the ensemble.…”
Section: Results (supporting)
confidence: 89%
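The "parse direction" distinction above can be illustrated with a minimal sketch: feeding a token sequence left-to-right versus right-to-left changes which neighbouring tags each token's features can draw on, which is why the two directions tend to err at different entity boundaries. The tokens and BIO tags below are illustrative examples, not data from the cited paper.

```python
# Illustrative token sequence with BIO-style biomedical entity tags.
tokens = ["the", "IL-2", "gene", "expression"]
tags   = ["O",   "B-DNA", "I-DNA", "O"]

# Forward parsing processes tokens left-to-right; backward parsing
# simply reverses the sequence before labelling, so previously
# predicted tags come from the opposite side of each token.
forward  = list(zip(tokens, tags))
backward = list(zip(tokens[::-1], tags[::-1]))

print(forward[0], backward[0])  # → ('the', 'O') ('expression', 'O')
```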
“…1, diversity can be achieved by using different model parameters and features in each ensemble member. Because of this, each classifier is trained using different settings of YamCha parameters, such as the dimensionality of the polynomial kernel, the range of the context window, and the direction of parsing, as used in [4]. Four different feature types that are frequently used for NER are considered in this study.…”
Section: Data Set Used and Individual Classifiers (mentioning)
confidence: 99%
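The parameter-variation scheme described above can be sketched as follows. This is a minimal illustration assuming scikit-learn's `SVC` in place of YamCha's SVM back end, with toy feature vectors standing in for real token features; varying the column slice mimics varying the context-window range.

```python
# Sketch: building diverse ensemble members by varying SVM settings,
# assuming scikit-learn in place of YamCha. Toy data only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))             # toy per-token feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy entity/non-entity labels

members = []
for degree in (1, 2, 3):                  # polynomial kernel dimensionality
    for window in (2, 4, 6):              # feature slice standing in for
        Xw = X[:, :window]                # the context-window range
        clf = SVC(kernel="poly", degree=degree, coef0=1.0).fit(Xw, y)
        members.append((degree, window, clf))

print(len(members))  # → 9 members with distinct parameter settings
```

Parse direction would add a third axis of variation (training on reversed sequences), doubling the member count.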
“…Furthermore, we studied the application of various ensembling methods to a five-entity NER problem instead of the two-entity case studied by Zhou et al. [9]. A novel surface-word feature and two orthographic feature extraction techniques, all based on occurrence statistics of entity names and originally proposed by the authors of this paper, are also considered [11]. Experimental results on the JNLPBA Bio-Entity Recognition Task data [12] show that the proposed approach achieves an F-score of 72.51%, improving on the F-scores of the best individual classifier, the ensemble of all classifiers, and three popular static classifier-selection ensembles, namely Forward Selection (FS), Backward Selection (BS), and Genetic Algorithms (GA) [13], by 2.5%, 1.3%, 0.9%, 0.9%, and 0.8%, respectively.…”
Section: Introduction (mentioning)
confidence: 98%
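The Forward Selection baseline named in the passage above can be sketched as a greedy loop: members are added one at a time as long as the majority-vote F-score on a validation set improves. This is a generic FS sketch under assumed toy predictions, not the cited authors' implementation.

```python
# Sketch: static Forward Selection (FS) of ensemble members by greedy
# majority-vote F-score, with toy predictions standing in for real
# NER classifier outputs.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
y_val = rng.integers(0, 2, size=100)      # toy validation labels
# Five toy members that agree with y_val with differing accuracies.
preds = [np.where(rng.random(100) < acc, y_val, 1 - y_val)
         for acc in (0.9, 0.8, 0.75, 0.7, 0.6)]

def vote_f1(idxs):
    """F-score of the majority vote over the selected members."""
    votes = np.mean([preds[i] for i in idxs], axis=0) >= 0.5
    return f1_score(y_val, votes.astype(int))

selected, best = [], 0.0
while True:
    gains = [(vote_f1(selected + [i]), i)
             for i in range(len(preds)) if i not in selected]
    if not gains:
        break                              # every member already selected
    score, i = max(gains)
    if score <= best:
        break                              # no member improves the vote
    selected.append(i)
    best = score

print(selected, round(best, 3))
```

Backward Selection runs the same loop in reverse, starting from all members and greedily dropping the one whose removal helps most.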