2007
DOI: 10.1007/978-3-540-74999-8_84

Baseline Results for the ImageCLEF 2006 Medical Automatic Annotation Task

Abstract: The ImageCLEF 2006 medical automatic annotation task comprises 11,000 images from 116 categories, compared to 10,000 images from 57 categories in the similar task of 2005. As a baseline for comparison, a run using the same classifiers with the identical parameterization as in 2005 is submitted. In addition, the parameterization of the classifiers was optimized on the 9,000/1,000 split of the 2006 training data. In particular, texture-based classifiers are combined in parallel with classifiers which use…
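The parallel combination mentioned in the abstract can be read as a weighted sum of normalized per-category distance scores produced by several single-feature classifiers. The following is a minimal sketch of that idea, assuming hypothetical distance arrays and equal weights; the actual weighting in the paper was tuned on the 9,000/1,000 split and is not reproduced here.

import numpy as np

def parallel_combine(distances_per_classifier, weights=None):
    # Combine per-category distance scores from several single-feature
    # classifiers (lower distance = better match). Scores are min-max
    # normalized per classifier so that no feature dominates by scale.
    if weights is None:
        weights = [1.0 / len(distances_per_classifier)] * len(distances_per_classifier)
    combined = np.zeros(len(distances_per_classifier[0]), dtype=float)
    for d, w in zip(distances_per_classifier, weights):
        d = np.asarray(d, dtype=float)
        spread = d.max() - d.min()
        normalized = (d - d.min()) / spread if spread > 0 else np.zeros_like(d)
        combined += w * normalized
    return int(np.argmin(combined))  # index of the predicted category

# Example: a texture-based and an intensity-based classifier scoring 3 categories.
texture_dist = [0.9, 0.2, 0.7]
intensity_dist = [0.8, 0.4, 0.1]
print(parallel_combine([texture_dist, intensity_dist]))  # -> 1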

Cited by 7 publications (2 citation statements)
References 7 publications
“…The run was not submitted to the challenge, but given its results it would have ranked 26th among 28 real submissions. Güld et al (2006) used the texture features proposed in Tamura et al (1978) and Castelli et al (1998) to describe each image. They also used a down-scaled representation of the original images, evaluating the cross-correlation function and the image distortion model as distance measures.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
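The down-scaled representation compared via cross-correlation amounts to a nearest-neighbour search over small intensity thumbnails. A minimal sketch of that scheme follows; the block-averaging resampler, the 32x32 thumbnail size, and the toy reference set are illustrative assumptions, and the normalized cross-correlation shown is a standard formulation rather than the exact variant used by Güld et al.

import numpy as np

def downscale(image, size=32):
    # Block-average a 2-D grayscale image down to size x size
    # (a stand-in for the down-scaled representation; the original
    # work may use a different resampling scheme).
    h, w = image.shape
    ys = np.linspace(0, h, size + 1, dtype=int)
    xs = np.linspace(0, w, size + 1, dtype=int)
    return np.array([[image[ys[i]:ys[i+1], xs[j]:xs[j+1]].mean()
                      for j in range(size)] for i in range(size)])

def cross_correlation_distance(a, b):
    # 1 minus the normalized cross-correlation of two equally sized
    # thumbnails; lower values mean more similar images.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return 1.0 - float((a * b).mean())

# Toy usage: classify a query by the nearest labelled reference thumbnail.
rng = np.random.default_rng(0)
references = [(downscale(rng.random((128, 128))), label) for label in ("CT", "MR", "XR")]
query = downscale(rng.random((256, 192)))
predicted = min(references, key=lambda r: cross_correlation_distance(query, r[0]))[1]
print(predicted)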
“…State-of-the-art approaches used texture-based descriptors as features and discriminative algorithms, mainly SVMs, for the classification step [3,4]. Local and global features have been used separately or combined together in multi-cue approaches with disappointing results [3,5]. Still, years of research on visual recognition have shown clearly that multiple-cue methods outperform single-feature approaches, provided that the features are complementary.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
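The multi-cue approach described in this statement amounts to fusing complementary descriptors before a discriminative classifier. A minimal sketch of early fusion (feature concatenation followed by an SVM) is shown below; the random placeholder features, the RBF kernel, and the scikit-learn pipeline are illustrative assumptions, not the setup of the cited systems.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_texture and X_global stand for two complementary descriptor sets
# computed elsewhere (e.g. texture statistics and a down-scaled intensity layout).
rng = np.random.default_rng(0)
X_texture = rng.normal(size=(200, 16))   # placeholder texture features
X_global = rng.normal(size=(200, 32))    # placeholder global features
y = rng.integers(0, 5, size=200)         # placeholder category labels

# Early fusion: concatenate the cues, then train a single SVM on the joint vector.
X = np.hstack([X_texture, X_global])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)
print(clf.predict(X[:3]))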