2013
DOI: 10.1109/tmi.2013.2268163

Benchmarking HEp-2 Cells Classification Methods

Abstract: In this paper, we report on the first edition of the HEp-2 Cells Classification contest, held at the 2012 edition of the International Conference on Pattern Recognition, and focused on indirect immunofluorescence (IIF) image analysis. The IIF methodology is used to detect autoimmune diseases by searching for antibodies in the patient serum but, unfortunately, it is still a subjective method that depends too heavily on the experience and expertise of the physician. This has been the motivation behind the recent…

Cited by 218 publications (157 citation statements) | References 34 publications
“…This is supported by the recently published detailed report of the contest findings [5], which shows great variability between method rankings by cell-level and by sample-level performance. Consideration of the sample as a whole, including complete measurements from all the relevant cells, allows the application of a much richer set of pattern recognition methods, and is a better match for the ultimate goal of replicating the diagnostic decision of a physician.…”
Section: Results (supporting)
confidence: 61%
“…Therefore any possible cross-validation arrangement which could be used for algorithm selection or parameter tuning by the contestants would necessarily include cells from the same sample in both training and validation sections. The performance estimates provided by such cross-validation (error rates of around 5% were reported by a number of participants) proved very far from the final test-set performance reported in [5], as the validation task is very much easier than the hold-out test, which contains genuinely independent samples from patients not represented in the training set.…”
Section: Previous Work (mentioning)
confidence: 75%
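The patient-level leakage described in the statement above can be avoided with group-aware cross-validation, in which every cell from a given patient sample stays on the same side of each split. A minimal, stdlib-only sketch (the `group_kfold` helper and the synthetic `groups` data are illustrative, not from the contest protocol):

```python
# Group-aware K-fold: no patient sample straddles a train/validation split,
# so validation error is estimated on genuinely unseen patients.
from collections import defaultdict

def group_kfold(groups, n_splits):
    """Yield (train, val) index lists; each group lands wholly in one side."""
    by_group = defaultdict(list)
    for idx, g in enumerate(groups):
        by_group[g].append(idx)
    group_ids = sorted(by_group)
    for fold in range(n_splits):
        val_groups = set(group_ids[fold::n_splits])
        val = [i for g in val_groups for i in by_group[g]]
        train = [i for i, g in enumerate(groups) if g not in val_groups]
        yield train, val

# Synthetic example: 30 cell images from 6 patient samples, 5 cells each.
groups = [i // 5 for i in range(30)]
for train, val in group_kfold(groups, n_splits=3):
    # No patient appears on both sides of the split.
    assert {groups[i] for i in train}.isdisjoint({groups[i] for i in val})
```

With naive (ungrouped) K-fold, cells from the same slide would appear in both partitions, which is precisely why contestants' ~5% validation error rates proved far more optimistic than the held-out test results.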
“…We deem that the future adoption of deep architectures might be a viable way to take a step forward, owing to their ability to learn the discriminant properties of the cells without requiring any feature engineering. Although deep learning methods have not been widely used for this task, it is worth mentioning the methods by the team of Malon et al. as reported in Foggia et al. (2013) and by Gao et al. (2014), both based on convolutional neural networks. On the one hand, the method by Malon reached an accuracy of 60% over a small dataset, the MIVIA HEp-2 Images Dataset (Foggia et al. (2013)), comprising approximately 720 cell images for the training set and 730 for the test set.…”
Section: Discussion (mentioning)
confidence: 99%
“…Although deep learning methods have not been widely used for this task, it is worth mentioning the methods by the team of Malon et al. as reported in Foggia et al. (2013) and by Gao et al. (2014), both based on convolutional neural networks. On the one hand, the method by Malon reached an accuracy of 60% over a small dataset, the MIVIA HEp-2 Images Dataset (Foggia et al. (2013)), comprising approximately 720 cell images for the training set and 730 for the test set. Over the same dataset, the highest score was achieved by Nosaka (69%), while the scientist achieved an accuracy of approximately 73%.…”
Section: Discussion (mentioning)
confidence: 99%