2016
DOI: 10.1186/s12859-016-1318-9
Bioimage classification with subcategory discriminant transform of high dimensional visual descriptors

Abstract: Background: Bioimage classification is a fundamental problem for many important biological studies that require accurate cell phenotype recognition, subcellular localization, and histopathological classification. In this paper, we present a new bioimage classification method that can be generally applicable to a wide variety of classification problems. We propose to use a high-dimensional multi-modal descriptor that combines multiple texture features. We also design a novel subcategory discriminant transform (SD…

Cited by 20 publications (12 citation statements)
References 52 publications
“…SVM was used as the classifier and obtained 95.5% accuracy. Song et al. [8] proposed a method to extract a high-dimensional descriptor, with a subcategory discriminant transform (SDT) used to enhance the discriminative power of the descriptors, which achieved an accuracy of 96.8%. Meng et al. [7] proposed a framework based on the Collateral Representative Subspace Projection Modeling (CRSPM) supervised classification model for histology image classification.…”
Section: Comparison and Discussion
confidence: 99%
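The pipeline described above pairs a discriminant transform over high-dimensional descriptors with an SVM classifier. A minimal sketch of that pattern is shown below; SDT itself is not available in scikit-learn, so standard linear discriminant analysis stands in as a generic discriminant transform, and the synthetic data is illustrative, not the paper's descriptors.

```python
# Sketch of a discriminant-transform + SVM pipeline (LDA stands in for SDT).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for a high-dimensional multi-modal descriptor.
X, y = make_classification(n_samples=300, n_features=200, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Project descriptors into a low-dimensional discriminant space, then classify.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The transform step reduces dimensionality while maximizing between-class separation, which is the role SDT plays in the cited method.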
“…However, its weighting scheme was performed at the class level and majority voting was still used for image label acquisition, which might lead to losing part of the local information. Song et al. [8] proposed a method to extract a high-dimensional multimodal descriptor, with a subcategory discriminant transform (SDT) used to enhance the discriminative power of the descriptors. This approach extracted sufficient image information and achieved decent results, but it used deep learning only for comparison purposes rather than benefitting from the full power of deep learning.…”
Section: Introduction
confidence: 99%
“…Despite our method presenting an accuracy rate smaller than that obtained by Refs. [2,12-14], the differences were not significant when analyzed by the Friedman test (P_f = 0.05), with all pairwise comparisons (Conover), and the Kruskal-Wallis test (P_k = 0.9746), considering all pairwise comparisons (Dwass-Steel-Critchlow-Fligner). In these tests the significance level was 0.05.…”
Section: Unsegmented
confidence: 93%
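The statement above compares classifiers with the Friedman and Kruskal-Wallis tests. A minimal sketch of how such tests are run is below; the accuracy scores are made up for illustration and are not the paper's results.

```python
# Sketch: nonparametric significance tests over per-dataset accuracy scores.
from scipy.stats import friedmanchisquare, kruskal

# Hypothetical accuracies of three methods on five datasets (illustrative only).
method_a = [0.95, 0.93, 0.96, 0.94, 0.95]
method_b = [0.96, 0.94, 0.95, 0.95, 0.96]
method_c = [0.94, 0.92, 0.95, 0.93, 0.94]

# Friedman: paired comparison across the same datasets.
f_stat, p_f = friedmanchisquare(method_a, method_b, method_c)
# Kruskal-Wallis: unpaired rank-based comparison of the groups.
k_stat, p_k = kruskal(method_a, method_b, method_c)
```

A p-value above the 0.05 significance level, as reported in the quote, means the accuracy differences cannot be distinguished from chance at that level.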
“…After applying chi-squared attribute selection, the size of the feature vector was reduced from 12,625 to 50. The authors in Song et al. [13] proposed a method based on visual descriptors to extract features from grayscale images, including an NHL dataset. The features were extracted separately, and each one went on to make up a different set that was given as input to a specific classification stage.…”
Section: Introduction
confidence: 99%
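Chi-squared attribute selection, as described above, ranks features by their chi-squared statistic against the class labels and keeps the top k. A minimal sketch with scikit-learn follows; the matrix dimensions are illustrative, not the paper's 12,625-to-50 reduction.

```python
# Sketch: chi-squared feature selection keeping the top 50 attributes.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = rng.random((100, 500))           # chi2 requires non-negative features
y = rng.integers(0, 3, size=100)     # illustrative class labels

selector = SelectKBest(chi2, k=50)
X_reduced = selector.fit_transform(X, y)
# X_reduced now has 50 columns: the features most associated with the labels.
```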
“…In Nascimento et al. [16] an investigation was also made into the classification of these lesions with descriptors based on the stationary wavelet transform. In addition, the authors presented approaches for quantifying and extracting features from the histological images of the lymphoma image dataset [17,18]. In these works, the authors did not employ a nuclei detection stage before the feature extraction stage.…”
Section: Introduction
confidence: 99%