2018
DOI: 10.1007/s40745-018-0162-3

Histopathological Breast-Image Classification Using Concatenated R–G–B Histogram Information

Cited by 8 publications (5 citation statements)
References 25 publications
“…It is worth mentioning that works (35–43) did not split the training and test sets according to the protocol of (9), works (44, 45) adopted the existing protocol, and works (46, 47) randomly divided the data into a training set (70%) and a test set (30%) but did not state whether this split followed the protocol. Although the recognition accuracy of works (37, 39, 41–43, 46, 47) is significantly higher than that of our method, they all use deep learning models, which require a large number of labeled training samples and consume longer training time. In addition, all of these works except (42) calculated only the image-level recognition accuracy.…”
Section: Experiments and Results
confidence: 84%
“…Table 8 shows that the method proposed in this paper is superior to many state-of-the-art methods in benign and malignant tumor recognition, at both the image level and the patient level. It is worth mentioning that works (35–43) did not split the training and test sets according to the protocol of (9), works (44, 45) adopted the existing protocol, and works (46, 47) randomly divided the data into a training set (70%) and a test set (30%) but did not state whether this split followed the protocol. Although the recognition accuracy of works (37, 39, 41–43, 46, 47) is significantly higher than that of our method, they all use deep learning models, which require a large number of labeled training samples and consume longer training time.…”
Section: Experiments and Results
confidence: 99%
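The image-level versus patient-level distinction drawn in the statement above can be made concrete with a short sketch. It assumes the patient-level score commonly used for BreaKHis-style benchmarks (the fraction of a patient's images classified correctly, averaged over patients); the function names and the exact formula are illustrative and not taken from the cited works.

```python
from collections import defaultdict

def image_level_accuracy(y_true, y_pred):
    """Fraction of individual images classified correctly."""
    return sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)

def patient_level_accuracy(patient_ids, y_true, y_pred):
    """Per-patient fraction of correctly classified images, averaged over patients."""
    correct, total = defaultdict(int), defaultdict(int)
    for pid, t, p in zip(patient_ids, y_true, y_pred):
        total[pid] += 1
        correct[pid] += int(t == p)
    return sum(correct[pid] / total[pid] for pid in total) / len(total)
```

Under this definition a method can score well at the image level while performing unevenly across patients, which is why both numbers are usually reported.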
“…Regardless of the model used, the distribution of intensities in each color channel is represented by a histogram of 256 bins (for images coded on 8 bits). These histograms can be concatenated and used as a color descriptor of (3 × 256) values [17], [22]. Instead of concatenating the three histograms independently, it is also possible to generate a 3D histogram representing the joint distribution of the three channels.…”
Section: A. Color Histogram Features
confidence: 99%
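The concatenated and joint color histograms described in this statement are straightforward to compute. Below is a minimal sketch assuming an 8-bit RGB image stored as an H × W × 3 NumPy array; the function names, the normalization step, and the 8-level quantization of the joint histogram are illustrative choices, not details taken from the cited works.

```python
import numpy as np

def concatenated_rgb_histogram(image, bins=256):
    """Concatenate the per-channel intensity histograms into one descriptor.

    For an 8-bit RGB image each channel yields a 256-bin histogram, so the
    descriptor has 3 * 256 = 768 values.
    """
    hists = [
        np.histogram(image[..., c].ravel(), bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]
    descriptor = np.concatenate(hists).astype(np.float64)
    # Normalize so descriptors from images of different sizes are comparable.
    return descriptor / descriptor.sum()

def joint_rgb_histogram(image, levels=8):
    """3D histogram of the joint (R, G, B) distribution.

    A full 256^3-bin joint histogram is very sparse, so each channel is
    quantized to `levels` values (8^3 = 512 bins with the default).
    """
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(levels,) * 3, range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

# Example usage on a random 8-bit image:
img = np.random.randint(0, 256, size=(460, 700, 3), dtype=np.uint8)
features = concatenated_rgb_histogram(img)  # shape (768,)
```

The concatenated descriptor ignores correlations between channels, which is exactly what the joint 3D histogram captures at the cost of a coarser per-channel quantization.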
“…This study found that performing a grayscale transformation, as a stain normalization method, decreases the accuracy of the results. Study [52] claims that conventional normalization techniques increase the noise in the image and introduces a new normalization technique that controls the noise.…”
Section: Binary Classification
confidence: 99%
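For reference, "grayscale transformation" in such preprocessing pipelines usually means a luminance-weighted combination of the color channels; a minimal sketch is given below. The ITU-R BT.601 weights are an assumption on my part, since the statement does not specify the exact conversion used in the cited study.

```python
import numpy as np

def to_grayscale(image):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights assumed)."""
    weights = np.array([0.299, 0.587, 0.114])  # R, G, B contributions
    gray = image.astype(np.float64) @ weights
    return gray.astype(np.uint8)
```

Collapsing the three channels this way discards the very R-G-B distribution information that the concatenated-histogram descriptor relies on, which is consistent with the accuracy drop reported above.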