2021
DOI: 10.1136/jclinpath-2021-207524
Evaluation of an open-source machine-learning tool to quantify bone marrow plasma cells

Abstract: Aims: The objective of this study was to develop and validate an open-source digital pathology tool, QuPath, to automatically quantify CD138-positive bone marrow plasma cells (BMPCs). Methods: We analysed CD138-scanned slides in QuPath. In the initial training phase, manual positive and negative cell counts were performed in representative areas of 10 bone marrow biopsies. Values from the manual counts were used to fine-tune parameters to detect BMPCs, using the positive cell detection and neural network (NN) class…
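The abstract describes counting CD138-positive and -negative cells to quantify BMPCs. The underlying arithmetic is simply the positive fraction of all counted cells; a minimal sketch, assuming the reported BMPC percentage is positives over total counted cells (function and variable names are illustrative, not from the paper's code):

```python
def bmpc_percentage(positive: int, negative: int) -> float:
    """Percentage of CD138-positive bone marrow plasma cells
    among all counted cells (positive + negative)."""
    total = positive + negative
    if total == 0:
        raise ValueError("no cells counted")
    return 100.0 * positive / total
```

For example, 30 positive detections among 100 counted cells yields a 30.0% BMPC estimate.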

Cited by 17 publications (16 citation statements)
References 28 publications
“…Our approach is therefore distinct from previous studies, which generally used less accurate area- or segmentation-based methods to identify/quantify plasma cells, and relied on low magnification visual estimates from pathologists as gold-standard references. 3, 4, 5, 6, 7, 8 One exception is a recent publication from Baranova et al 10 that also started with small image patches, before scaling up to WSIs. The major technical difference between our study and Baranova et al 10 is that they relied on about 40 features from QuPath's cell segmentation algorithm to train a downstream neural network, whereas ours is a single-stage CNN trained on raw image pixels and pathologist labels.…”
Section: Discussion
confidence: 99%
“… 3, 4, 5, 6, 7, 8 One exception is a recent publication from Baranova et al 10 that also started with small image patches, before scaling up to WSIs. The major technical difference between our study and Baranova et al 10 is that they relied on about 40 features from QuPath's cell segmentation algorithm to train a downstream neural network, whereas ours is a single-stage CNN trained on raw image pixels and pathologist labels. We speculate that our CNN's independence from prior segmentation explains why our reported measures of concordance appear to be higher than that reported by Baranova et al, although a fair comparison would involve a head-to-head evaluation using validation images from both studies.…”
Section: Discussion
confidence: 99%
“…Our approach is therefore distinct from previous studies, which generally used less accurate area- or segmentation-based methods to identify/quantify plasma cells, and relied on low magnification visual estimates from pathologists as gold-standard references [38]. One exception is a recent publication from Baranova et al [10] that also started with small image patches, before scaling up to WSIs. The major technical difference between our study and Baranova et al [10] is that they relied on about 40 features from QuPath’s cell segmentation algorithm to train a downstream neural network, while ours is a single-stage CNN trained on raw image pixels and pathologist labels.…”
Section: Discussion
confidence: 99%
“…One exception is a recent publication from Baranova et al [10] that also started with small image patches, before scaling up to WSIs. The major technical difference between our study and Baranova et al [10] is that they relied on about 40 features from QuPath’s cell segmentation algorithm to train a downstream neural network, while ours is a single-stage CNN trained on raw image pixels and pathologist labels. We speculate that our CNN’s independence from prior segmentation explains why our reported measures of concordance appear to be higher than that reported by Baranova et al, although a fair comparison would involve a head-to-head evaluation using validation images from both studies.…”
Section: Discussion
confidence: 99%
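The citing authors compare "measures of concordance" between automated and manual BMPC counts without naming the metric in these snippets. One common choice in such method-validation studies is Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic bias; treating that as an assumption, a minimal pure-Python sketch:

```python
from statistics import fmean

def lins_ccc(x: list[float], y: list[float]) -> float:
    """Lin's concordance correlation coefficient between paired
    measurements, e.g. automated vs manual BMPC percentages.
    Returns 1.0 for perfect agreement, values near 0 for none."""
    if len(x) != len(y) or len(x) < 2:
        raise ValueError("need two equal-length samples of size >= 2")
    mx, my = fmean(x), fmean(y)
    # population (biased) variances and covariance
    vx = fmean([(a - mx) ** 2 for a in x])
    vy = fmean([(b - my) ** 2 for b in y])
    cov = fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Identical paired measurements give a coefficient of exactly 1.0; a constant offset or scale difference between the two raters pulls it below the Pearson correlation.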