2020
DOI: 10.1038/s41598-020-69534-6
Radiomics feature reproducibility under inter-rater variability in segmentations of CT images

Abstract: Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers, and second with probabilistic automated segmentations using a recently developed neural network (PHiSeg). We test feature reproducibility on three publicly available datasets of lung, kidne…

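The reproducibility analysis the abstract describes can be illustrated with a small sketch. The snippet below is not taken from the paper's code; it is a minimal, hypothetical example that computes a two-way random-effects intraclass correlation coefficient, ICC(2,1), for one radiomic feature measured on the same cases under several raters' segmentations, which is one common way such inter-rater robustness is quantified.

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: array of shape (n_subjects, n_raters) holding one radiomic
    feature value per subject and per rater's segmentation.
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means

    # ANOVA mean squares
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: one feature, 5 lesions, 4 expert readers.
feature_values = np.array([
    [10.1, 10.3,  9.8, 10.0],
    [15.2, 15.0, 15.5, 15.1],
    [ 8.7,  8.9,  8.6,  8.8],
    [12.0, 12.4, 11.9, 12.1],
    [20.3, 20.1, 20.6, 20.2],
])
print(f"ICC(2,1) = {icc2_1(feature_values):.3f}")
```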
Cited by 83 publications (73 citation statements)
References 34 publications
“…In addition, we used manual segmentation of tumors by only one human expert. While studies have shown that in general tumor segmentation and radiomic feature extraction could be affected by inter-rater variability 45 , recent studies suggest that such variability may not necessarily affect the robustness of all radiomic features 46 . In a preliminary evaluation, we also recently showed that despite inter-reader variation, radiomic features extracted from segmentations obtained by different human raters tend to be highly correlated and have similar predictive value 47 .…”
Section: Discussion (mentioning)
confidence: 99%
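The kind of cross-rater agreement described in that statement can be checked with a short sketch. The data and feature names below are invented placeholders, not taken from the cited studies: for each radiomic feature, the values extracted from rater A's and rater B's segmentations of the same cases are correlated across cases.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical feature matrices: rows = cases, columns = radiomic features,
# one matrix per rater's segmentations of the same cases.
rng = np.random.default_rng(0)
features_rater_a = rng.normal(size=(30, 5))
features_rater_b = features_rater_a + rng.normal(scale=0.1, size=(30, 5))

feature_names = ["shape_Volume", "firstorder_Mean", "glcm_Contrast",
                 "glrlm_RunEntropy", "glszm_ZoneEntropy"]

# Per-feature Spearman correlation across cases: high values suggest that
# inter-rater segmentation differences barely change the feature ranking.
for j, name in enumerate(feature_names):
    rho, _ = spearmanr(features_rater_a[:, j], features_rater_b[:, j])
    print(f"{name}: rho = {rho:.2f}")
```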
“…However, it is not clear to what degree segmentation variability has an impact on radiomics features, especially since a universal automatic segmentation algorithm has not been validated and established for all imaging applications, and some features may not be stable and reproducible across different methods. Furthermore, automatic segmentation means “probabilistic” segmentation, and the ground truth for automatic boundary delineation is available only for large datasets capable of training the neural network [ 61 ]. Unfortunately, 11 patients were not sufficient to apply an automatic or semi-automatic approach, leading to more variable results than manual segmentation.…”
Section: Discussion (mentioning)
confidence: 99%
“…To determine the variability and repeatability (test–retest reliability) 28 , 29 of the developed algorithm, we additionally have compared two manual annotations by the same observer with two automatic segmentations, using the same test dataset of 100 images (50 upper and 50 lower eyelids). The same observer (M.A.K.S.)…”
Section: Methods (mentioning)
confidence: 99%
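As a rough illustration of the repeatability comparison that statement describes (repeated manual annotations compared against repeated automatic segmentations), the following hypothetical sketch computes the Dice overlap between two binary masks of the same image; the masks here are synthetic placeholders rather than real eyelid annotations.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Synthetic masks standing in for two annotations of the same structure.
annotation_1 = np.zeros((100, 100), dtype=bool)
annotation_1[30:70, 30:70] = True
annotation_2 = np.zeros((100, 100), dtype=bool)
annotation_2[32:72, 31:69] = True

print(f"Dice between repeated annotations: {dice(annotation_1, annotation_2):.3f}")
```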