2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01216
Learning Calibrated Medical Image Segmentation via Multi-rater Agreement Modeling

Cited by 110 publications (51 citation statements)
References 39 publications
“…Instead, in many cases, subjective labels from multiple experts are available. This raises the recent research attention [1,22,28,29,31,45] on multi-rater problem. Many works leveraged multi-rater annotations to attain calibrated result [28,29]. However, they still took majority vote as ground-truth in model evaluation, ignoring the discrepancy between majority vote and the potential gold standard.…”
Section: Related Work (mentioning)
confidence: 99%
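The discrepancy this statement points to, between a majority-vote label and the raters' underlying agreement, can be made concrete with a small sketch. The array shapes, the three-rater setup, and the 0.5 threshold below are illustrative assumptions, not details taken from the cited works.

# A minimal sketch (not from the paper) contrasting a majority-vote "ground truth"
# with a soft label averaged over raters.
import numpy as np

# Three hypothetical binary rater masks for the same 4-pixel image region.
rater_masks = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 0],
])

# Majority vote: a pixel is foreground if more than half the raters marked it.
majority_vote = (rater_masks.mean(axis=0) > 0.5).astype(np.float32)

# Soft label: per-pixel agreement ratio, keeping the inter-rater uncertainty.
soft_label = rater_masks.mean(axis=0)

print(majority_vote)  # [1. 1. 0. 0.]      -> disagreement at pixels 1 and 2 is discarded
print(soft_label)     # [1. 0.667 0.333 0.] -> agreement level is preserved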
“…This increased ability to encode the inter-rater variability is probably due to the fact that SoftSeg facilitates the propagation of soft labels throughout the training scheme: (1) no binarization of the input labels, (2) a loss function which does not penalize uncertain predictions, and (3) an activation function which does not enforce binary outputs. Considered with equivalent expertise in this work, future studies could account for the different expertise across raters, for instance by modulating the training scheme with FiLM layers, or by the use of expertise-aware inferring module Ji et al. (2021).…”
Section: When Using the Labels Through The Training Pipeline (mentioning)
confidence: 99%
confidence: 99%
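The three ingredients listed in this statement can be illustrated with a short, hedged PyTorch sketch. The toy network, the sigmoid output, and the soft Dice loss below are assumptions chosen for illustration; they are not the actual SoftSeg components nor the expertise-aware module of Ji et al. (2021).

# A minimal sketch of soft-label training: (1) targets are averaged rater masks,
# never binarized; (2) the loss accepts non-binary targets; (3) the final
# activation yields continuous outputs instead of hard binary masks.
import torch
import torch.nn as nn

def soft_dice_loss(pred, target, eps=1e-6):
    """Dice-style loss that works with soft (non-binary) predictions and targets."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy segmentation head: continuous outputs in [0, 1], no thresholding applied.
model = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image = torch.rand(1, 1, 32, 32)                        # one grayscale input image
rater_masks = (torch.rand(3, 1, 32, 32) > 0.5).float()  # three hypothetical rater masks
soft_target = rater_masks.mean(dim=0, keepdim=True)     # (1) soft label, not binarized

optimizer.zero_grad()
pred = model(image)                                     # (3) continuous prediction
loss = soft_dice_loss(pred, soft_target)                # (2) loss tolerates soft targets
loss.backward()
optimizer.step()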