2022
DOI: 10.1016/j.knosys.2021.108021

Semi-supervised NPC segmentation with uncertainty and attention guided consistency

Cited by 51 publications (7 citation statements)
References: 41 publications
“…One issue with the mean-teacher is that the ensemble model may produce unreliable results. Thus, uncertainty estimation [12], [13], [14] is integrated with the mean-teacher network to facilitate more reliable knowledge transfer from the ensemble, resulting in more accurate segmentation labels than competing methods. Uncertainty-guided mean-teacher networks have been further improved by using triple-uncertainty in [20], [21], where data, model and task-level consistencies are combined via multi-task learning to exploit unlabeled data with higher efficiency.…”
Section: Related Work, A. Semi-supervised Learning for Segmentation
Citation type: mentioning; confidence: 99%
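The uncertainty-guided mean-teacher scheme described in the statement above can be illustrated in a few lines. The following is a minimal sketch, assuming a PyTorch student/teacher segmentation pair with Monte-Carlo dropout for uncertainty estimation; the function and variable names are illustrative and not taken from the cited papers.

import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track an exponential moving average of the student.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.data.mul_(alpha).add_(s_p.data, alpha=1.0 - alpha)

@torch.no_grad()
def teacher_prediction_with_uncertainty(teacher, x, n_samples=8):
    # Monte-Carlo dropout: average several stochastic teacher passes and use
    # predictive entropy as a per-voxel uncertainty estimate.
    teacher.train()  # keep dropout layers active
    probs = torch.stack([torch.softmax(teacher(x), dim=1) for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)                                    # (B, C, H, W)
    entropy = -(mean_prob * torch.log(mean_prob + 1e-6)).sum(dim=1)  # (B, H, W)
    return mean_prob, entropy

def uncertainty_guided_consistency(student_logits, teacher_prob, uncertainty, threshold):
    # Mean-squared consistency, counted only where the teacher is confident.
    student_prob = torch.softmax(student_logits, dim=1)
    mask = (uncertainty < threshold).float().unsqueeze(1)  # reliable voxels only
    diff = (student_prob - teacher_prob) ** 2
    return (mask * diff).sum() / (mask.sum() * diff.size(1) + 1e-6)

In a full training loop this consistency term would be added to the supervised loss on labeled data, with ema_update called after each optimizer step; the threshold schedule and number of dropout samples are design choices that vary across the cited works.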
“…Chen et al [61] propose a multi-level consistency loss that computes similarities between multi-scale features in an additional discriminator, where the inputs are segmentation regions obtained by multiplying the unlabeled input image with the predicted segmentation probability maps, rather than the probability maps themselves. Hu et al [62] propose an attention-guided consistency that encourages the attention maps from the student model and the teacher model to be consistent. Each image contains an object of the same class, so different images share similar semantics in the feature space.…”
Section: Unsupervised Regularization with Consistency Learning
Citation type: mentioning; confidence: 99%
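As an illustration of the attention-guided consistency idea attributed to Hu et al [62] above, the sketch below derives a spatial attention map from intermediate features by channel-wise pooling and penalizes the discrepancy between student and teacher maps. The pooling choice and the names are assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def spatial_attention_map(features):
    # Collapse channels into an L2-normalized spatial attention map.
    attn = features.pow(2).mean(dim=1, keepdim=True)            # (B, 1, H, W)
    attn = attn.flatten(2)                                      # (B, 1, H*W)
    return attn / (attn.norm(p=2, dim=2, keepdim=True) + 1e-6)

def attention_consistency_loss(student_feats, teacher_feats):
    # Encourage the student's attention map to match the teacher's.
    a_s = spatial_attention_map(student_feats)
    a_t = spatial_attention_map(teacher_feats.detach())  # no gradient through the teacher
    return F.mse_loss(a_s, a_t)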
“…Another common semi-supervised method is co-training, which uses the interplay between two networks to improve segmentation performance [49,50]. Hu et al [51] proposed an uncertainty- and attention-guided consistency semi-supervised method to segment nasopharyngeal carcinoma. Lou et al [52] proposed a semi-supervised method that extends the backbone segmentation network to produce pyramidal predictions at different scales.…”
Section: Semi-supervised
Citation type: mentioning; confidence: 99%
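For the co-training idea mentioned in the statement above, a generic cross-pseudo-labeling step on unlabeled data might look like the sketch below. This is an illustrative simplification of the two-network idea, not the specific formulation used in [49,50] or [52].

import torch.nn.functional as F

def co_training_consistency(logits_a, logits_b):
    # Each network is supervised by the other's hard pseudo-labels on unlabeled data.
    pseudo_a = logits_a.argmax(dim=1).detach()    # pseudo-labels from network A
    pseudo_b = logits_b.argmax(dim=1).detach()    # pseudo-labels from network B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # A learns from B's predictions
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # B learns from A's predictions
    return loss_a + loss_b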