2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00408
What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic Lesions Segmentation

Cited by 118 publications (39 citation statements)
References 32 publications
“…In the future, we will consider HeDA in semantic segmentation tasks [79], [80]. Inspired by Dong et al. [81], [82], who develop a novel perspective to distinguish transferable and untransferable representations across domains, we will develop a novel learning theory for semantic segmentation tasks to quantify transferability across heterogeneous domains.…”
Section: Discussion (mentioning)
Confidence: 99%
“…With the brief mention of neural network semantic segmentation methods, including FCN [26], intensive study has been carried out in the computer vision community, consistently showing improved results [27][28][29]. Furthermore, for particular objectives such as semantic lesion segmentation for medical use, References [30,31] showed improved methods built largely on GAN-based knowledge adaptation. However, we adopt trap ball segmentation, which is based on a supervised contour-detecting line-filler algorithm, and the Seg2pix network, which we also use for colorization.…”
Section: Sketch Parsing (mentioning)
Confidence: 99%
“…Here, we fine-tune the pretrained decoder with the selected knowledge k_t^s, giving higher importance weights when the selected knowledge is suitable for gold response generation. In this way, we further alleviate the mismatch problem, because we highlight the matched samples by assigning an importance weight to each instance (x_t, k_t^s, y_t) to reform the training data (Cai et al., 2020; Dong et al., 2020).…”
Section: Training (mentioning)
Confidence: 99%
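The quoted passage describes weighting each training instance (x_t, k_t^s, y_t) by how well its selected knowledge matches the gold response, so that matched samples dominate fine-tuning. A minimal sketch of such instance-level loss weighting follows; the cosine-similarity scorer, the tensor shapes, and the names importance_weight and weighted_nll_loss are assumptions for illustration, not the cited method.

```python
# Minimal sketch (assumption) of instance-level importance weighting: each
# training triple (x_t, k_t^s, y_t) gets a weight that scales its loss, so
# samples whose selected knowledge matches the gold response count more.
import torch
import torch.nn.functional as F


def importance_weight(selected_knowledge: torch.Tensor,
                      gold_response: torch.Tensor) -> torch.Tensor:
    """Hypothetical scorer: cosine similarity between already-encoded
    knowledge and gold-response vectors, mapped into (0, 1)."""
    sim = F.cosine_similarity(selected_knowledge, gold_response, dim=-1)
    return torch.sigmoid(sim)  # higher weight for better-matched knowledge


def weighted_nll_loss(logits: torch.Tensor,
                      targets: torch.Tensor,
                      weights: torch.Tensor) -> torch.Tensor:
    """Per-instance cross-entropy, scaled by its importance weight, averaged."""
    per_instance = F.cross_entropy(logits, targets, reduction="none")  # (B,)
    return (weights * per_instance).mean()


# Usage with placeholder tensors standing in for encoder/decoder outputs.
batch, vocab, dim = 8, 1000, 256
knowledge_vec = torch.randn(batch, dim)                  # encoded k_t^s
response_vec = torch.randn(batch, dim)                   # encoded gold y_t
logits = torch.randn(batch, vocab, requires_grad=True)   # decoder output, one step
targets = torch.randint(0, vocab, (batch,))              # gold tokens

w = importance_weight(knowledge_vec, response_vec)
loss = weighted_nll_loss(logits, targets, w)
loss.backward()  # fine-tune the decoder on importance-weighted instances
```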