2020
DOI: 10.1016/j.patrec.2020.06.002
SceneAdapt: Scene-based domain adaptation for semantic segmentation using adversarial learning

Cited by 16 publications (9 citation statements). References 7 publications.
“…Finally, the morphological opening operation was used to process the prediction results. In future work, we will explore the integration of prior knowledge for the macular edema in retinal OCT images, investigate advanced semantic segmentation network architectures and the self-supervision model [31,32], and achieve much better 3D segmentation results. For the problems of small amount of data and the annotation differences caused by the subjectivity of experts, we will increase the collection of data, find a number of relevant professionals to label, and select relatively accurate annotations to train the network model more effectively.…”
Section: Discussion
confidence: 99%
“…We used A1, A2, and A4 as source domain, i.e., excluding the tobacco plants (as in [18], we named this group of images CVPPP*). Overall, the CVPPP* dataset contains 964 images and a number of leaves ranging in [4,32]. For training, we split this dataset as in [17] to perform a 4-fold cross-validation for the pretraining step.…”
Section: Plants
confidence: 99%
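The statement above mentions splitting the 964-image CVPPP* dataset for a 4-fold cross-validation pretraining step. The actual partitioning follows [17]; the sketch below is only a generic shuffled k-fold index split, with the fold count and seed as illustrative assumptions.

```python
import numpy as np

def kfold_indices(n, k=4, seed=0):
    """Yield (train, val) index arrays for a shuffled k-fold split."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)  # k roughly equal chunks
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

n = 964  # images in the CVPPP* group described above
for train, val in kfold_indices(n, k=4):
    # every image appears exactly once, in either train or val
    assert len(train) + len(val) == n
```

Each of the 4 validation folds holds roughly a quarter of the images, and the union of train and validation indices always covers the whole dataset.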
“…To minimise the covariate shift, several approaches have been proposed, such as Maximum Mean Discrepancy (MMD) [3], adversarial training [4][5][6][7], as well as style transfer [8]. DA has recently been mostly investigated for classification tasks, showing outstanding results on closed set [5,7,9,10,11], open set [12,13], partial [14,15], and even universal cases [16].…”
Section: Introduction
confidence: 99%
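The Maximum Mean Discrepancy mentioned in the statement above measures the distance between two feature distributions via kernel mean embeddings. A minimal numerical sketch follows; the RBF kernel, bandwidth, and toy Gaussian data are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, then Gaussian kernel
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared MMD: ||mean embedding(X) - mean embedding(Y)||^2
    kxx = rbf_kernel(X, X, gamma).mean()
    kyy = rbf_kernel(Y, Y, gamma).mean()
    kxy = rbf_kernel(X, Y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
src  = rng.normal(0.0, 1.0, (200, 4))  # source-domain features
tgt  = rng.normal(1.5, 1.0, (200, 4))  # mean-shifted target-domain features
same = rng.normal(0.0, 1.0, (200, 4))  # fresh sample from the source distribution

# A covariate shift between domains shows up as a larger discrepancy
assert mmd2(src, tgt) > mmd2(src, same)
```

Domain-adaptation methods that use MMD typically minimize this quantity between source and target features, driving the two domains toward a shared representation; the adversarial approaches cited above pursue the same alignment goal with a domain discriminator instead of a kernel statistic.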
“…Unlike cross-view image classification [36,63,10,1,16], aligning domains of different viewpoints for pixel-level prediction tasks is ill-posed, since the task is indeed view dependent [7]. The most relevant are [11,8], which again resort to adversarial domain alignment. Additionally, [8] requires known camera intrinsics and extrinsics.…”
Section: Related Work
confidence: 99%