2018 25th IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2018.8451071
Semi-Supervised Automatic Layer and Fluid Region Segmentation of Retinal Optical Coherence Tomography Images Using Adversarial Learning

Cited by 27 publications (47 citation statements)
References 19 publications
“…Semantic segmentation architectures are typically trained on huge datasets with pixelwise annotations (e.g., the Cityscapes [5] or CamVid [1] datasets), which are highly expensive, time-consuming and error-prone to generate. To overcome this issue, semisupervised methods are emerging, trying to exploit weakly annotated data (e.g., with only image labels or only bounding boxes) [25,31,37,39,13,6,14,32] or completely unlabeled [24,29,15,31,19] data after a first stage of supervised training. In particular the works of [22,31] have paved the way respectively to adversarial learning approaches for the semantic segmentation task and to their application to semi-supervised learning.…”
Section: Related Work
confidence: 99%
“…In particular the works of [22,31] have paved the way respectively to adversarial learning approaches for the semantic segmentation task and to their application to semi-supervised learning. The recent approaches of [15,19] propose semi-supervised frameworks exploiting adversarial learning with a Fully Convolutional Discriminator (FCD) trying to distinguish the predicted probability maps from the ground truth segmentation distributions at pixel-level. These works targeted a scenario where the dataset is only partially labeled: in their settings, unlabeled data comes from the same dataset and shares the same domain data distribution of labeled data.…”
Section: Related Work
confidence: 99%
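The excerpt above describes a Fully Convolutional Discriminator (FCD): instead of emitting a single real/fake score per image, it maps a C-channel segmentation probability map to a per-pixel confidence map distinguishing predicted maps from one-hot ground-truth maps. A minimal numpy sketch of that idea follows — the single 1x1-convolution discriminator, its random weights, and the toy sizes are illustrative assumptions, not the architecture used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8  # classes, height, width (toy sizes)

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical minimal FCD: one 1x1 convolution (a per-pixel linear map
# over the C channels) followed by a sigmoid, so the output keeps the
# spatial resolution and gives one real/fake confidence per pixel.
w = rng.normal(size=C)

def fcd(prob_map):
    logits = np.tensordot(w, prob_map, axes=(0, 0))  # (H, W)
    return 1.0 / (1.0 + np.exp(-logits))             # per-pixel confidence

# Generator output: softmax probability map over C classes.
pred = softmax(rng.normal(size=(C, H, W)))
# Ground truth: one-hot encoded segmentation map, same shape.
gt = np.eye(C)[rng.integers(0, C, size=(H, W))].transpose(2, 0, 1)

conf_pred = fcd(pred)  # discriminator's confidence that each pixel is "real"
conf_gt = fcd(gt)
```

Because the confidence map is spatial, the unlabeled-data part of semi-supervised training can select only the pixels the discriminator already trusts, rather than accepting or rejecting whole images.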
“…The first component that controls the training is a standard cross-entropy loss exploiting ground truth annotations used to perform a supervised training on synthetic data. The second is an adversarial learning scheme similar to the ones used in works (e.g., [5], [6]) dealing with semi-supervised semantic segmentation (i.e., for dealing with partially annotated datasets). In particular, we exploited a fully convolutional discriminator which produces a pixel-level confidence map distinguishing between data produced by the generator (both from real or synthetic data) and the ground truth segmentation maps.…”
Section: Introduction
confidence: 99%
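The two-term objective described above — a supervised pixel-wise cross-entropy on annotated data plus an adversarial term that rewards the generator when the fully convolutional discriminator marks its output as "real" — can be sketched as follows. This is a hedged numpy illustration under assumed toy shapes; the weighting `lambda_adv` and the random stand-in for the discriminator's confidence map are hypothetical, not values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 4, 8, 8  # classes, height, width (toy sizes)

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

pred = softmax(rng.normal(size=(C, H, W)))   # generator's class probabilities
labels = rng.integers(0, C, size=(H, W))     # ground-truth class indices
# Stand-in for the FCD's per-pixel "real" confidence on the prediction.
d_conf = 1.0 / (1.0 + np.exp(-rng.normal(size=(H, W))))

# Term 1: standard pixel-wise cross-entropy against the annotations.
rows, cols = np.indices((H, W))
ce_loss = -np.mean(np.log(pred[labels, rows, cols] + 1e-8))

# Term 2: adversarial loss for the generator — push the discriminator's
# per-pixel confidence toward "real" (1) everywhere on the prediction.
adv_loss = -np.mean(np.log(d_conf + 1e-8))

lambda_adv = 0.1  # hypothetical weighting between the two terms
total_loss = ce_loss + lambda_adv * adv_loss
```

On labeled (or synthetic) batches both terms apply; on unlabeled batches only the adversarial term is available, which is what makes the scheme semi-supervised.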
“…In this work, we develop a new data-driven deep spectral learning (DSL) method to enable highly robust and reliable sO2 estimation. By training a neural network to directly relate the spectral measurements to the corresponding independent sO2 labels, DSL bypasses the need for a rigid parametric model, similar to existing deep learning methods for solving optical inverse problems [33][34][35][36][37]. We show that DSL can be trained to be highly robust to multiple sources of variabilities in the experiments, including different devices, imaging protocols, speeds, and other possible longitudinal variations.…”
confidence: 99%