2019
DOI: 10.1007/978-3-030-32778-1_8

Mask2Lesion: Mask-Constrained Adversarial Skin Lesion Image Synthesis

Abstract: Skin lesion segmentation is a vital task in skin cancer diagnosis and further treatment. Although deep learning based approaches have significantly improved the segmentation accuracy, these algorithms are still reliant on having a large enough dataset in order to achieve adequate results. Inspired by the immense success of generative adversarial networks (GANs), we propose a GAN-based augmentation of the original dataset in order to improve the segmentation performance. In particular, we use the segmentation m…
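The augmentation idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate_lesion` is a hypothetical stand-in for a trained mask-to-image generator (Mask2Lesion uses a mask-conditioned GAN), and the point shown is that each synthetic image is paired with the exact mask that conditioned it, so the augmented set carries pixel-perfect segmentation labels for free.

```python
import numpy as np

def generate_lesion(mask: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Hypothetical stand-in for a trained mask-to-lesion generator.
    Any function mapping a binary mask to a synthetic image fits here."""
    noise = rng.normal(0.5, 0.1, size=mask.shape)
    # brighter texture inside the lesion mask, darker background outside
    return np.where(mask, noise, 0.2 * noise)

def augment(masks, n_per_mask, seed=0):
    """Build (synthetic image, mask) pairs; the conditioning mask itself
    serves as the ground-truth segmentation label for each sample."""
    rng = np.random.default_rng(seed)
    pairs = []
    for m in masks:
        for _ in range(n_per_mask):
            pairs.append((generate_lesion(m, rng), m))
    return pairs

# toy run: 3 masks, 2 synthetic images each -> 6 labeled training pairs
masks = [np.zeros((4, 4), dtype=bool) for _ in range(3)]
aug = augment(masks, n_per_mask=2)
print(len(aug))  # 6
```

The augmented pairs can then simply be appended to the real training set for the segmentation network.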

Cited by 26 publications (24 citation statements); references 23 publications (25 reference statements).
“…The boundary between lesion and surrounding tissues on dermoscopic images is discernible when compared with the soft tissue boundaries on grayscale CT. Thus, it can be seen from Figure 5 that the 3D tumors generated by the method in [29] are blurry when compared with our proposed FRGAN. The tumors generated by FRGAN are natural with high visual texture similarities with surroundings.…”
Section: Qualitative Analysis
confidence: 91%
“…When using extra synthetic images generated by synthesis models, the overall segmentation performance improved in terms of Dice, Jaccard, VOE, RVD, and HD across all three datasets. Specifically, the AU-Net/U2-Net trained with tumors generated by our method achieved the best performance on the LiTS and KiTS datasets in terms of spatial overlap measured by Dice, Jaccard, VOE, and HD, compared to networks trained with tumors generated by [4] and [29]. For LiTS, the advanced U2-Net obtained the best performance with our synthetic samples, outperforming the second best by 1.1%/1.1% in terms of Dice/Jaccard.…”
Section: Comparison With Other Methods
confidence: 97%
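The overlap metrics named in the excerpt above (Dice, Jaccard, and VOE, which is simply 1 − Jaccard) are straightforward to compute from binary masks. A minimal sketch, using toy masks rather than any dataset from the cited works:

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, gt: np.ndarray):
    """Dice and Jaccard overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return dice, jaccard

# toy example: two 4x4 squares offset by one pixel
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16 px
gt = np.zeros((8, 8), dtype=bool);   gt[3:7, 3:7] = True    # 16 px
dice, jac = dice_jaccard(pred, gt)   # intersection = 9, union = 23
print(round(dice, 3), round(jac, 3))  # 0.562 0.391
```

VOE would then be `1 - jac`; Hausdorff distance (HD) is a boundary metric and needs the mask contours rather than a simple overlap count.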