2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00377
Less is More: Sample Selection and Label Conditioning Improve Skin Lesion Segmentation

Cited by 8 publications (7 citation statements). References 24 publications.
“…Table 1 compares the segmentation performance of our baseline models as well as the individual base models across different prediction fusion schemes using the Jaccard index. In addition, we compare the performance of our proposed method against the work by Ribeiro et al. [28], where a subset of samples with small annotator disagreement is also taken into account during training. In our base models, we follow Ribeiro et al. [28] and minimize the loss function with respect to randomly selected image annotations in each training batch.…”
Section: Results
confidence: 99%
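The excerpt above describes training against one randomly selected annotation per image and evaluating with the Jaccard index. The sketch below is a minimal, hypothetical illustration of that idea only; the model, loss, and data layout are assumptions for illustration, not the cited authors' code.

```python
# Illustrative sketch: pick one annotator mask at random per image in each
# training batch, minimize a segmentation loss against it, and evaluate with
# the Jaccard index (IoU). Names and shapes here are assumptions.
import random
import torch
import torch.nn as nn


def jaccard_index(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Jaccard index (IoU) between a binarized prediction and a ground-truth mask."""
    pred = (pred > 0.5).float()
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return (intersection + eps) / (union + eps)


def training_step(model: nn.Module, images: torch.Tensor,
                  masks_per_image: list, optimizer: torch.optim.Optimizer) -> float:
    """One batch update; masks_per_image[i] is a list of H x W annotator masks for image i."""
    # Randomly select one annotation per image for this batch.
    chosen = torch.stack([random.choice(masks) for masks in masks_per_image])
    logits = model(images)  # expected shape (N, 1, H, W)
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits.squeeze(1), chosen.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```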
“…All images are 8-bit RGB color dermoscopy images. Similar to [28], we utilized 2,223 images with more than one segmentation ground-truth mask (2,094 with two, 100 with three, 36 with four, and 3 with five) to train our model. We split all 2,223 images into 80% for training and 20% for validation.…”
Section: Data
confidence: 99%
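For the 80%/20% train/validation split described above, a minimal illustrative sketch follows; the image-ID list and fixed seed are assumptions, and the cited work's exact split procedure is not specified here.

```python
# Minimal sketch of an 80%/20% train/validation split over multi-annotated images.
import random


def split_train_val(image_ids, val_fraction=0.2, seed=0):
    """Shuffle the image IDs with a fixed seed and split them 80/20."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train_ids, val_ids)


# Example usage with 2,223 placeholder IDs:
train_ids, val_ids = split_train_val(range(2223))
```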