2019
DOI: 10.48550/arxiv.1903.03462
Preprint

On Boosting Semantic Street Scene Segmentation with Weak Supervision

Cited by 3 publications (4 citation statements)
References 0 publications
“…This was the largest model with 448,794,235 trainable parameters and 87,356,100 non-trainable parameters. The optimizer of choice was ADAM [8] without gradient clipping and the loss function used was sparse categorical cross entropy [9] for all of the three models.…”
Section: Model Architecture
confidence: 99%
“…The optimizer of choice was ADAM [8] without gradient clipping and the loss function used was sparse categorical cross entropy [9] for all of the three models…”
Section: Model Based On 23
confidence: 99%
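The cited setup pairs the ADAM optimizer with a sparse categorical cross entropy loss, i.e. cross entropy computed against integer class ids rather than one-hot targets. A minimal NumPy sketch of that loss (the function name and the toy logits/labels are illustrative, not taken from the cited models):

```python
import numpy as np

def sparse_categorical_cross_entropy(logits, labels):
    """Cross entropy where labels are integer class ids, not one-hot vectors."""
    # Numerically stable log-softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Pick the log-probability of each true class; the loss is the mean negative.
    return -log_probs[np.arange(len(labels)), labels].mean()

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.3]])
labels = np.array([0, 1])
loss = sparse_categorical_cross_entropy(logits, labels)
```

Using integer labels avoids materializing one-hot tensors, which matters at segmentation scale where every pixel carries a class id.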
“…In this work we experiment with Cityscapes Dense subset [12] as C, and Cityscapes Coarse subset and Open Images bounding boxes subset [13] as O. We employ the hierarchical convolutional networks of [14] that can be trained on multiple datasets with strong and weak labels, which require the labels z of dataset O.…”
Section: Problem Definition
confidence: 99%
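The cited setup trains on two sources of supervision at once: a densely labeled set C and a weakly labeled set O. A hypothetical sketch of interleaving batches from the two (the loader names and the simple alternating schedule are illustrative assumptions, not the authors' exact sampling policy):

```python
import itertools

def mixed_batches(strong_loader, weak_loader):
    # Interleave one strongly labeled batch (dense masks from C) with one
    # weakly labeled batch (coarse labels / bounding boxes from O).
    # The strong loader is cycled so every weak batch is consumed once.
    for strong, weak in zip(itertools.cycle(strong_loader), weak_loader):
        yield ("strong", strong)
        yield ("weak", weak)

batches = list(mixed_batches(["c0", "c1"], ["o0", "o1", "o2"]))
```

Tagging each batch with its supervision type lets the training step dispatch to the appropriate loss for strong versus weak labels.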
“…We use our published hierarchical convolutional network for training simultaneously on weak and strong supervision for semantic segmentation [2], [14]. The network consists of a conventional ResNet-50 feature extractor, that is modified to have semantic segmentation output with dilated convolutions and an upsampling module.…”
Section: B Convolutional Model For Training On Strong and Weak Superv...
confidence: 99%
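The cited network swaps striding in the ResNet-50 backbone for dilated convolutions so the feature map keeps spatial resolution while the receptive field still grows. A 1-D NumPy illustration of that property (the function and kernel are toy stand-ins, not the authors' layers): with "same" padding, a dilated convolution preserves length, and each output sees dilation * (k - 1) + 1 input positions.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D convolution with dilation: output length equals
    input length, so spatial resolution is preserved while the receptive
    field widens with the dilation rate."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

signal = np.arange(8, dtype=float)
same_len = dilated_conv1d(signal, np.array([1.0, 1.0, 1.0]), dilation=2)
```

A 3-tap kernel with dilation 2 here covers 5 input positions per output, which is how stacked dilated layers recover the receptive field that the removed downsampling would otherwise have provided.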