2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.404
Learning to Detect Salient Objects with Image-Level Supervision

Cited by 914 publications (680 citation statements). References 43 publications.
“…Here for a fair comparison, we train our model on the VGG [43] and ResNet [16], respectively. It can be seen that our model performs favorably against the state-of-the-art methods under all evaluation metrics on all the compared datasets, especially on the relatively challenging dataset SOD [36,44] (2.9% and 1.7% improvements in F-measure and S-measure) and the largest dataset DUTS [46] (3.0% and 2.5%). Specifically, compared with the current best approach [33], the average F-measure improvement on six datasets is 1.9%.…”
Section: Comparison With the State-of-the-art
confidence: 92%
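The citation statements above report gains in F-measure, the standard metric in salient-object detection. As a point of reference, the conventional form weights precision over recall with β² = 0.3. The sketch below is a minimal, hedged illustration of that metric for a thresholded saliency map; the function name and array inputs are illustrative, not from the paper.

```python
import numpy as np

def f_measure(pred, gt, beta_sq=0.3):
    """F-beta score for a binarized saliency map against a ground-truth mask.

    beta_sq = 0.3 is the convention in the salient-object-detection
    literature, emphasizing precision over recall.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    # True positives: pixels predicted salient that are salient in the mask.
    tp = np.logical_and(pred, gt).sum()
    # Guard against empty predictions/masks to avoid division by zero.
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```

In benchmark practice the saliency map is typically thresholded at multiple levels and the maximum or mean F-measure is reported; the snippet evaluates a single binarization.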
“…We train our model on the DUTS [46] dataset, following [33,49,59,63]. For a fair comparison, we use VGG [43] and ResNet [16] as backbone networks, respectively.…”
Section: Implementation Details
confidence: 99%
“…To our best knowledge, HRSOD is cur- Concretely, (c) is from HKU-IS [21]. (d) is from DUTS-Test [33]. (e) is from THUR [5].…”
Section: High-resolution Saliency Detection Dataset
confidence: 99%
“…In this paper, we choose the training sets of DUTS [39], DAVIS [30] and FBMS [2] as our training set. We evaluate video salient object detection methods on the DAVIS, FBMS and ViSal [43] benchmarks.…”
Section: Methods
confidence: 99%