2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01655
Weakly Supervised Video Salient Object Detection

Cited by 72 publications (34 citation statements) · References 37 publications
“…It greatly reduces the manual labeling cost, but still requires 20% pixel-wise annotations, which remains a large undertaking for large video datasets with much richer scenes. Zhao et al. [20] designed a weakly supervised VSOD model trained on scribbles to relieve the burden of pixel-wise labeling. Although scribble annotations effectively reduce the time and monetary cost compared with pixel-wise annotations, they still require costly, time-consuming manual labeling.…”
Section: Weakly/Semi/Un-Supervised Video Salient Object Detection
Citation type: mentioning (confidence: 99%)
“…As shown in Table I, we compare our proposed method with three existing image salient object detection models (PoolNet [54], EGNet [57], PAKRN [21]), ten fully supervised VSOD models (SCOM [41], MBN [58], PDB [8], FGRN [42], MGA [11], RCRN [10], SSAV [9], PCSA [43], STVS [44], and DCFN [45]), and four weakly supervised or unsupervised models (SSOD [59], GF [23], SAG [1], WS [20]). For a fair comparison, the results of these methods are provided directly by the authors or produced by their trained models, and we use the same evaluation code to test them.…”
Section: H. Comparison With the State-of-the-Arts
Citation type: mentioning (confidence: 99%)
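The excerpt above refers to running all compared methods through the same evaluation code. As a point of reference, the sketch below shows two saliency metrics that such evaluation code typically computes, MAE and F-measure with the conventional beta^2 = 0.3 and an adaptive threshold. It is an illustration under those common conventions, not the evaluation code used by the cited work; the array shapes and names are placeholders.

```python
# Minimal sketch of two standard saliency evaluation metrics (MAE and
# F-measure) commonly used when comparing VSOD methods. Illustrative only.
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both expected to lie in [0, 1]."""
    return float(np.mean(np.abs(pred.astype(np.float64) - gt.astype(np.float64))))

def f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3) -> float:
    """F-measure with beta^2 = 0.3, binarizing the prediction at the
    common adaptive threshold of twice its mean saliency value."""
    thresh = min(2.0 * float(pred.mean()), 1.0)
    binary = (pred >= thresh).astype(np.float64)
    gt_bin = (gt > 0.5).astype(np.float64)
    tp = (binary * gt_bin).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt_bin.sum() + 1e-8)
    return float((1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8))

if __name__ == "__main__":
    # Evaluate one frame's prediction against a stand-in ground-truth mask.
    rng = np.random.default_rng(0)
    pred = rng.random((240, 320))                            # stand-in saliency map
    gt = (rng.random((240, 320)) > 0.7).astype(np.float64)   # stand-in binary mask
    print("MAE:", mae(pred, gt), "F-measure:", f_measure(pred, gt))
```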
“…Guo et al. [40] proposed a fast VSOD method that uses principal motion vectors to represent the corresponding motion patterns; this motion information is then coupled with color cues and fed into a multi-clue optimization framework to achieve spatiotemporal VSOD. Zhao et al. [41] proposed a weakly supervised VSOD model based on eye-fixation annotations. Compared with fully supervised VSOD models, the proposed annotation scheme dramatically reduces annotation time.…”
Section: A. Hand-Crafted VSOD Approaches
Citation type: mentioning (confidence: 99%)
“…As a sparse and low-cost supervision label, the scribble annotation has received increasing attention for salient object detection and object segmentation in recent works [28,29,30,31,32,16,17]. However, scribble annotations are too simple and sparse to convey sufficient information, e.g., object structure and details.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
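Since several of the excerpts above discuss scribbles as sparse supervision, the sketch below illustrates how such sparse labels are commonly consumed during training: a partial (masked) cross-entropy loss that only penalizes pixels the annotator actually scribbled, leaving unlabeled pixels unconstrained. This is a generic sketch of that widely used technique, not the loss of any specific cited model; the tensor names, shapes, and the 2% labeling rate are illustrative assumptions.

```python
# Minimal sketch of a partial (masked) binary cross-entropy loss for
# scribble supervision: only scribbled pixels contribute to the loss.
import torch
import torch.nn.functional as F

def partial_bce_loss(logits: torch.Tensor,
                     scribble: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """
    logits:   (B, 1, H, W) raw network outputs.
    scribble: (B, 1, H, W) 1 for foreground scribbles, 0 for background scribbles.
    mask:     (B, 1, H, W) 1 where a pixel was scribbled (labeled), 0 elsewhere.
    """
    per_pixel = F.binary_cross_entropy_with_logits(logits, scribble, reduction="none")
    labeled = mask.sum().clamp(min=1.0)       # avoid division by zero
    return (per_pixel * mask).sum() / labeled # average over labeled pixels only

# Example with random tensors standing in for a batch of frames.
logits = torch.randn(2, 1, 64, 64)
scribble = torch.randint(0, 2, (2, 1, 64, 64)).float()
mask = (torch.rand(2, 1, 64, 64) < 0.02).float()  # ~2% of pixels carry labels
print(partial_bce_loss(logits, scribble, mask).item())
```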