2017
DOI: 10.1109/tcsvt.2016.2595324
Saliency Detection for Unconstrained Videos Using Superpixel-Level Graph and Spatiotemporal Propagation

Cited by 165 publications (95 citation statements)
References 65 publications
“…As shown in Table 1, our proposed method is compared with 11 existing static-image salient object detection models including Amulet [48], UCF [49], SRM [40], DSS [13], MSR [22], NLDF [29], R3Net [7], C2SNet [26], RAS [5], DGRL [41], PiCANet [27], and 6 state-of-the-art video SOD algorithms including GAFL [43], SAGE [42], SGSP [28], FCNS [44], FGRNE [23], PDB [33]. Our proposed method is implemented by adopting MGA-tmc module at the positions of MGA-{0-4}, and MGA-m module at the position of MGA-5.…”
Section: Comparison With the State-of-the-art
confidence: 99%
“…The proposed method is evaluated on both DAVIS and FBMS datasets and the results are compared against seven recent state-of-the-art saliency detection methods: deeply supervised salient object detection (DSS) [24], minimum barrier (MB) [71], multi-task deep neural network (MT) [41], local gradient flow optimization (GF) [63], superpixel-level graph (SGSP) [47], geodesic distance based video saliency (SAGE) [66] and fully convolutional networks (FCN) [64]. The first three are still-image saliency detection methods while the last four methods operate on video sequences to predict saliency maps.…”
Section: Performance Comparison
confidence: 99%
“…However, they might lose some important high-level features, such as semantic information, in video sequences. Moreover, some existing methods simply fuse spatial and temporal information with linear or nonlinear combination rules [8], [13], [14], [16], which may ignore their intrinsic relationship because the combination weights are fixed.…”
Section: arXiv:1807.04514v1 [cs.CV] 12 Jul 2018
confidence: 99%
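The fixed-weight fusion that this excerpt criticizes can be sketched in a few lines. This is an illustrative stand-in, not code from any of the cited papers; the function name and the choice of α = 0.5 are assumptions, and the saliency maps are flattened to plain lists for simplicity:

```python
def fuse_saliency(spatial, temporal, alpha=0.5):
    # Fixed-weight linear fusion: alpha is a hand-tuned constant, which is
    # exactly the limitation the excerpt points out -- it cannot adapt to
    # the per-frame reliability of the spatial vs. temporal cue.
    fused = [alpha * s + (1 - alpha) * t for s, t in zip(spatial, temporal)]
    # min-max normalize the fused map to [0, 1]
    lo, hi = min(fused), max(fused)
    return [(v - lo) / (hi - lo) if hi > lo else v for v in fused]

spatial = [0.2, 0.8, 0.4, 0.6]    # toy spatial saliency values
temporal = [0.6, 0.2, 0.8, 0.0]   # toy temporal saliency values
fused = fuse_saliency(spatial, temporal)   # [1/3, 2/3, 1.0, 0.0]
```

Because α is fixed, a frame with unreliable motion (noisy temporal map) is weighted the same as one with clean motion, which is the intrinsic-relationship problem the excerpt raises.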
“…Besides, we make use of one 3D pooling layer and two 3D deconvolutional layers for the last three groups, and all strides of the 3D unpooling layer are set as 1×2×2. Table 1 lists the basic information of four public datasets: DAVIS [22], SegTrackV2 [21], VOT2016 [18], USVD [13].…”
Section: The Deconv3DNet For Saliency Learning
confidence: 99%
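The effect of the 1×2×2 unpooling stride mentioned in the excerpt can be illustrated with a minimal nearest-neighbour sketch. This is a pure-Python stand-in for the actual 3D unpooling/deconvolution layers, not the cited paper's implementation; the function name and the nested-list tensor layout are assumptions:

```python
def unpool3d(x, stride=(1, 2, 2)):
    # Nearest-neighbour 3D unpooling over a (T, H, W) nested list:
    # repeat each frame stride[0] times, each row stride[1] times, and
    # each element stride[2] times. A 1x2x2 stride, as in the excerpt,
    # doubles the spatial resolution while keeping the temporal length.
    st, sh, sw = stride
    out = []
    for frame in x:
        up_frame = []
        for row in frame:
            up_row = [v for v in row for _ in range(sw)]
            up_frame.extend([list(up_row) for _ in range(sh)])
        out.extend([[list(r) for r in up_frame] for _ in range(st)])
    return out

x = [[[1, 2], [3, 4]]]   # one 2x2 frame: T=1, H=2, W=2
y = unpool3d(x)          # T=1, H=4, W=4
```

Keeping the temporal stride at 1 means upsampling recovers spatial detail frame by frame without interpolating new frames, which matches the 1×2×2 setting described above.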