2021
DOI: 10.1609/aaai.v35i4.16408
Locate Globally, Segment Locally: A Progressive Architecture With Knowledge Review Network for Salient Object Detection

Abstract: Salient object location and segmentation are two different tasks in salient object detection (SOD). The former aims to globally find the most attractive objects in an image, whereas the latter can be achieved only using local regions that contain salient objects. However, previous methods mainly accomplish the two tasks simultaneously in a simple end-to-end manner, which leads to the ignorance of the differences between them. We assume that the human vision system orderly locates and segments objects, so we pr…
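The abstract frames SOD as a progressive, two-stage process: locate the salient object globally, then segment it locally. As a rough illustration of that ordering only (not the paper's actual PA-KRN architecture), the hypothetical PyTorch sketch below runs a coarse locator on the full image, crops a bounding box around the located region, and applies a segmenter to that crop; the module names, thresholding heuristic, and batch-size-1 assumption are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocateThenSegment(nn.Module):
    """Hypothetical two-stage 'locate globally, segment locally' wrapper.

    Not the paper's PA-KRN code: module names, the crop heuristic, and the
    batch-size-1 assumption are illustrative only.
    """

    def __init__(self, locator: nn.Module, segmenter: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.locator = locator      # coarse saliency locator (sees the whole image)
        self.segmenter = segmenter  # fine segmenter (sees only the located region)
        self.threshold = threshold

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Stage 1: coarse saliency map over the full image, shape (1, 1, H, W).
        coarse = torch.sigmoid(self.locator(image))

        # Bounding box around the coarsely located salient pixels.
        ys, xs = torch.nonzero(coarse[0, 0] > self.threshold, as_tuple=True)
        if ys.numel() == 0:
            return coarse  # nothing located; fall back to the coarse map
        y0, y1 = int(ys.min()), int(ys.max()) + 1
        x0, x1 = int(xs.min()), int(xs.max()) + 1

        # Stage 2: segment only the local crop, then paste the result back.
        fine_crop = torch.sigmoid(self.segmenter(image[:, :, y0:y1, x0:x1]))
        fine_crop = F.interpolate(fine_crop, size=(y1 - y0, x1 - x0),
                                  mode="bilinear", align_corners=False)
        fine = torch.zeros_like(coarse)
        fine[:, :, y0:y1, x0:x1] = fine_crop
        return fine
```

The actual method couples the two stages more tightly (the title mentions a knowledge review network), so this sketch conveys only the locate-then-segment ordering, not the paper's mechanism.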

Cited by 92 publications (38 citation statements) · References 40 publications

Citation statements (ordered by relevance):
“…To verify the challenges of the proposed ODI-SOD dataset, in Tab. II we list the performance of 20 state-of-the-art (SOTA) 2D SOD and 360° SOD methods on our test set without fine-tuning on our train set. The methods include GCPANet [46], MINet-R [45], ITSD [69], F3Net [70], DFI [71], PFSNet [72], CTDNet [51], VST [50], PAKRN [9], DCN [73], SOD100K [74], PSGLoss [75], SCASOD [76], FastSaliency [77], PurNet [78], PoolNet [79], RCSB [80], ZoomNet [10], TRACER [81] and DDS [7]. From Tab. II we find that none of the listed methods perform well on the ODI-SOD test set, including the 360°-based method DDS [7] trained on the 360-SOD train set, which suggests that currently available models generalize poorly to the proposed dataset.…”
Section: B. Benchmarking Results
Mentioning confidence: 99%
“…In contrast, severely distorted regions can easily lead to segmentation failure due to their apparent differences from existing knowledge and perception. The target objects with discontinuous edge effects and the […] To demonstrate the effectiveness of the proposed method, we selected the five best-performing 2D SOD methods with available training code in Tab. II, i.e., PAKRN [9], PoolNet [79], RCSB [80], ZoomNet [10] and TRACER [81]. Then, for a fair comparison, we fine-tune these five models, DDS [7], and our proposed model on the ODI-SOD train set.…”
Section: B. Benchmarking Results
Mentioning confidence: 99%
“…We compare our proposed method with 16 previous state-of-the-art methods, including Amulet [55], UCF [56], BRN [45], C2SNet [23], AFNet [11], BASNet [37], F3Net [48], CAGNet-R [34], GCPANet [7], ITSD [60], LDF [49], MINet [35], GateNet [59], VST [28], PAKRN [51], and PFSNet [31]. To ensure a fair comparison, all predicted saliency maps are downloaded from the official public websites and evaluated under the same evaluation code and environment.…”
Section: Performance Comparison
Mentioning confidence: 99%
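The comparison protocol quoted above (collecting the authors' predicted saliency maps and scoring everything under one shared evaluation code) typically reports standard SOD metrics such as mean absolute error (MAE). Below is a minimal, self-contained sketch of such an MAE evaluation; the directory layout, file naming, and helper names are assumptions rather than any benchmark's actual evaluation code.

```python
# Minimal MAE evaluation sketch for saliency maps; the directory layout and
# helper names are assumptions, not an official benchmark's evaluation code.
from pathlib import Path

import numpy as np
from PIL import Image


def load_gray(path: Path) -> np.ndarray:
    """Load an image as a grayscale float map in [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0


def mae(pred_dir: str, gt_dir: str) -> float:
    """Mean absolute error between predicted saliency maps and ground-truth masks."""
    scores = []
    for gt_path in sorted(Path(gt_dir).glob("*.png")):
        pred = load_gray(Path(pred_dir) / gt_path.name)
        gt = load_gray(gt_path)
        if pred.shape != gt.shape:  # resize the prediction to the GT resolution
            resized = Image.fromarray((pred * 255).astype(np.uint8)).resize(
                gt.shape[::-1], Image.BILINEAR
            )
            pred = np.asarray(resized, dtype=np.float64) / 255.0
        scores.append(np.abs(pred - gt).mean())
    return float(np.mean(scores))
```

Other common SOD metrics (F-measure, E-measure, S-measure) follow the same pattern of comparing each predicted map against its ground-truth mask at the ground-truth resolution.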