2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00404
A Simple Pooling-Based Design for Real-Time Salient Object Detection

Abstract: We solve the problem of salient object detection by investigating how to expand the role of pooling in convolutional neural networks. Based on the U-shape architecture, we first build a global guidance module (GGM) upon the bottom-up pathway, aiming at providing layers at different feature levels the location information of potential salient objects. We further design a feature aggregation module (FAM) to make the coarse-level semantic information well fused with the fine-level features from the top-down pathw…
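As a hedged illustration of the FAM idea described in the abstract — fusing a feature map with copies of itself average-pooled at several rates — here is a minimal NumPy sketch. This is not the authors' code; the pooling rates, nearest-neighbour upsampling, and function names are assumptions for illustration only.

```python
import numpy as np

def avg_pool(x, k):
    # Non-overlapping k x k average pooling on an (H, W, C) feature map.
    h, w, c = x.shape
    return x[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def upsample(x, k):
    # Nearest-neighbour upsampling by an integer factor k.
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def feature_aggregation(x, scales=(2, 4, 8)):
    # Pool the map at several rates, upsample each result back to the
    # input resolution, and sum everything with the original features.
    out = x.copy()
    for k in scales:
        out += upsample(avg_pool(x, k), k)[: x.shape[0], : x.shape[1]]
    return out

feat = np.random.rand(32, 32, 64)
fused = feature_aggregation(feat)
print(fused.shape)  # (32, 32, 64)
```

The multi-scale sums give each spatial position access to progressively larger receptive fields, which is the role the paper assigns to pooling in the top-down pathway.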



Cited by 867 publications (691 citation statements). References 48 publications.
“…Multi-scale feature representations of CNNs are of great importance to a number of vision tasks including object detection [43], face analysis [4], [41], edge detection [37], semantic segmentation [6], salient object detection [34], [65], and skeleton detection [67], boosting the model performance of those fields.…”
Section: Multi-scale Representations For Vision Tasks
confidence: 99%
“…To validate the proposed RGB-T salient detection model, we compare our model with 10 SOTA methods, which are further divided into three types, i.e., (1) RGB salient object detection methods: PoolNet [39], R3Net [40], and CPDNet [41]; (2) RGB-D salient object detection methods: AFNet [45], TSAA [46], PDNet [47], and SSRC [48]; and (3) RGB-T salient object detection methods: MFSR [28], GCL [49], and MRCM [27]. For fair comparisons, we modify these RGB and RGB-D salient object detection methods for RGB-T saliency detection.…”
Section: Comparison With the State-of-the-Art Methods
confidence: 99%
“…Recently, to extract more sophisticated features, numerous deep-learning-based saliency detectors have been proposed [19]- [25], [39]- [41], and achieved substantially better performance than those previous methods. For example, Lee et al [23] proposed to first encode a low-level distance map and high-level semantic features of deep CNNs to form a new feature vector, and then evaluate saliency by a multi-level fully connected neural network classifier.…”
Section: Related Work (A. RGB Salient Object Detection)
confidence: 99%
“…We compare the effectiveness of the DMS model with 16 saliency methods, including 12 traditional saliency algorithms (ITTI [40], LC [41], SR [42], AC [43], FT [44], MSS [45], PHOT [46], HC [47], RC [47], SF [48], BMS [49], and MBP [50]), and 8 deep learning methods (U-Net [51], FCN [19], R3Net [24], DSS [33], PiCANet [52], BASNet [29], PoolNet [53], and EGNet [26]). We implement the traditional saliency algorithm through the toolbox provided in [16].…”
Section: Model Comparison
confidence: 99%