2017
DOI: 10.1109/tip.2017.2711277
Depth-Aware Salient Object Detection and Segmentation via Multiscale Discriminative Saliency Fusion and Bootstrap Learning

Abstract: This paper proposes a novel depth-aware salient object detection and segmentation framework via multiscale discriminative saliency fusion (MDSF) and bootstrap learning for RGBD images (RGB color images with corresponding Depth maps) and stereoscopic images. By exploiting low-level feature contrasts, mid-level feature weighted factors and high-level location priors, various saliency measures on four classes of features are calculated based on multiscale region segmentation. A random forest regressor is learned …

Cited by 223 publications (89 citation statements)
References 45 publications
“…As mentioned above, these approaches can be roughly divided into early fusion, middle fusion, and late fusion. Early fusion treats the depth map as an additional channel concatenated with RGB as the initial input, e.g., [47]. Late fusion applies two separate backbone networks to RGB and depth to generate individual features or predictions, which are then fused for the final prediction, such as [20] [14].…”
Section: RGB-D Salient Object Detection
confidence: 99%
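The early-fusion strategy described above can be sketched in a few lines. This is a minimal illustration with toy array shapes (not code from any of the cited works): the depth map is simply stacked as a fourth channel so a single network receives one 4-channel input.

```python
import numpy as np

# Toy inputs: an RGB image and its corresponding single-channel depth map.
H, W = 64, 64
rgb = np.random.rand(H, W, 3)    # RGB color image, 3 channels
depth = np.random.rand(H, W, 1)  # depth map, 1 channel

# Early fusion: concatenate depth as an additional channel, producing a
# single 4-channel RGBD input for one backbone network.
rgbd = np.concatenate([rgb, depth], axis=-1)
print(rgbd.shape)  # (64, 64, 4)
```

Late fusion, by contrast, would pass `rgb` and `depth` through two separate backbones and merge their outputs only at the prediction stage.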
“…Although several novel CNN-based SOD approaches [42] [62] have been proposed for RGB-D data recently, the optimal way to fuse RGB and depth information remains an open issue, which lies in two aspects: model incompatibility and redundancy, and low-quality depth maps. Most existing fusion strategies can be classified into early fusion [45] [47], late fusion [20] [14], and middle fusion [1] [3] [2] [42]. Recent research mainly focuses on middle fusion, where a separate backbone network pre-trained on ImageNet [9] is usually utilized to extract depth features, which may cause incompatibility problems due to the inherent modality difference between RGB and depth images [62].…”
Section: Introduction
confidence: 99%
“…DMRA [32], CPFP [39], TANet [4], PCF [3], MMCI [5], CTMF [16], AFNet [37], DF [33]. In addition, we also report SE [15], ACSD [21], LBE [12], DCMC [8], MDSF [36] which are traditional RGB-D salient object detection works using various hand-crafted features.…”
Section: Comparisons With State-of-the-art Methods
confidence: 99%
“…Peng et al [26] propose a novel structured matrix decomposition method [27] [28] with two regularizations: (1) a tree-structured sparsity-inducing regularization that absorbs the image structure [29] and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization [30] that increases the gap between salient objects and the background. Song et al [31] propose a depth-aware salient object detection and segmentation framework [32] [33] [40] via multiscale discriminative saliency fusion and bootstrap learning for RGBD images [34]. Hou et al [35] present a deeply supervised salient object detection network with short connections.…”
Section: Introduction
confidence: 99%
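The Laplacian regularization mentioned in the excerpt above follows a standard graph-smoothness idea: penalize saliency differences between similar regions. The sketch below uses a toy 3-region affinity matrix and toy saliency scores (assumed values, not from any cited paper) to show the identity s^T L s = 0.5 * Σ_ij W_ij (s_i − s_j)^2.

```python
import numpy as np

# Toy pairwise similarities between 3 image regions (symmetric affinity
# matrix W; larger value = more similar regions).
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])

D = np.diag(W.sum(axis=1))  # degree matrix
L = D - W                   # unnormalized graph Laplacian

s = np.array([0.8, 0.7, 0.1])  # toy per-region saliency scores

# Laplacian regularizer: small when similar regions receive similar
# saliency, large when they disagree.
reg = float(s @ L @ s)
print(reg)  # 0.13 = 0.9*(0.8-0.7)**2 + 0.1*(0.8-0.1)**2 + 0.2*(0.7-0.1)**2
```

In the decomposition method of Peng et al., such a term is added to the objective so that salient regions and background regions are pushed toward distinct saliency levels.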