2021
DOI: 10.1016/j.neucom.2021.04.052

Circular Complement Network for RGB-D Salient Object Detection

Cited by 14 publications (5 citation statements)
References 39 publications
“…For example, PGARNet [8] proposed a progressive guided alternating refinement network for RGB-D salient object detection, which employs recursive and alternating strategies to progressively enhance the accuracy and consistency of saliency prediction. Paper [9] introduced a calibrated RGB-D salient object detection method, which achieves more accurate saliency prediction by jointly learning the calibration mapping between RGB and depth features. CDINet [10] proposed a cross-modal inconsistent interaction network for RGB-D salient object detection.…”
Section: RGB-D Salient Object Detection (mentioning)
confidence: 99%
“…Traditional models based on handcrafted features have limited expressive power, resulting in subpar performance in complex scenes [16], [17], [18], [20]. With the popularity of deep learning, various deep learning-based cross-modal fusion methods have been proposed [8], [9], [10], [11], [12], [13], [19]. For example, literature [36], [38], [44] introduce multiscale fusion methods that fully leverage depth and color information at different scales to improve the performance of salient object detection.…”
Section: Introduction (mentioning)
confidence: 99%
“…Despite some improvements, most feature fusion based RGB-D SOD models mentioned above mainly focus on capturing the complementary information within the multi-modality input images, while ignoring the impacts of image qualities on the representation ability of fused features, thus degrading the subsequent saliency detection performance. Recently, some studies have been carried out on the disturbing problem caused by the low-quality images [13], [19]- [21], [54]- [57]. For example, Zhao et al [19] designed a contrast enhancement module with contrast prior information to enhance the quality of depth images, thus boosting the saliency detection performance.…”
Section: B. RGB-D Salient Object Detection (mentioning)
confidence: 99%
“…Rather than directly enhancing low-quality depth images, Fan et al [13] proposed a depth depurator unit to reduce the impact of low-quality depth images on the saliency detection performance at the result-level. Bai et al [54] and Li et al [55] employed RGB features to filter distractors in depth features prior to exploiting cross-modal complementarity. Chen et al…”
Section: B. RGB-D Salient Object Detection (mentioning)
confidence: 99%
“…In the past decades, the visual saliency detection theory [13,14], aiming to highlight the most attractive and distinctive regions in a scene, has been widely used in the field of AD [15,16]. Among them, the context-aware saliency detection [17] is commonly adopted to act as the technique to search for salient objects, due to its powerful performance.…”
Section: Introduction (mentioning)
confidence: 99%