2021
DOI: 10.1007/s41095-020-0200-x
WGI-Net: A weighted group integration network for RGB-D salient object detection

Abstract: Salient object detection is used as a pre-processing step in many computer vision tasks (such as salient object segmentation, video salient object detection, etc.). When performing salient object detection, depth information can provide clues to the location of target objects, so effective fusion of RGB and depth feature information is important. In this paper, we propose a new feature information aggregation approach, weighted group integration (WGI), to effectively integrate RGB and depth feature information. We use…
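The abstract is truncated, so the exact WGI design is not shown here. As a rough illustration only, the sketch below shows one plausible form of weighted group integration: grouped RGB and depth feature maps fused by learned per-group weights. The module name, the grouping scheme, and the attention-style weighting are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class WeightedGroupFusion(nn.Module):
    """Illustrative sketch: split RGB and depth feature maps into channel
    groups, weight each group by a learned score, and sum. This is an
    assumption of how a 'weighted group integration' could look, not the
    module from the WGI-Net paper."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        # One scalar weight per group, predicted from globally pooled features.
        self.weight_rgb = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, groups, kernel_size=1),
            nn.Sigmoid(),
        )
        self.weight_depth = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, groups, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb.shape
        g = self.groups
        w_rgb = self.weight_rgb(rgb).view(b, g, 1, 1, 1)
        w_dep = self.weight_depth(depth).view(b, g, 1, 1, 1)
        rgb_g = rgb.view(b, g, c // g, h, w)
        dep_g = depth.view(b, g, c // g, h, w)
        fused = w_rgb * rgb_g + w_dep * dep_g  # weighted sum, group by group
        return fused.view(b, c, h, w)

# Usage: fuse 64-channel RGB and depth features at 32x32 resolution.
fuse = WeightedGroupFusion(channels=64, groups=4)
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```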

Cited by 10 publications (5 citation statements) · References 55 publications
“…After filtering the papers using the inclusion and exclusion criteria already discussed, 24 papers remained to be reviewed, listed in Table 2. Among them, 19 papers were found that are specifically dedicated to edge detection, either using multiple-descriptor extraction and aggregation or based on fuzzy set theory (type-2 fuzzy and neutrosophic sets) [44,47,80–90], or works that use clustering and pre-aggregation functions, which are dedicated to region segmentation but consider that their characteristics can be extended to the task of edge detection [46,48,51,52,91,92]. The found methods of segmentation or edge detection based on aggregation and pre-aggregation functions can be divided into three groups: (i) based on the aggregation of distance functions and FCM, (ii) multiple-descriptor extraction and aggregation, and (iii) based on fuzzy set theory: type-2 fuzzy and neutrosophic sets.…”
Section: Results
confidence: 99%
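Group (i) above combines fuzzy c-means (FCM) with aggregated distance functions. As a minimal sketch of that idea, the snippet below swaps FCM's usual squared Euclidean distance for a weighted mean of two component distances; the choice of components, the weights, and the function names are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def aggregated_distance(x, c, weights=(0.5, 0.5)):
    """Aggregate two component distances (L1 and squared L2) by a
    weighted mean. Components and weights are illustrative choices."""
    d1 = np.abs(x - c).sum(axis=-1)    # L1 distance
    d2 = ((x - c) ** 2).sum(axis=-1)   # squared L2 distance
    return weights[0] * d1 + weights[1] * d2

def fcm_memberships(X, centers, m=2.0):
    """FCM-style membership update, u[i,a] = 1 / sum_b (d[i,a]/d[i,b])**(1/(m-1)),
    treating the aggregated value as a squared-distance surrogate.
    X: (n, f) samples, centers: (k, f) cluster prototypes."""
    d = aggregated_distance(X[:, None, :], centers[None, :, :])  # (n, k)
    d = np.maximum(d, 1e-12)                  # avoid division by zero
    ratio = d[:, :, None] / d[:, None, :]     # (n, k, k)
    return 1.0 / (ratio ** (1.0 / (m - 1))).sum(axis=2)

X = np.random.rand(100, 3)
centers = np.random.rand(2, 3)
U = fcm_memberships(X, centers)
print(U.shape, U.sum(axis=1)[:3])  # memberships per sample sum to 1
```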
“…The use of depth information was also explored by [83] for the detection of region boundaries. According to the authors, accurately detecting the borders of salient regions of an image and distinguishing between objects is an extremely difficult task, especially with complex backgrounds or when foreground and background objects have low contrast with each other.…”
Section: Definition 1 (C_T-Integral)
confidence: 99%
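The section label above refers to the C_T-integral from the pre-aggregation literature, which generalizes the discrete Choquet integral by replacing its product with a t-norm T: C_T(x) = sum_i T(x_(i) - x_(i-1), m(A_(i))), where x is sorted increasingly with x_(0) = 0 and A_(i) holds the indices of the n - i + 1 largest inputs. The sketch below evaluates that definition with the minimum t-norm and a simple cardinality measure; both choices, and the function names, are illustrative assumptions rather than anything stated in this report.

```python
import numpy as np

def t_norm_min(a, b):
    """Minimum t-norm; with T(a, b) = a * b the C_T-integral reduces
    to the ordinary Choquet integral."""
    return min(a, b)

def cardinality_measure(subset_size, n):
    """Simple symmetric fuzzy measure m(A) = |A| / n (an assumption;
    any monotone measure with m(empty) = 0 and m(full) = 1 works)."""
    return subset_size / n

def ct_integral(x, t_norm=t_norm_min):
    """Discrete C_T-integral: sort x increasingly and accumulate
    T(x_(i) - x_(i-1), m(A_(i)))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    prev = 0.0
    total = 0.0
    for i, xi in enumerate(x):
        measure = cardinality_measure(n - i, n)  # |A_(i)| = n - i here (0-indexed)
        total += t_norm(xi - prev, measure)
        prev = xi
    return total

print(ct_integral([0.2, 0.8, 0.5]))  # aggregated value in [0, 1]
```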
“…
Method           CUHK Avenue   ShanghaiTech
VEC [16]         90.2          74.8
Conv-VRNN [17]   85.8          -
MNAD-P [18]      88.5          70.5
AMDN [19]        84.6          -
Conv2D-AE [6]    70.2          -
StackRNN [20]    80.9          …”
Section: Methods
confidence: 99%
“…The first ones are based on density estimation and probability models including VEC [16] and Conv-VRNN [17]. The second ones are single-class classification-based methods including MNAD-P [18] and AMDN [19]. The third ones are reconstruction-based methods including Conv2D-AE [6] and StackRNN [20].…”
Section: Methods
confidence: 99%
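For the third, reconstruction-based category, the common recipe is to train an autoencoder on normal frames only and score test frames by their reconstruction error. The sketch below shows that scoring step with a toy autoencoder; the tiny network and the 2-sigma threshold are illustrative assumptions, not the Conv2D-AE or StackRNN models themselves.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained convolutional autoencoder (in practice this
# would be a model like Conv2D-AE, trained on normal frames only).
autoencoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
    nn.Sigmoid(),
)

@torch.no_grad()
def anomaly_score(frames: torch.Tensor) -> torch.Tensor:
    """Per-frame mean squared reconstruction error; higher = more anomalous."""
    recon = autoencoder(frames)
    return ((frames - recon) ** 2).flatten(1).mean(dim=1)

frames = torch.rand(4, 1, 64, 64)                     # batch of grayscale frames
scores = anomaly_score(frames)
flagged = scores > scores.mean() + 2 * scores.std()   # simple threshold (assumption)
print(scores, flagged)
```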