2023
DOI: 10.1109/tcyb.2022.3169431
Global-and-Local Collaborative Learning for Co-Salient Object Detection

Cited by: 53 publications (16 citation statements)
References: 73 publications
“…We also identify other two-modality SOD tasks (e.g., image-video optical flow SOD [89], text-image SOD [90], audio-image SOD [91]) and the similar but distinct image-pair co-saliency task [92]-[94]. Therefore, two-modality SOD is worthy of further study.…”
Section: B. Two-modality Salient Object Detection
Mentioning confidence: 99%
“…where G is the ground truth, and bce is the BCE loss as defined in [22], [24]. During the testing phase, we only utilize the prediction of the RGB-D stream as the final saliency map.…”
Section: E. Loss Function
Mentioning confidence: 99%
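The excerpt above truncates the equation it refers to; for context only, a conventional per-pixel binary cross-entropy between a predicted saliency map S and the ground truth G would read as below. The exact form defined in the cited refs [22], [24] may add weighting; S and the pixel index i are notation introduced here, not taken from the excerpt:

\mathcal{L}_{\mathrm{bce}}(S, G) = -\sum_{i}\big[\, G_i \log S_i + (1 - G_i) \log (1 - S_i) \,\big]

where S_i \in (0, 1) is the predicted saliency probability at pixel i and G_i \in \{0, 1\} is the corresponding ground-truth label.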
“…With the development of SOD research, many subtasks have also been developed, such as co-salient object detection (CoSOD) [20]-[22], remote sensing SOD [23]-[27], video SOD [28]-[30], and light field SOD [31]. In fact, the natural binocular structure of humans can also perceive the depth of field of a scene and thereby generate stereo perception.…”
Section: Introduction
Mentioning confidence: 99%
“…With the success of unified models in upstream tasks (Ren et al. 2015; Xiao et al. 2017), the latest CoSOD models try to address salient object detection and common object detection in a unified framework (Fan et al. 2021, 2022; Zhang et al. 2020c). Despite the promising performance achieved by these methods, most of them focus only on learning better consistent feature representations within an individual group (Zhang et al. 2020c; Wei et al. 2017; Zhang et al. 2020b; Cong et al. 2022; Jin et al. 2020; Tang et al. 2022), which may make them suffer from the following limitations. First, images from the same group can only act as positive samples for each other.…”
Section: Introduction
Mentioning confidence: 99%