2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00321
Co-Saliency Detection via Mask-Guided Fully Convolutional Networks With Multi-Scale Label Smoothing

Cited by 81 publications (76 citation statements) | References 46 publications
“…Early-fusion techniques [70,89] initially extract a global representation of all the images in the input group, capturing relationships between different images. Conversely, late-fusion techniques [67,92,94,95] are designed to estimate single-image saliency from each input individually, and reciprocally update them in a second phase, based on the extracted information.…”
Section: Methods for Co-saliency (mentioning, confidence: 99%)
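The early-fusion versus late-fusion distinction described in the excerpt above can be illustrated with a toy sketch (hypothetical code, not taken from any cited paper; the function names and the mean-based group representation and update rule are illustrative assumptions, stand-ins for the learned networks the papers actually use):

```python
import numpy as np

def early_fusion_cosaliency(features):
    """Early fusion (illustrative): build a global group representation
    from all images first, then score each image against it."""
    group_rep = np.mean(np.stack(features), axis=0)  # shared group representation
    # Per-image co-saliency score: closeness of each image to the group
    return [1.0 / (1.0 + np.linalg.norm(f - group_rep)) for f in features]

def late_fusion_cosaliency(features, single_saliency):
    """Late fusion (illustrative): estimate single-image saliency for each
    input individually, then reciprocally update the estimates using
    information shared across the group."""
    scores = [single_saliency(f) for f in features]   # phase 1: per-image estimates
    shared = np.mean(scores)                          # information extracted from the group
    return [0.5 * (s + shared) for s in scores]       # phase 2: reciprocal update
```

The sketch only captures the ordering of the two phases: early fusion aggregates before scoring, late fusion scores before aggregating.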
“…Another possible criterion to discriminate among different approaches is the distinction between deep-learning solutions and those based on hand-crafted design and traditional techniques. Methods in the deep-learning group [67,70,94,95] typically benefit from end-to-end learning, thereby optimizing the final objective of co-saliency estimation regardless of the adopted early-fusion or late-fusion approach. Many are based on the Fully-Convolutional Network (FCN) by Long et al [97] or DeepLab by Chen et al [18], both leveraging the VGG backbone [75].…”
Section: Methods for Co-saliency (mentioning, confidence: 99%)
“…Learning-based methods [21], [34]-[37], [49]-[52] have attracted increasing attention, since machine learning has proven far more effective and achieved great success in co-saliency detection. Wei et al [35] extracted co-saliency correspondence with an end-to-end fully convolutional network architecture.…”
Section: B. Co-saliency Detection (mentioning, confidence: 99%)
“…Feature Vector Calculation: the texture feature is added to the KNN matting method, so in the HSV colour space the feature vector of a given pixel p_i can be expressed as a seven-dimensional vector V(p_i), where (x, y) is the spatial coordinate of pixel p_i and T is the texture feature obtained by Equation (14).…”
Section: The Linear Filter Response of the Template at Point (mentioning, confidence: 99%)
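The per-pixel feature vector described in this excerpt can be sketched as follows (hypothetical code; the excerpt calls V(p_i) seven-dimensional but only names HSV colour, the spatial coordinates (x, y), and the texture value T, so this sketch returns only those named components and does not guess the remaining layout):

```python
import colorsys

def pixel_feature_vector(r, g, b, x, y, texture):
    """Illustrative per-pixel feature vector for KNN matting:
    HSV colour, spatial coordinates, and a texture value.
    The cited paper's exact seven-dimensional layout is not
    reproduced here; only the components named in the excerpt are."""
    # Convert 8-bit RGB to HSV in [0, 1]
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (h, s, v, float(x), float(y), float(texture))
```

Augmenting the colour feature with spatial coordinates lets the KNN search prefer neighbours that are both similar in colour and nearby in the image, while the texture term separates regions that colour alone cannot.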