2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01215
Inferring Attention Shift Ranks of Objects for Image Saliency

Abstract: Psychology studies and behavioural observations show that humans shift their attention from one location to another when viewing an image of a complex scene. This is due to the limited capacity of the human visual system to process multiple visual inputs simultaneously. The sequential shifting of attention over objects in non-task-oriented viewing can be seen as a form of saliency ranking. Although methods have been proposed for predicting saliency rank, they are not able to model this human attention shift …
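The ranking idea described in the abstract can be illustrated with a toy computation. The sketch below is not the paper's model; the function name, fixation format, and mask format are hypothetical. It simply assigns each object a rank equal to the order in which a viewer's attention first lands on it, with never-fixated objects ranked last.

# Illustrative sketch only (hypothetical helper, not the paper's method):
# rank objects by the order in which attention first shifts onto them.
import numpy as np

def attention_shift_ranks(fixations, object_masks):
    """Return a rank per object (1 = attended first); unfixated objects come last.

    fixations    -- list of (x, y) image coordinates in temporal order
    object_masks -- list of boolean (H, W) arrays, one instance mask per object
    """
    first_hit = [None] * len(object_masks)
    for t, (x, y) in enumerate(fixations):
        for i, mask in enumerate(object_masks):
            if first_hit[i] is None and mask[int(y), int(x)]:
                first_hit[i] = t  # time of the first fixation landing on object i
    # Earlier first fixation -> higher saliency (smaller rank number).
    order = sorted(range(len(object_masks)),
                   key=lambda i: first_hit[i] if first_hit[i] is not None else float("inf"))
    ranks = [0] * len(object_masks)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

# Toy usage: attention lands on object 1 before object 0.
m0 = np.zeros((4, 4), dtype=bool); m0[0, 0] = True
m1 = np.zeros((4, 4), dtype=bool); m1[3, 3] = True
print(attention_shift_ranks([(3, 3), (0, 0)], [m0, m1]))  # -> [2, 1]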

Cited by 38 publications (66 citation statements)
References 66 publications
“…The current popular saliency detection datasets do not contain depth information. It could therefore be concluded that the study of saliency detection algorithms in relation to RGB-D content is seen as a current challenge for this particular area [21] [23] [13] and the dataset being proposed in this paper aims to explore this further.…”
Section: A. Saliency Detection Datasets
confidence: 99%
“…Salient Instance Detection (SID) goes further from SOD as it aims to differentiate individual salient instances. This instance-level saliency information can benefit vision tasks that require fine-grained scene understanding, e.g., object rank [41], image captioning [21], image editing [59] and semantic segmentation [43]. However, existing SID methods [11,25,62] still rely on large-scale annotated ground truth masks in order to learn how to segment salient instances with their boundaries delineated.…”
Section: Introduction
confidence: 99%
“…(d) RSDNet [12], (e) ASSR [13], (f) Ours. In [12], the authors predicted pixel-wise saliency ranking, without actually differentiating object instances. (e): In [13], saliency ranking is inferred based on attention shift, and fewer than five objects are considered.…”
Section: Introduction
confidence: 99%