Proceedings of the 2019 3rd International Conference on Innovation in Artificial Intelligence
DOI: 10.1145/3319921.3319932
An Element Sensitive Saliency Model with Position Prior Learning for Web Pages

Abstract: Understanding human visual attention is important for multimedia applications. Many studies have attempted to build saliency prediction models for natural images. However, limited effort has been devoted to saliency prediction for Web pages, which are characterized by diverse content elements and spatial layouts. In this paper, we propose a novel end-to-end deep generative saliency model for Web pages. To capture position biases introduced by page layouts, a Position Prior Learning (PPL) sub-network is propose…
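The abstract describes combining content-driven saliency with a learned position prior that captures layout-induced biases. As an illustration only (this is not the paper's actual PPL sub-network; the function name and the softmax normalization are our own assumptions), the general idea can be sketched as adding a learned per-pixel bias map to content saliency logits:

```python
import numpy as np

def apply_position_prior(content_logits, position_prior):
    """Illustrative sketch only (not the paper's architecture): combine
    content-driven saliency logits with a learned H x W position-prior
    map, then normalize into a saliency distribution via softmax."""
    logits = content_logits + position_prior   # layout bias added per pixel
    e = np.exp(logits - logits.max())          # numerically stable softmax
    return e / e.sum()
```

In a trained model the prior map would be learned from data; here it simply shifts probability mass toward layout-favored regions (e.g., the top-left of a page).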

Cited by 4 publications (5 citation statements)
References 24 publications
“…Such models perform best on this kind of visual scene, whereas their performance is significantly reduced when the input stimulus does not belong to such a category, such as webpages, UAV (Unmanned Aerial Vehicle) imagery [19], or comics [20], to name a few. To cope with the lack of generalisation of visual attention models, it is common to fine-tune deep saliency models with eye-tracking data collected over the target visual scenes, such as comics [20] or webpages [21].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Zheng et al. [33] collected human fixations under various tasks such as shopping and form-filling, and predicted saliency based on these task labels. Gu et al. [4] captured page biases using a variational autoencoder and used feature detectors to predict webpage saliency. Xia et al.'s [5] saccadic model predicts scanpaths while viewing webpages; it is trained on hand-crafted features and is not interpretable.…”
Section: B. Attention Prediction on Graphic Designs (mentioning)
Confidence: 99%
“…We compare our AGD-F model with natural saliency models: Itti [16], Deep Gaze II [38], SalGAN [56], DVA [57], SAM-ResNet [12], UAVDSM [37], DI Net [39], and EML-NET [58]; webpage saliency models: MKL [2], MMF [3], SPPL [4], and TaskWebSal (FreeView) [33]; and the graphic design saliency model UMSI [10]. We compare quantitatively on the metrics Normalized Scanpath Saliency (NSS), Cross Correlation (CC), KL divergence (KL), Judd AUC (AUC-J) [59], and Shuffled AUC (sAUC).…”
Section: A. Evaluation of the Saliency Prediction Model (mentioning)
Confidence: 99%
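Two of the metrics named in the quoted comparison, NSS and CC, have compact definitions: NSS is the mean z-scored saliency value at fixated pixels, and CC is the Pearson correlation between the predicted map and the ground-truth fixation density. A minimal NumPy sketch (function names and the epsilon smoothing are our own, not from the cited papers):

```python
import numpy as np

def nss(sal_map, fix_map):
    """Normalized Scanpath Saliency: average z-scored saliency value
    at the fixated pixels (fix_map is a binary fixation map)."""
    z = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-8)
    return float(z[fix_map.astype(bool)].mean())

def cc(sal_map, gt_density):
    """Cross Correlation: Pearson correlation between the predicted
    saliency map and the ground-truth fixation density map."""
    a = sal_map - sal_map.mean()
    b = gt_density - gt_density.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-8))
```

Higher NSS and CC indicate better agreement with human fixations; KL and the AUC variants would be computed analogously from the same pair of maps.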
“…Metadata saliency describes the confidence of a recommendation in the transferred domain, which reflects the graph-based relationships among items. It is generally known that visual saliency represents position-based visual preference [20]. As such, methods for predicting eye fixation maps on websites or mobile interfaces [21][22][23] have been studied, and the relation between the probability of mouse-pointer movement and eye fixation has been discussed [24].…”
Section: Introduction (mentioning)
Confidence: 99%