2019 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW)
DOI: 10.1109/icce-tw46550.2019.8991811

User-Specific Visual Attention Estimation Based on Visual Similarity and Spatial Information in Images

Cited by 5 publications (7 citation statements) · References 6 publications
“…[Results table, partially reproduced in this excerpt:]
Method                        | Sim↑  | KLdiv↓ | CC↑
Signature [1]                 | 0.412 | 8.04   | 0.413
GBVS [2]                      | 0.447 | 6.89   | 0.437
Itti [3]                      | 0.391 | 9.04   | 0.322
SalGAN [4]                    | 0.569 | 3.56   | 0.635
Baseline1 [29]                | 0.503 | 4.13   | 0.597
Baseline2 [30]                | 0.417 | 7.64   | 0.401
FPSP based on similarity [18] | …     | …      | …
… [23]. Among the images in the dataset, 500 images were randomly selected as test images, and the remaining 1100 images were used as training images.…”
Section: Methods (mentioning)
confidence: 99%
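The Sim, KLdiv, and CC columns in the excerpt above are the standard saliency-map evaluation metrics (histogram intersection, Kullback-Leibler divergence, and Pearson correlation). The citing papers do not reproduce their exact implementations, so the following is only a minimal sketch of the usual definitions; the normalization helper, function names, and epsilon smoothing are my own assumptions.

```python
import numpy as np

def _to_distribution(sal_map, eps=1e-12):
    """Shift a saliency map to non-negative values and normalize it to sum to 1."""
    m = sal_map.astype(np.float64)
    m = m - m.min()
    return m / (m.sum() + eps)

def sim(pred, gt):
    """Sim: histogram intersection of two saliency maps; higher is better."""
    return float(np.minimum(_to_distribution(pred), _to_distribution(gt)).sum())

def kl_div(pred, gt, eps=1e-12):
    """KLdiv: divergence of the predicted map from the ground-truth map; lower is better."""
    p, q = _to_distribution(pred), _to_distribution(gt)
    return float(np.sum(q * np.log(eps + q / (p + eps))))

def cc(pred, gt, eps=1e-12):
    """CC: Pearson correlation coefficient between two saliency maps; higher is better."""
    p = (pred - pred.mean()) / (pred.std() + eps)
    q = (gt - gt.mean()) / (gt.std() + eps)
    return float(np.mean(p * q))
```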
“…Furthermore, we adopted the following three PSM prediction methods using a small amount of gaze data. Baseline1: a PSM prediction method using relationships between parts of the image and the whole image [29]. Baseline2: a PSM prediction method using visual similarities [30]. PSM based on similarity: a PSM prediction method similar to the proposed method; the difference from our method is that it uses a simple similarity of gaze tendency based on the correlation coefficient of PSMs between the target person and other persons [18]. Note that these PSM prediction methods were trained with only the images selected via AIS, in the same way as our method.…”
Section: Methods (mentioning)
confidence: 99%
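The "PSM based on similarity" baseline above scores how alike two persons' gaze tendencies are via the correlation coefficient of their PSMs. The excerpt does not give the computation in detail; below is a minimal sketch under the assumption that each person's PSMs are stored as 2-D NumPy arrays keyed by image ID (the data layout and function names are hypothetical).

```python
import numpy as np

def psm_correlation(psm_a, psm_b, eps=1e-12):
    """Pearson correlation between two persons' PSMs for the same image."""
    a = psm_a.astype(np.float64).ravel()
    b = psm_b.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))

def rank_persons_by_gaze_similarity(target_psms, training_psms):
    """Rank training persons by their average PSM correlation with the target.

    target_psms:   dict image_id -> PSM of the target person
    training_psms: dict person_id -> dict image_id -> PSM of that person
    """
    scores = {}
    for person_id, psms in training_psms.items():
        shared = [img for img in target_psms if img in psms]
        if shared:
            scores[person_id] = float(np.mean(
                [psm_correlation(target_psms[img], psms[img]) for img in shared]
            ))
    # Most similar training persons first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```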
“…[Results table, partially reproduced in this excerpt:]
Method                     | Sim↑  | KLdiv↓ | CC↑
Signature [3]              | 0.412 | 8.04   | 0.413
GBVS [4]                   | 0.447 | 6.89   | 0.437
Itti [5]                   | 0.391 | 9.04   | 0.322
SalGAN [6]                 | 0.569 | 3.56   | 0.635
Contextual [28]            | 0.580 | 3.57   | 0.674
Baseline1 [31]             | 0.503 | 4.13   | 0.597
Baseline2 [32]             | 0.417 | 7.64   | 0.401
Similarity-based FPSP [19] | …     | …      | …
… [25], and its settings of the mini-batch size, the momentum, the number of layers L, the learning rate, and the number of iterations were 9, 0.9, 3, 3.0×10^-5, and 1000, respectively. Furthermore, SU(X) was calculated as the average of the PSMs of the training persons to eliminate the effect of USM calculation errors.…”
Section: Experimental Settings (mentioning)
confidence: 99%
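The last sentence of the excerpt states that SU(X) was obtained by averaging the training persons' PSMs rather than by running a USM predictor. A minimal sketch of that pixel-wise averaging, assuming the PSMs are same-sized 2-D NumPy arrays (the function name is my own):

```python
import numpy as np

def average_psm(training_psms):
    """SU(X): pixel-wise mean of the training persons' PSMs for one image.

    training_psms: iterable of 2-D arrays of identical shape, one per person.
    """
    stacked = np.stack([np.asarray(p, dtype=np.float64) for p in training_psms])
    return stacked.mean(axis=0)
```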
“…In addition to the USM prediction methods, we used the following four PSM prediction methods with a small amount of eye tracking data. Baseline1: A PSM prediction method based on both local and global information of input images [31]. Baseline2: A PSM prediction method based on visual similarities of the target and training images [32].…”
Section: Experimental Settings (mentioning)
confidence: 99%
“…Consequently, FPSP based on AIS for a new target person can be realized with high accuracy from a small amount of training data. It should be noted that this paper is an extended version of [22]. Specifically, we enable novel PSM prediction for the target person from the PSMs predicted for similar persons, based on the multi-task CNN.…”
Section: Introduction (mentioning)
confidence: 99%