2020
DOI: 10.1007/978-3-030-58604-1_40
Measuring the Importance of Temporal Features in Video Saliency

Cited by 7 publications (2 citation statements)
References 40 publications
“…Their conclusion was that better predictions were obtained by considering the eyes and gaze of people in the image. More recent work in video saliency uses deep learning to better mimic human perception [GC18], but predicting the spectator's gaze while viewing cinematographic content remains a challenging task [TKWB20], further complicated by high-level narrative engagement [LLMS15].…”
Section: Eye-tracking (mentioning)
confidence: 99%
“…Specific methods for the dynamic case have been studied [28, 29, 30, 31, 32, 33] and, very recently, unified image-video approaches have been proposed [34], but only in the context of spatial salience. For gaze prediction, temporal features are found to be of key importance only in rare events, so spatial static features can explain gaze in most cases [35]. At the same time, features derived from deep learning models that exploit temporal information have been found to benefit gaze estimation over static-only features [36].…”
Section: Related Work (mentioning)
confidence: 99%