2016
DOI: 10.1109/lsp.2016.2523339
Novelty-based Spatiotemporal Saliency Detection for Prediction of Gaze in Egocentric Video

Cited by 15 publications (8 citation statements)
References 18 publications
“…Gaze data in first-person wearable systems can aid in temporal video segmentation [17], and its computational prediction has been studied [18]. The gaze data of the wearer of an egocentric camera have been used to score the importance of the frames, as the input to a fast-forward algorithm [19].…”
Section: Related Work
confidence: 99%
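The fast-forward use of gaze mentioned in this excerpt can be made concrete with a minimal sketch: score each frame by how stable the wearer's gaze is around it (stable gaze suggests attention), then keep only the highest-scoring frames. The window size, the dispersion-based stability measure, and the selection ratio below are illustrative assumptions, not details of the cited method [19].

```python
import numpy as np

def frame_importance(gaze_xy, window=5):
    """Score each frame by gaze stability: low gaze dispersion within a
    temporal window around the frame suggests the wearer is attending to
    something, so the frame gets a higher importance score.

    gaze_xy: (N, 2) array of per-frame gaze points in image coordinates.
    Returns an (N,) array of importance scores in (0, 1].
    """
    n = len(gaze_xy)
    scores = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        # Dispersion = spread (width + height) of gaze points in the window.
        dispersion = np.ptp(gaze_xy[lo:hi], axis=0).sum()
        scores[i] = 1.0 / (1.0 + dispersion)  # stable gaze -> high score
    return scores

def fast_forward(frames, scores, keep_ratio=0.25):
    """Keep the top `keep_ratio` fraction of frames, preserving temporal order."""
    k = max(1, int(len(frames) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])
    return [frames[i] for i in keep]
```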
“…Visual analysis of egocentric videos has recently become a hot research topic in computer vision [13], [14], from recognizing daily activities [2], [3] to object detection [15], video summarization [16], and predicting gaze behavior [17], [18], [19]. In the following, we review some previous work related to ours, spanning Relating static and egocentric views, Social interactions among egocentric viewers, and Person identification and localization.…”
Section: Related Work
confidence: 99%
“…Even fewer studies supply manual annotations or develop an algorithmic detection strategy for the eye movements in this context. Saliency in 360° [Cheng et al 2018; Nguyen et al 2018] as well as egocentric [Lee et al 2012; Li et al 2018; Polatsek et al 2016] content is gaining popularity, and this inevitably requires the collection of eye tracking data for 360° images and videos or in the mobile eye tracking scenario. However, the data sets that are typically published provide scanpaths in the form of sequences of "fixations" [Bolshakov et al 2017; Rai et al 2017; Sitzmann et al 2018], which limits their usefulness for eye movement research.…”
Section: Data Sets and Eye Movement Annotation
confidence: 99%
“…[Lo et al 2017] only capture the head rotation data without any eye tracking, assuming that the object at the centre of the participant's field of view is the one being looked at. [Polatsek et al 2016] seem to use the term "fixation" interchangeably with "gaze point".…”
Section: Data Sets and Eye Movement Annotation
confidence: 99%
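The distinction these last two excerpts draw between raw gaze points and detected fixations can be illustrated with a simplified dispersion-threshold (I-DT) detector: consecutive gaze samples whose spatial spread stays under a threshold for long enough are grouped into a single fixation. The threshold values and the grow-from-one-sample variant below are illustrative assumptions, not the annotation procedure of any cited data set.

```python
import numpy as np

def idt_fixations(gaze_xy, timestamps, max_dispersion=1.0, min_duration=0.1):
    """Simplified dispersion-threshold (I-DT) fixation detection: grow a
    window of consecutive gaze samples while their spatial dispersion
    (width + height of the samples' bounding box, e.g. in degrees of
    visual angle) stays below `max_dispersion`; windows lasting at least
    `min_duration` seconds are reported as fixations.

    gaze_xy:    (N, 2) array of gaze points.
    timestamps: (N,) array of sample times in seconds.
    Returns a list of (start_time, end_time, centroid_x, centroid_y).
    """
    fixations = []
    start = 0
    n = len(gaze_xy)
    while start < n:
        end = start + 1
        # Expand the window while dispersion stays under the threshold.
        while end < n and np.ptp(gaze_xy[start:end + 1], axis=0).sum() <= max_dispersion:
            end += 1
        if timestamps[end - 1] - timestamps[start] >= min_duration:
            cx, cy = gaze_xy[start:end].mean(axis=0)
            fixations.append((timestamps[start], timestamps[end - 1], cx, cy))
            start = end  # continue after this fixation
        else:
            start += 1   # too short: advance one sample and retry
    return fixations
```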