2010
DOI: 10.1007/978-3-642-11301-7_33
Stereoscopic Visual Attention Model for 3D Video

Cited by 88 publications (43 citation statements)
References 11 publications
“…This type of model (e.g. [29], [30], and [31]) does not contain any depth-map-based feature-extraction process. Apart from detecting salient areas using 2D visual features, these models share a common step in which depth information is used as a weighting factor for the 2D saliency.…”
Section: DW Depth Information Operation Validation
confidence: 99%
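The depth-weighting scheme described above can be sketched as follows. This is a minimal illustration, not the cited models' exact formulation: the normalisation of the depth map and the choice of a linear per-pixel weight are assumptions.

```python
import numpy as np

def depth_weighted_saliency(saliency_2d: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Weight a 2D saliency map by a depth-derived factor (illustrative sketch).

    Assumes larger depth values mean closer to the viewer, and that closer
    pixels should receive larger weights.
    """
    # Normalise depth to [0, 1] so it can act as a per-pixel weight.
    rng = depth.max() - depth.min()
    w = (depth - depth.min()) / (rng + 1e-8)
    # The common step the excerpt describes: depth weights the 2D saliency.
    weighted = saliency_2d * w
    # Renormalise the result to [0, 1] for display or comparison.
    return weighted / (weighted.max() + 1e-8)
```

Any monotone weighting function of depth would fit the description in the excerpt; the linear weight here is only the simplest choice.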
“…However, when color contrast is not distinct enough, the depth cue can serve as a complement to weight the color contrast, under the assumption that a closer region is usually more attractive to human visual attention [12]. For each region R_i, its color contrast is evaluated by comparing it against all the other regions in the image with a depth weighting factor, as given in Eq. (1). In Eq. (1), the spatial-weighting parameter controls the strength of spatial weighting and is set to 0.4, a value larger than the other weighting parameter, to give a moderate spatial weighting effect, because color variations are usually larger than depth variations in most neighboring regions.…”
Section: Depth Weighted Color Contrast
confidence: 99%
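Since Eq. (1) itself is not reproduced in the excerpt, the following sketch is an assumed form that combines the ingredients the text names: pairwise color differences, a spatial weight whose strength is controlled by a parameter set to 0.4, and a depth weight favoring closer regions.

```python
import numpy as np

def region_contrast(colors, depths, centers, sigma_s=0.4):
    """Depth-weighted, spatially weighted region color contrast (assumed form).

    colors  -- mean color per region, shape (n, 3)
    depths  -- normalised depth per region in [0, 1], 1 = closest to viewer
    centers -- region centroids in normalised image coordinates, shape (n, 2)
    """
    colors = np.asarray(colors, dtype=float)
    depths = np.asarray(depths, dtype=float)
    centers = np.asarray(centers, dtype=float)
    n = len(colors)
    contrast = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Spatial weight: nearby regions contribute more; sigma_s plays the
            # role of the spatial-weighting strength mentioned in the text.
            spatial = np.exp(-np.sum((centers[i] - centers[j]) ** 2) / sigma_s)
            # Depth weight: a closer competing region contributes more,
            # reflecting the "closer is more attractive" assumption.
            contrast[i] += spatial * depths[j] * np.linalg.norm(colors[i] - colors[j])
    return contrast
```

The Gaussian spatial falloff and Euclidean color distance are illustrative choices; the cited paper's Eq. (1) may differ in these details.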
“…Zhang et al [53] proposed a stereoscopic visual attention model for 3D video based on multiple perceptual stimuli. Chamaret et al [54] designed a Region of Interest (ROI) extraction method for adaptive 3D rendering.…”
Section: 3D Visual Saliency Map
confidence: 99%