2021
DOI: 10.1016/j.displa.2021.102096
Spatiotemporal just noticeable difference modeling with heterogeneous temporal visual features

Cited by 5 publications (9 citation statements) · References 55 publications
“…According to [44], information content can be measured by the prior probability distribution and the likelihood function. It has been demonstrated that these measurements are consistent across human subjects and can be modeled using simple parametric functions [42, 45]. Motivated by this, this paper adopts the probability distribution function and a fitting curve to measure the color information content induced by color feature parameters.…”
Section: Analysis of Color Feature Parameters
confidence: 99%
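The information-content idea quoted above (a prior probability distribution fitted with a simple parametric function, then evaluated per feature value) can be sketched as follows. This is a minimal illustration, assuming a Gaussian as the parametric model; the function names and sample values are hypothetical, not taken from the paper.

```python
import math

def fit_gaussian(samples):
    """Fit a simple parametric model (Gaussian) to observed feature values."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def self_information(x, mu, var):
    """Information content in bits: I(x) = -log2 p(x) under the fitted model."""
    p = math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return -math.log2(p)

# Hypothetical color-feature samples: values near the mode of the fitted
# distribution carry little information, rare values carry more.
samples = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.30]
mu, var = fit_gaussian(samples)
info_common = self_information(0.11, mu, var)  # a typical value
info_rare = self_information(0.30, mu, var)    # an outlying value
```

Under this model, a frequently occurring feature value yields low self-information while an outlier yields high self-information, which is the sense in which the probability distribution "measures" information content.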
“…For example, compared with a pure-color background, areas of sharp color change in an image are perceptually noisier. That is to say, color complexity prevents the visual system from acquiring accurate information, and can thus be treated as equivalent to perceptual noise in the visual channel of the HVS, as in [45].…”
Section: The Proposed JND Model
confidence: 99%
“…An overview of these factors can be found in the survey in [3]. Using the computational domain as a classification criterion, existing JND models fall into two branches: the pixel domain, where JND thresholds are computed directly for each pixel [4–13], and the transform domain, where the image is first transformed into a subband representation and JND thresholds are then calculated for each subband [14–24]. Nevertheless, both model types generally follow the same design philosophy: simulate the visual-masking effects of several factors and then combine them (multiplicatively or additively) to obtain an overall JND estimate.…”
Section: Introduction
confidence: 99%
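The design philosophy described above (compute per-factor masking thresholds, then combine them) can be sketched for a pixel-domain model. This is a minimal sketch, assuming a Chou–Li-style luminance-adaptation curve, a linear contrast-masking term, and a NAMM-style combination that subtracts the overlap between the two effects; all constants are illustrative, not the paper's.

```python
import math

def luminance_adaptation(bg):
    """Luminance-adaptation threshold for background luminance bg in [0, 255].
    Higher thresholds in dark regions, slowly rising in bright regions
    (piecewise form in the spirit of Chou & Li; constants illustrative)."""
    if bg <= 127:
        return 17.0 * (1.0 - math.sqrt(bg / 127.0)) + 3.0
    return 3.0 / 128.0 * (bg - 127.0) + 3.0

def contrast_masking(grad):
    """Contrast-masking threshold from local gradient magnitude
    (illustrative linear slope)."""
    return 0.115 * grad

def jnd_pixel(bg, grad, c=0.3):
    """Combine the two masking effects additively, subtracting a fraction
    of their overlap so the joint effect is not double-counted."""
    la = luminance_adaptation(bg)
    cm = contrast_masking(grad)
    return la + cm - c * min(la, cm)
```

For example, a flat dark region tolerates larger distortion than a flat mid-gray region, and a textured region tolerates more than a flat one at the same luminance, which is the qualitative behavior pixel-domain JND models aim for.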
“…Subsequently, a more realistic LA function was proposed to improve JND estimation by integrating a block-classification (plain, edge, and texture) strategy [16]. For video, corresponding DCT-domain JND models were proposed considering spatiotemporal CSF and eye-movement compensation [17]; the LA effect based on gamma correction together with a CM effect based on more accurate block classification [18]; texture complexity and frequency, visual sensitivity, and visual attention [19]; DCT blocks of various sizes (from 4 × 4 to 32 × 32) [20]; motion direction [21]; foveal masking [22]; and temporal duration and residual fluctuations [23].…”
Section: Introduction
confidence: 99%
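The block-classification (plain, edge, texture) strategy mentioned above can be sketched by measuring how many pixels in a block sit on strong intensity transitions. This is a hypothetical illustration: the gradient threshold and the ratio cut-offs are invented for the example, not taken from [16] or [18].

```python
def classify_block(block, edge_thresh=25, plain_ratio=0.1, edge_ratio=0.2):
    """Classify an image block as 'plain', 'edge', or 'texture' from the
    fraction of horizontally adjacent pixel pairs whose intensity difference
    exceeds edge_thresh. All thresholds are illustrative."""
    h = len(block)
    w = len(block[0])
    strong = 0
    for y in range(h):
        for x in range(w - 1):
            if abs(block[y][x + 1] - block[y][x]) > edge_thresh:
                strong += 1
    ratio = strong / (h * (w - 1))
    if ratio < plain_ratio:
        return "plain"       # few transitions: smooth region
    if ratio < edge_ratio:
        return "edge"        # a sparse set of strong transitions
    return "texture"         # dense transitions throughout

flat = [[100] * 8 for _ in range(8)]                       # uniform block
step = [[0] * 4 + [255] * 4 for _ in range(8)]             # one vertical edge
checker = [[(255 if (x + y) % 2 else 0) for x in range(8)] for y in range(8)]
```

Models such as [16] and [18] then assign different masking weights per class, since distortion is far more visible near isolated edges than inside dense texture.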