2001
DOI: 10.1109/83.951528
Perceptual feedback in multigrid motion estimation using an improved DCT quantization

Abstract: In this paper, a multigrid motion-compensated video coder based on current human visual system (HVS) contrast discrimination models is proposed. A novel procedure for encoding the prediction errors is used. This procedure restricts the maximum perceptual distortion in each transform coefficient. This subjective redundancy removal procedure includes the amplitude nonlinearities and some temporal features of human perception. A perceptually weighted control of the adaptive motion estim…
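The per-coefficient distortion restriction described in the abstract can be sketched as a uniform quantizer whose step size is tied to a perceptual threshold for each DCT coefficient: a step of twice the allowed distortion guarantees the quantization error never exceeds it. The threshold map and `d_max` below are illustrative assumptions, not the paper's actual HVS contrast-discrimination model:

```python
import numpy as np

def quantize_block(dct_block, thresholds, d_max=1.0):
    """Quantize an 8x8 DCT block so the error in each coefficient stays
    within d_max times its perceptual threshold (simplified stand-in
    for an HVS contrast-discrimination model)."""
    # A uniform quantizer with step 2*d_max*T bounds |error| by d_max*T.
    step = 2.0 * d_max * thresholds
    indices = np.round(dct_block / step)
    return indices, indices * step

# Toy threshold map: assume higher-frequency coefficients tolerate
# more distortion (purely illustrative numbers).
u, v = np.meshgrid(np.arange(8), np.arange(8))
thresholds = 1.0 + 0.5 * (u + v)

block = np.random.default_rng(0).normal(0.0, 10.0, (8, 8))
idx, rec = quantize_block(block, thresholds)
# Per-coefficient perceptual error bound holds by construction.
assert np.all(np.abs(block - rec) <= thresholds + 1e-9)
```

Only the integer indices `idx` would be entropy-coded; the decoder reconstructs `rec` from the same threshold map.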

Year Published: 2005–2024
Cited by 35 publications (27 citation statements)
References 44 publications
“…al. [3] proposed a multigrid motion compensation video coding scheme based on HVS. However, it consumes a lot of memory and is not suitable for digital applications with restricted battery lifetime, such as mobile video.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…The difference between the approaches that implicitly followed the MPE idea [2], [14]–[18] is the accuracy of the perception model used to propose the perceptually Euclidean domain. For instance, the quantization scheme (empirically) recommended in the JPEG standard [16] may be deduced from the MPE restriction with a very simple linear vision model based on the CSF [14].…”
Section: Image Coding Results
Citation type: mentioning (confidence: 99%)
“…In this very simple case, it is assumed that no perceptual relationship exists between the coefficients of the transform, and that the perceptual relevance of each coefficient is given by the corresponding CSF value. The performance of this approach can be improved by around 0.5 bits/pix if a more sophisticated model is used [14], [15], [17], [18]. In these references the authors used a point-wise non-linear model in the DCT domain.…”
Section: Image Coding Results
Citation type: mentioning (confidence: 99%)
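The CSF-based derivation these snippets allude to can be sketched: under a linear vision model, choosing each DCT quantization step inversely proportional to the contrast sensitivity at that coefficient's frequency equalizes the perceptual error across frequencies, which is the MPE restriction. The Mannos–Sakrison-style CSF and the sampling frequency `fs` below are illustrative stand-ins, not the specific model of [14]:

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison-style contrast sensitivity function
    (illustrative approximation; f in cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def mpe_quant_matrix(n=8, fs=32.0, d_max=1.0):
    """Quantization steps inversely proportional to CSF sensitivity,
    so the linear-model perceptual error is d_max at every frequency."""
    u, v = np.meshgrid(np.arange(n), np.arange(n))
    # Radial spatial frequency of each DCT coefficient (cycles/degree),
    # assuming a viewing setup that maps n samples to fs/2 cpd.
    f = fs * np.sqrt(u**2 + v**2) / (2.0 * n)
    s = np.maximum(csf(f), 1e-3)  # floor to avoid divide-by-zero
    return 2.0 * d_max / s

Q = mpe_quant_matrix()
```

Since this CSF is band-pass, the DC term comes out with a large step; practical tables (e.g. JPEG's) treat DC separately, which is one reason the empirical tables only roughly match this derivation.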