1998
DOI: 10.1109/83.668016
Image coding with an L∞ norm and confidence interval criteria

Abstract: A new image coding technique based on an L∞-norm criterion and exploiting statistical properties of the reconstruction error is investigated. The original image is preprocessed, quantized, encoded, and reconstructed within a given confidence interval. Two important classes of preprocessing, namely linear prediction and iterated filterbanks, are used. The approach is also shown to be compatible with previous techniques. The approach allows a great flexibility in that it can perform lossless coding as well…

Cited by 27 publications (21 citation statements).
References 12 publications (24 reference statements).
“…Note that similar expressions for error variances in even/odd locations have been derived by Karray et al [12] and Alecu et al [13], albeit in a somewhat different context (e.g., the authors used these expressions to model the reconstruction error distributions at even/odd locations in order to obtain a precise estimate of the probability that the infinity norm of the reconstruction error exceeds a certain predefined threshold). Different treatment of even/odd locations reflects the authors' implicit assumption that these variances might be different from each other in some cases.…”
Section: Error Modeling and Inhomogeneity Prediction
confidence: 87%
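The statement above concerns estimating the probability that the infinity norm of the reconstruction error exceeds a threshold, given per-location error variances. A minimal sketch of that idea, assuming independent zero-mean Gaussian errors with distinct even/odd variances (an illustrative assumption; the cited works' actual error models differ):

```python
import math

def prob_maxad_exceeds(sigma_even, sigma_odd, n_even, n_odd, t):
    """Estimate P(||e||_inf > t) for independent zero-mean Gaussian
    reconstruction errors: n_even samples with std sigma_even and
    n_odd samples with std sigma_odd. Illustrative model only."""
    def p_within(sigma):
        # P(|e| <= t) for a single N(0, sigma^2) sample
        return math.erf(t / (sigma * math.sqrt(2.0)))
    # Independence: the max stays below t iff every sample does,
    # so P(max |e_i| <= t) is a product over all locations.
    p_all_within = p_within(sigma_even) ** n_even * p_within(sigma_odd) ** n_odd
    return 1.0 - p_all_within

# Larger variance at odd locations dominates the exceedance probability.
p = prob_maxad_exceeds(1.0, 1.5, 128, 128, 6.0)
```

Because the bound must hold at every pixel simultaneously, even a tiny per-sample tail probability is amplified by the number of samples, which is why a precise per-location variance model matters.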
“…1 Here, it is assumed that the coefficients are approximately uncorrelated, allowing the simplifying approximation that K_y is diagonal. For relatively high-rate situations (rather small quantization bin sizes), it is well established that the quantization errors are also uncorrelated [11,27]. For lower-rate situations it is not immediately obvious that the quantization errors are uncorrelated, although empirical evidence has supported such an assumption: We have performed simulations with numerous test images (frames from the standard video test sequences football, mobile, bike, garden, and tennis at resolution 352 × 240), and quantization errors for the wavelet coefficients show no noticeable covariance with other coefficients.…”
Section: Article In Press
confidence: 99%
“…1 has been conducted in the area of L∞-constrained image compression [11], where the objective is to limit the maximum pixel error (i.e., a local approach) rather than an overall global average error. However, not only are the errors induced by wavelet quantization distributed differently, but they are also correlated.…”
Section: Quantization Noise
confidence: 99%
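The local-versus-global distinction drawn above can be made concrete with a toy example, assuming two synthetic error fields (hypothetical data, chosen only to illustrate the point): identical mean squared error, very different maximum pixel error.

```python
import numpy as np

# Two reconstruction-error fields over 10,000 pixels with the SAME MSE:
# one spreads the error evenly, the other concentrates it in 1% of pixels.
n = 10_000
spread = np.full(n, 1.0)          # |error| = 1 everywhere
concentrated = np.zeros(n)
concentrated[:100] = 10.0         # |error| = 10 at 100 pixels, 0 elsewhere

mse_spread = np.mean(spread ** 2)            # 1.0
mse_conc = np.mean(concentrated ** 2)        # 100 * 10^2 / 10000 = 1.0
maxad_spread = np.max(np.abs(spread))        # 1.0
maxad_conc = np.max(np.abs(concentrated))    # 10.0
```

An average (L2) criterion cannot distinguish the two fields, while an L∞ criterion penalizes the concentrated one tenfold, which is exactly why a local constraint is needed to bound the worst pixel.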
“…We consider the family of embedded deadzone uniform scalar quantizers in which every sample x is quantized to [6] i = sign(x)·⌊(|x|/Δ + ξ)/2^p⌋ if |x|/Δ + ξ > 0, and i = 0 otherwise (1), where ξ determines the width of the deadzone and p represents the number of discarded bit-planes. We restrict ourselves to midtread quantizers in the range of interest for ξ, i.e., embedded deadzone quantizers possessing a deadzone bin size that is larger than or equal to the other bin sizes [6].…”
Section: A Embedded Scalar Quantizer Output Entropy
confidence: 99%
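A minimal sketch of an embedded deadzone uniform scalar quantizer of the kind described above, using the common JPEG2000-style parameterization (step size Δ, deadzone parameter ξ, p discarded bit-planes); the exact notation of reference [6] may differ:

```python
import numpy as np

def edz_quantize(x, delta, xi, p):
    """Embedded deadzone uniform scalar quantizer (illustrative sketch).
    delta: basic step size; xi: deadzone parameter (xi <= 1/2 gives a
    deadzone bin at least as wide as the others); p: number of discarded
    least-significant bit-planes."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(np.floor(np.abs(x) / delta + xi), 0.0)
    mag = np.floor(mag / 2 ** p)      # drop p bit-planes
    return np.sign(x) * mag

def edz_dequantize(i, delta, xi, p):
    """Midpoint reconstruction for the quantizer above."""
    i = np.asarray(i, dtype=float)
    step = 2 ** p * delta
    return np.where(i == 0, 0.0,
                    np.sign(i) * (np.abs(i) + 0.5 - xi / 2 ** p) * step)

x = np.array([-3.7, -0.2, 0.0, 0.4, 1.1, 7.9])
idx = edz_quantize(x, delta=1.0, xi=0.0, p=0)   # -> [-3, 0, 0, 0, 1, 7]
```

Discarding a bit-plane (p = 1) halves the index resolution, e.g. the sample 7.9 maps to index 3 instead of 7; this is what makes the quantizer family embedded.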
“…The local reconstruction error is referred to as the maximum absolute difference (MAXAD) between the pixel values in the original and reconstructed images. Various methods for L∞-distortion constrained compression have been proposed in the literature, some operating in the transform domain [1] or in the image domain [2], while others propose a hybrid bit-stream between the two [3]. We have recently proposed in [4] and [5] a wavelet-based L∞-constrained scalable image-coding technique that generates a fully embedded L∞-oriented bit-stream, while retaining the coding performance and scalability options of state-of-the-art wavelet-based codecs [6].…”
Section: Introduction
confidence: 99%
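The MAXAD measure defined above is simply the L∞ norm of the pixel-domain error. A short sketch, assuming random synthetic image data, showing both the measure and the simplest image-domain way to enforce a MAXAD bound (uniform quantization with step Δ guarantees MAXAD ≤ Δ/2, in the spirit of the image-domain methods cited as [2]):

```python
import numpy as np

def maxad(orig, rec):
    """Maximum absolute difference (MAXAD): the L-infinity norm of the
    reconstruction error between two images."""
    return np.max(np.abs(np.asarray(orig, float) - np.asarray(rec, float)))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)

# Image-domain uniform quantization with step delta: every pixel is moved
# to the nearest multiple of delta, so the error never exceeds delta / 2.
delta = 5.0
rec = delta * np.round(img / delta)
bound = delta / 2
```

Transform-domain methods are harder precisely because the quantization happens on coefficients, so the per-pixel error bound must be derived through the inverse transform rather than read off directly as here.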