2003
DOI: 10.1049/el:20030139

Adaptive vector quantisation of non-orthogonal representations for image compression

Cited by 3 publications (2 citation statements)
References 3 publications
“…Therefore, the proposed PCU-AVQ algorithm can be simplified as follows: 1: Initialize the codebook P and the Lagrangian multiplier λ. Calculate T* according to (4): T* = λ/0.10. 2: Find the nearest codeword p̂ for the input vector s based on the Euclidean norm: ‖s − p̂‖² = min_i ‖s − p_i‖². 3: Generate the partially updated codeword p̂(T*) and record …”
Section: Relationship Between Updating Threshold and Lagrangian Multiplier
confidence: 99%
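
The quoted statement above lists the encoding steps of the citing paper's PCU-AVQ algorithm. The sketch below is a rough, hedged reading of those steps in Python: the threshold T* = λ/0.10 and the Euclidean nearest-codeword search are taken from the quote, while the function name pcu_avq_step, the step size, and the rule "update the winner only when its distortion exceeds T*" are assumptions made for illustration, not the published method.

import numpy as np

# Rough sketch of the quoted PCU-AVQ steps. Only the threshold T* = lambda/0.10
# and the Euclidean nearest-codeword search come from the quoted statement; the
# partial-update rule below is a placeholder assumption.
def pcu_avq_step(codebook, s, lam, step=0.1):
    """One encoding step: nearest-codeword search, then a tentative partial update."""
    T_star = lam / 0.10  # updating threshold, per equation (4) in the quote

    # Nearest codeword under the Euclidean norm: ||s - p_hat||^2 = min_i ||s - p_i||^2
    dists = np.sum((codebook - s) ** 2, axis=1)
    idx = int(np.argmin(dists))
    p_hat = codebook[idx]

    # Hypothetical partial update: move the winning codeword toward s
    # only when its distortion exceeds the threshold T*.
    if dists[idx] > T_star:
        codebook[idx] = p_hat + step * (s - p_hat)
    return idx, codebook

# Usage with a random 16-codeword, 8-dimensional codebook.
rng = np.random.default_rng(0)
P = rng.normal(size=(16, 8))
s = rng.normal(size=8)
idx, P = pcu_avq_step(P, s, lam=0.05)
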
“…VQ is theoretically attractive; however, there still exists a large gap between the theoretical performance and the actually achieved performance. Thus, many adaptive vector quantization (AVQ) algorithms [2], [3], [4] were proposed. The most important feature of AVQ is that the codebook can be updated to track the changing statistics of the data source during the coding process [2].…”
Section: Introduction
confidence: 99%
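
The last sentence of this statement describes the defining feature of AVQ: the codebook is updated during coding so it can follow a non-stationary source. A minimal illustration of that idea is sketched below, assuming a simple "move the winning codeword toward the input" rule; this update is chosen only to show the tracking behaviour and is not the scheme of any of the cited papers.

import numpy as np

# Minimal adaptive-VQ illustration: the codebook is nudged toward each input,
# so it tracks a source whose statistics change partway through the stream.
def encode_adaptive(codebook, vectors, alpha=0.05):
    indices = []
    for s in vectors:
        i = int(np.argmin(np.sum((codebook - s) ** 2, axis=1)))
        indices.append(i)
        # Online adaptation: shift the winning codeword toward the input.
        codebook[i] += alpha * (s - codebook[i])
    return indices, codebook

# Drifting source: the mean jumps from 0 to 3 halfway through the stream.
rng = np.random.default_rng(1)
stream = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),
                    rng.normal(3.0, 1.0, size=(500, 4))])
P = rng.normal(size=(8, 4))
idx, P = encode_adaptive(P, stream)
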