2009 Data Compression Conference
DOI: 10.1109/dcc.2009.62

Low Bit Rate Vector Quantization of Outlier Contaminated Data Based on Shells of Golay Codes

Abstract: In this paper we study how to encode N-long vectors, with N in the range of hundreds, at low bit rates of 0.5 bit per sample or lower. We adopt a vector quantization structure where an overall gain is encoded with a scalar quantizer and the remaining scaled vector is encoded using a vector quantizer built by combining smaller (length-L) binary codes known to be efficient in filling the space, the important examples discussed here being the Golay codes. Due to the typical nonstationary distribution of the…
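The gain-shape structure the abstract describes can be sketched directly. The code below is a minimal illustration, not the paper's encoder: the max-magnitude gain, the uniform gain grid, and a generic ±1 stand-in codebook (in place of codevectors drawn from shells of Golay codes) are all assumptions made for the example.

```python
import numpy as np

def gain_shape_encode(x, shape_codebook, gain_grid):
    """Toy gain-shape VQ encoder (illustrative, not the paper's scheme).

    x              : length-N input vector, N a multiple of L
    shape_codebook : (K, L) array of +/-1 codevectors; a stand-in for
                     codevectors drawn from shells of a Golay code
    gain_grid      : 1-D array of scalar-quantizer reproduction levels
    """
    L = shape_codebook.shape[1]
    gain = np.max(np.abs(x))                        # one possible gain definition
    g_idx = int(np.argmin(np.abs(gain_grid - gain)))
    y = x / gain_grid[g_idx]                        # scaled (shape) vector

    shape_idx = []
    for block in y.reshape(-1, L):                  # encode L samples at a time
        # nearest codevector in squared Euclidean distance
        d2 = np.sum((shape_codebook - block) ** 2, axis=1)
        shape_idx.append(int(np.argmin(d2)))
    return g_idx, shape_idx

def gain_shape_decode(g_idx, shape_idx, shape_codebook, gain_grid):
    """Inverse of the toy encoder above."""
    blocks = shape_codebook[shape_idx]              # (N/L, L) decoded blocks
    return gain_grid[g_idx] * blocks.reshape(-1)

# Example usage with a random stand-in codebook:
rng = np.random.default_rng(0)
L, K, N = 24, 4096, 240
codebook = rng.choice([-1.0, 1.0], size=(K, L))
grid = np.linspace(0.1, 5.0, 32)
x = rng.standard_normal(N)
g, s = gain_shape_encode(x, codebook, grid)
x_hat = gain_shape_decode(g, s, codebook, grid)
```

For scale: the extended binary Golay code has 2^12 codevectors of length 24, so one shape index costs 12/24 = 0.5 bit per sample before the gain bits, which is exactly the rate regime the abstract targets.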

Cited by 2 publications (3 citation statements). References 7 publications.
“…$\hat{g} \in \mathcal{G}$ that is the closest to the received $r$, and then the 11 informational bits are extracted from $\hat{g}$. For some channels, for each transmitted bit $s_i$ the receiver does not immediately take trivial hard decisions $r_i \in \{0, 1\}$, but instead it can evaluate the probability $p_i = P(r_i = 0)$ and $P(r_i = 1) = 1 - p_i$. Then for each Golay codevector $g \in \mathcal{G}$ the probability $P(r = g) = \prod_{i=1}^{L} P(r_i = g_i)$ can be used for a better decision, given by the maximum likelihood decoding $\hat{g} = \arg\max_{g \in \mathcal{G}} P(r = g) = \arg\max_{g \in \mathcal{G}} \prod_{i=1}^{L} P(r_i = g_i)$…”
Section: Introduction
confidence: 99%
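The maximum-likelihood rule quoted above admits a direct sketch: with independent bit observations, maximize $\prod_i P(r_i = g_i)$, or equivalently its logarithm, over all codevectors. The exhaustive search below is an illustration under that independence assumption; `codebook` is any explicit 0/1 codebook (the 2^12 codevectors of the extended Golay code fit easily in memory), and a practical decoder would exploit the code's algebraic structure instead.

```python
import numpy as np

def ml_decode_soft(p0, codebook):
    """Exhaustive soft-decision ML decoding sketch.

    p0       : length-L array, p0[i] = P(r_i = 0) from the channel
    codebook : (K, L) array of 0/1 codevectors (e.g. a Golay codebook)

    Returns the index of argmax_g prod_i P(r_i = g_i), computed in the
    log domain for numerical stability.
    """
    eps = 1e-12                                    # avoid log(0)
    log_p0 = np.log(np.clip(p0, eps, 1.0))         # log P(r_i = 0)
    log_p1 = np.log(np.clip(1.0 - p0, eps, 1.0))   # log P(r_i = 1)
    # per-codevector log-likelihood: sum over bits of log P(r_i = g_i)
    loglik = codebook @ log_p1 + (1 - codebook) @ log_p0
    return int(np.argmax(loglik))
```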
“…For some specific problems, see e.g., [1], the nearest neighbor must be looked for in a particular shell of the codebook $\mathcal{G}$. The nearest neighbor in the Euclidean distance sense minimizes the distance

$$\sum_{i=1}^{L} (v_i - g_i)^2 = \sum_{i=1}^{L} v_i^2 - 2 \sum_{i=1}^{L} v_i g_i + \sum_{i=1}^{L} g_i^2 \qquad (4)$$

which is equivalent to maximizing

$$\arg\max_{\tilde{g} \in S} \sum_{i=1}^{L} v_i \tilde{g}_i \qquad (5)$$ …”
confidence: 99%
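The equivalence stated in this quotation is what makes shell-restricted search cheap: within a shell $S$ of fixed Hamming weight, $\sum_i g_i^2$ is constant, so minimizing $\sum_{i=1}^{L}(v_i - g_i)^2$ reduces to maximizing the correlation $\sum_{i=1}^{L} v_i g_i$. A minimal sketch, assuming the shell is available as an explicit list of 0/1 codevectors; the `nearest_in_shell` name and the toy cyclic-shift shell in the usage are illustrative only, not the paper's construction.

```python
import numpy as np

def nearest_in_shell(v, shell):
    """Nearest-neighbor search restricted to one shell of a binary codebook.

    v     : length-L real vector to be quantized
    shell : (M, L) array of 0/1 codevectors, all of the same Hamming
            weight (one shell of the code)

    Within a fixed-weight shell, sum(g_i^2) is the same for every g, so
    minimizing ||v - g||^2 is equivalent to maximizing the correlation
    sum(v_i * g_i).
    """
    corr = shell @ v                  # (M,) correlations with every codevector
    best = int(np.argmax(corr))
    return best, shell[best]

# Sanity check: the correlation maximizer is also the Euclidean minimizer.
rng = np.random.default_rng(1)
shell = np.array([np.roll([1] * 8 + [0] * 16, k) for k in range(24)])  # toy weight-8 shell
v = rng.standard_normal(24)
idx, g = nearest_in_shell(v, shell)
assert idx == int(np.argmin(np.sum((shell - v) ** 2, axis=1)))
```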