[Proceedings] DCC '93: Data Compression Conference
DOI: 10.1109/dcc.1993.253150

Combining image classification and image compression using vector quantization

Cited by 29 publications (15 citation statements)
References 10 publications
“…The issue of joint compression and classification was considered and studied extensively by Oehler, Gray, Perlmutter, Olshen, et al. [13,14,16,17,15,18,19,11,6]. A vector quantizer aimed at combining compression and classification generates indices that are mapped into both representative codewords and classes for original vectors at the receiving end.…”
Section: Introduction (mentioning)
confidence: 99%
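To make that decode-side mapping concrete, here is a minimal sketch, not taken from the cited papers: it assumes a Euclidean nearest-neighbor encoder and a codebook whose entries carry both a reproduction vector and a class label, so the single transmitted index yields both a codeword and a class at the receiver. All names and values are illustrative.

```python
import numpy as np

# Minimal sketch of a joint compression/classification VQ (illustrative only).
# Each codebook entry carries a reproduction vector and a class label, so the
# transmitted index decodes to both a codeword and a class at the receiver.

def encode(x, codewords):
    """Return the index of the nearest codeword (squared Euclidean distance)."""
    return int(np.argmin(np.sum((codewords - x) ** 2, axis=1)))

def decode(index, codewords, labels):
    """Map the received index to a reproduction vector and a class label."""
    return codewords[index], labels[index]

# Toy usage with a 4-entry codebook in 2-D (values made up for illustration).
codewords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = np.array([0, 0, 1, 1])          # class attached to each codeword
x = np.array([0.9, 0.2])                 # input vector
i = encode(x, codewords)                 # transmitted index
x_hat, y_hat = decode(i, codewords, labels)
```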
“…The joint compression and classification algorithm developed by Oehler, Gray, Perlmutter, Olshen, et al. [13,14,16,17,15,18,19] is referred to as Bayes VQ. The basic assumption is that a training sequence L = {(x_i, y_i), i = 1, 2, ..., L} is a realization of a random process {(X_i, Y_i), i = 1, 2, ...} with (X_i, Y_i) obeying a common but unknown distribution P_XY on (X, Y) ∈ A_X × A_Y.…”
Section: Introduction (mentioning)
confidence: 99%
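As a hedged illustration of how such a labeled training sequence can drive the encoder, the sketch below scores each codebook index by squared-error distortion plus a weighted empirical misclassification cost. The Lagrangian form, the 0/1 cost, and the weight lam are assumptions made here for illustration, not details taken from the quoted description of Bayes VQ.

```python
import numpy as np

# Hedged sketch of a Bayes-VQ-style encoder on a labeled training pair (x, y):
# each index is scored by squared-error distortion plus a lambda-weighted
# empirical misclassification cost. The 0/1 cost and the value of lam are
# illustrative assumptions, not taken from the cited papers.

def bayes_vq_index(x, y, codewords, labels, lam=1.0):
    """Pick the index minimizing distortion + lam * misclassification cost."""
    distortion = np.sum((codewords - x) ** 2, axis=1)
    misclass = (labels != y).astype(float)   # 0/1 risk surrogate
    return int(np.argmin(distortion + lam * misclass))

# Toy training pair: the classification term can override the nearest codeword.
codewords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1, 1])
x, y = np.array([0.2, 0.1]), 1
print(bayes_vq_index(x, y, codewords, labels, lam=0.0))   # distortion only -> 0
print(bayes_vq_index(x, y, codewords, labels, lam=5.0))   # class-aware -> 1
```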
“…(1). The general problem of generative (representative) models versus classification performance was discussed for vector quantization [9,37,38,39]. In consequence, if a generative GLVQ model is strictly demanded, one has to add a respective penalty term to the cost function according to…”
Section: Class Typical Prototypes Versus Class Border-Sensitive LVQ (mentioning)
confidence: 99%
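The quoted passage stops before the penalty term itself, so it is not reproduced here. As a hedged illustration only, the sketch below adds a quantization-error penalty, weighted by an assumed factor gamma, to the standard GLVQ relative-distance cost, pushing prototypes toward a more representative (generative) placement. The penalty form and gamma are assumptions, not the equation from the quoted paper.

```python
import numpy as np

# Hedged sketch: GLVQ cost augmented with a quantization-error penalty to
# trade classification margin against representative prototype placement.
# The penalty form and the weight gamma are illustrative assumptions.

def penalized_glvq_cost(X, y, prototypes, proto_labels, gamma=0.1):
    cost = 0.0
    for x, cls in zip(X, y):
        d = np.sum((prototypes - x) ** 2, axis=1)       # squared distances
        d_plus = np.min(d[proto_labels == cls])          # closest correct prototype
        d_minus = np.min(d[proto_labels != cls])         # closest incorrect prototype
        mu = (d_plus - d_minus) / (d_plus + d_minus)     # GLVQ relative distance
        cost += mu + gamma * np.min(d)                   # margin term + VQ penalty
    return cost / len(X)

# Toy usage: two prototypes per class in 2-D (values made up for illustration).
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
proto_labels = np.array([0, 0, 1, 1])
X = np.array([[0.1, 0.1], [0.9, 0.1]])
y = np.array([0, 1])
print(penalized_glvq_cost(X, y, prototypes, proto_labels, gamma=0.1))
```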
“…Moreover, keeping in mind the rapidly growing amount of available data, classification learning is required to handle big data, leading to the related task of data compression. However, classification learning and data compression of vector data are two sides of one coin, closely related to supervised and unsupervised vector quantization [9]. Besides other paradigms, prototype-based approaches are known to be robust machine learning methods with high performance in general, while often being easy to interpret [10].…”
Section: Introduction (mentioning)
confidence: 99%
“…Another approach to incorporating class information in the design of prototypes is the Bayes Vector Quantization (BVQ) algorithm proposed by Oehler, Gray, Perlmutter, Olshen et al. [14,15]. BVQ aims at achieving good compression and classification simultaneously.…”
Section: Introduction (mentioning)
confidence: 99%