2012
DOI: 10.1016/j.neunet.2012.09.009
On the systematic development of fast fuzzy vector quantization for grayscale image compression

Cited by 20 publications (7 citation statements)
References 28 publications
“…We train the neural network with different numbers of hidden layer units by sample sets in MATLAB [7][8]. The training result is shown in chart 1.…”
Section: B Design Hidden Layer
confidence: 99%
“…Dynamic programming is used as an efficient optimization technique for layout optimization of interconnection networks by Tripathy et al [19]. Tsolakis et al [20] presented a fast fuzzy vector quantization technique for compression of grayscale images. Rather than a crisp assignment, an input block is assigned to more than one codeword following a fuzzy vector quantization method [21].…”
Section: Introduction
confidence: 99%
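The soft assignment mentioned in the statement above can be illustrated with a short sketch. This is a hypothetical FCM-style membership computation, not the specific method of [21]: each input block receives a membership in every codeword, inversely related to its squared distance, with fuzzifier `m` an assumed parameter.

```python
import numpy as np

def fuzzy_memberships(block, codebook, m=2.0, eps=1e-12):
    """FCM-style soft assignment of one input block to every codeword.

    Memberships are u_i proportional to d_i^(-2/(m-1)), where d_i is the
    distance from the block to codeword i; they are normalized to sum to 1.
    """
    d2 = np.sum((codebook - block) ** 2, axis=1) + eps  # squared distances
    w = d2 ** (-1.0 / (m - 1.0))                        # inverse-distance weights
    return w / w.sum()                                  # normalized memberships

# A block soft-assigned across a toy 3-codeword codebook:
# it belongs mostly, but not exclusively, to the nearest codeword.
cb = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
u = fuzzy_memberships(np.array([0.9, 1.1]), cb, m=2.0)
```

A crisp quantizer would keep only `u.argmax()`; the fuzzy variant keeps the whole membership vector, which is what lets a block contribute to more than one codeword during codebook updates.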
“…The prevailing algorithm for codebook design is Linde-Buzo-Gray (LBG) [20], also known as the Generalized Lloyd Algorithm (GLA) or K-means. Other examples of codebook design algorithms are: fuzzy [7,21,22], competitive learning [23], memetic [24], genetic [25], firefly [26] and honey bee mating optimization [27].…”
Section: Introduction
confidence: 99%
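The LBG/GLA design loop named in the statement above alternates two steps: assign each training vector to its nearest codeword, then move each codeword to the mean of its assigned vectors. A minimal sketch, assuming random initialization from the training set (the classical LBG also uses codebook splitting, omitted here for brevity):

```python
import numpy as np

def lbg_codebook(vectors, k, iters=20, seed=0):
    """Generalized Lloyd / LBG codebook design (K-means form).

    Alternates nearest-codeword assignment and centroid update,
    returning the trained codebook and the final assignments.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with k distinct training vectors.
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest codeword for every training vector.
        d = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each codeword becomes the centroid of its cell.
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels

# Toy usage: two well-separated clusters of 2-D training vectors.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
codebook, labels = lbg_codebook(data, k=2)
```

For image compression, `vectors` would be flattened image blocks (e.g. 4x4 patches), and each block is transmitted as the index of its nearest codeword.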