Finite state vector quantisation with neural network classification of states (1993)
DOI: 10.1049/ip-f-2.1993.0022

Cited by 10 publications (8 citation statements), published between 1995 and 2020. References 2 publications.
“…However, the memory requirements for DFSVQ-2 are increased by a factor of four in comparison with DFSVQ-1 (about 240 times more than that required by the proposed FSRVQ). Therefore, a small degradation in performance of the proposed FSRVQ scheme can be justified by a savings of two orders of magnitude in the memory requirements when compared with DFSVQ-2. Table IV shows the performance of the proposed FSRVQ along with the FSVQ schemes developed in [11] and [12], respectively, for the test image Lena.…”
Section: Results
confidence: 99%
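The memory argument in this excerpt is codebook arithmetic: a finite-state VQ stores one codebook per state, so storage grows with states × codevectors × dimension. A minimal sketch of that bookkeeping; the state counts, codebook sizes, and block dimension below are illustrative assumptions, not the actual parameters of [11], [12], or the quoted FSRVQ paper:

```python
# Hedged sketch: codebook storage for finite-state VQ schemes.
# All parameter values are assumptions chosen so the ratios echo
# the quoted claims (4x and roughly two orders of magnitude).

def fsvq_memory(num_states: int, codebook_size: int, dim: int) -> int:
    """Floats stored by a finite-state VQ: one codebook per state."""
    return num_states * codebook_size * dim

dim = 16                                                          # 4x4 image blocks
dfsvq1 = fsvq_memory(num_states=64,  codebook_size=256, dim=dim)
dfsvq2 = fsvq_memory(num_states=256, codebook_size=256, dim=dim)  # 4x DFSVQ-1
fsrvq  = fsvq_memory(num_states=8,   codebook_size=32,  dim=dim)  # small shared codebooks

print(f"DFSVQ-2 / DFSVQ-1 memory ratio: {dfsvq2 / dfsvq1:.0f}x")  # 4x
print(f"DFSVQ-2 / FSRVQ  memory ratio: {dfsvq2 / fsrvq:.0f}x")    # ~two orders of magnitude
```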
“…At each stage of each state-RVQ, the error that has to be minimized to obtain the best reproduction codevector for the respective stage-wise residual input vector is given by (12). The update equation for each weight vector is given by (13), where the update term is the gradient of the error with respect to that weight vector, or equivalently (14). Differentiating the error with respect to the weight vector gives (15); the remaining quantities are defined analogously.…”
Section: B. Tree-Structured Competitive Neural Network
confidence: 99%
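The excerpt describes standard competitive learning: select the codevector (weight vector) that minimizes the reconstruction error for the current residual, then apply a gradient step that moves it toward the input. A minimal sketch of that update rule using the common squared-error form; the stage/state indexing and learning-rate schedule of the cited FSRVQ network are assumptions here:

```python
import numpy as np

def competitive_update(weights: np.ndarray, x: np.ndarray, lr: float) -> int:
    """One winner-take-all gradient step.

    For squared error E = ||x - w||^2, the gradient w.r.t. the winning
    weight vector is -2(x - w); absorbing the constant into the learning
    rate, gradient descent moves the winner toward the input:
    w <- w + lr * (x - w).
    """
    # Winner = codevector with minimum squared error to the input.
    winner = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
    weights[winner] += lr * (x - weights[winner])
    return winner

# Toy usage: 8 codevectors for 16-dimensional residual vectors.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
for t in range(1000):
    residual = rng.normal(size=16)            # stand-in for a stage-wise residual
    competitive_update(W, residual, lr=0.05 / (1 + t / 200.0))
```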
“…Properties such as topology preservation, flexibility, potential as a robust substitute for clustering and visualization analysis [25], and the ability to learn from complex, multi-dimensional data and transform it into visual clusters [26] make the SOM a very efficient tool for many applications. Some examples are: yardang identification [27], image data compression [28], image or character recognition [29,30], robot control [31,32], lithological discrimination using Landsat TM [33], urban land use classification [34], ecological modeling [22], landscape element analysis [35], visualizing high-dimensional data [36], mapping surface geology [37], financial markets [38,39], data mining and knowledge discovery [40][41][42], hyperspectral image classification [43][44][45], and classification of remote sensing data [46,47].…”
Section: Introduction
confidence: 99%
“…Such topological structures also reflect the characteristics of the input samples. The SOM has been employed in a wide range of application domains, including speech recognition, image data compression, robot control, pattern recognition, and medical diagnosis [13], [14], [17], [18], [20], [22], [24], [26], [27].…”
Section: Introduction
confidence: 99%
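Both SOM excerpts lean on the map's defining property: a winner-take-all update that also pulls the winner's grid neighbours toward the input, so nearby map units come to represent nearby regions of input space. A minimal self-organising map sketch under those standard assumptions; the grid size, decay schedules, and toy data are illustrative choices, not taken from the cited works:

```python
import numpy as np

def train_som(data: np.ndarray, grid: int = 8, epochs: int = 20,
              lr0: float = 0.5, sigma0: float = 3.0) -> np.ndarray:
    """Train a 2-D SOM; returns weights of shape (grid, grid, dim)."""
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    W = rng.normal(size=(grid, grid, dim))
    # Grid coordinates, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), axis=-1).astype(float)
    steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / steps
            lr = lr0 * (1 - frac)               # linearly decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3  # shrinking neighbourhood radius
            # Best-matching unit: closest weight vector to the input.
            d2 = ((W - x) ** 2).sum(axis=-1)
            bmu = np.unravel_index(np.argmin(d2), d2.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            g2 = ((coords - np.array(bmu, dtype=float)) ** 2).sum(axis=-1)
            h = np.exp(-g2 / (2 * sigma ** 2))
            # Topology-preserving update: grid neighbours move with the winner.
            W += lr * h[..., None] * (x - W)
            t += 1
    return W

# Toy usage: two Gaussian clusters map to separate regions of the grid.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
weights = train_som(X)
```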