2001
DOI: 10.1117/1.1412423
Practical methods for speeding-up the pairwise nearest neighbor method

Abstract: The pairwise nearest neighbor (PNN) method is a simple and well-known method for codebook generation in vector quantization. In its exact form, it provides a good-quality codebook but at the cost of high run time. A fast exact algorithm was recently introduced to implement the PNN an order of magnitude faster than the original O(N³K) time algorithm. The run time, however, is still lower bounded by O(N²K), and therefore, additional speed-ups may be required in applications where time is an important factor.…
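The exact PNN method sketched in the abstract starts with every training vector as its own cluster and repeatedly merges the pair whose merge would increase total squared error the least, until K codevectors remain. A minimal brute-force Python sketch, assuming a Ward-style merge cost; the function name `pnn_codebook` is illustrative and this is the naive quadratic-scan variant, not the authors' optimized implementation:

```python
import numpy as np

def pnn_codebook(X, K):
    """Brute-force exact PNN codebook generation (illustrative sketch).

    Each iteration scans all cluster pairs for the minimum merge cost
    and merges that pair, which is what makes the naive algorithm slow.
    """
    # Each cluster is a (centroid, size) pair; start with one per vector.
    clusters = [(x.astype(float), 1) for x in X]
    while len(clusters) > K:
        best, pair = None, None
        for i in range(len(clusters)):
            ci, ni = clusters[i]
            for j in range(i + 1, len(clusters)):
                cj, nj = clusters[j]
                # Ward-style cost: increase in total squared error
                # caused by merging clusters i and j.
                cost = ni * nj / (ni + nj) * np.sum((ci - cj) ** 2)
                if best is None or cost < best:
                    best, pair = cost, (i, j)
        i, j = pair
        (ci, ni), (cj, nj) = clusters[i], clusters[j]
        # Merged centroid is the size-weighted mean of the two centroids.
        merged = ((ni * ci + nj * cj) / (ni + nj), ni + nj)
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return np.array([c for c, _ in clusters])
```

Scanning all O(N²) pairs on each of the N−K merge steps, with K-dimensional distance computations, is what yields the cubic behavior the paper's speed-ups target.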

Cited by 11 publications (10 citation statements) | References 14 publications
“…All computing is performed on an Intel Core 2 Quad 2.4 GHz PC with 2 GB of memory. In examples 2-6, the proposed algorithm DKNNA is compared with the FPNN [7] and the DLA [8] in terms of the number of distance calculations, computing time, and mean square error, using three different nearest-neighbor search algorithms: full search (FULL), PDS + MPS + Lazy [27], and fast search (FS) [18,21]. It should be noted that the proposed method will not generate exactly the same results as Ward's method and the FPNN, although the differences are no longer significant.…”
Section: Results
confidence: 99%
“…Table 3 shows the mean square errors of the FPNN, DLA, and DKNNA using three different search algorithms: full search, MPS + PDS + Lazy [27], and fast search [18,21]. It can be seen from Table 3 that there is little difference between the mean square errors of the FPNN and DKNNA, since there are many cluster pairs with the same cluster distance in the early stage of a cluster merge.…”
Section: Example 2: Data set generated from the image “Lena”
confidence: 95%