2002
DOI: 10.1016/s0167-8655(02)00024-7

Dissimilarity representations allow for building good classifiers

Cited by 193 publications (130 citation statements)
References 5 publications
“…, I_n} by their pair-wise dissimilarities d(I_i, I_j) and build classifiers on the obtained dissimilarity representation [8]. From the matrix of pair-wise image dissimilarities D = [d(I_i, I_j)]_{n×n} computed from the set of images, there exist different ways of arriving at a feature vector space where traditional vector space methods can be applied.…”
Section: Image Dissimilarity Space
confidence: 99%
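The construction in the excerpt above can be sketched directly: compute the pairwise dissimilarity matrix D = [d(I_i, I_j)] and treat each row as the feature vector representing one object. A minimal sketch; the toy "images" and the choice of Euclidean distance as d are illustrative assumptions, not taken from the cited work.

```python
import math

def dissimilarity_matrix(objects, d):
    """D = [d(I_i, I_j)] over all pairs of objects; row i of D is the
    new feature vector representing object I_i in the dissimilarity space."""
    return [[d(a, b) for b in objects] for a in objects]

# Euclidean distance as the dissimilarity measure (an assumption)
def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# toy "images" as short intensity vectors
imgs = [[0, 0, 1], [0, 1, 1], [5, 5, 5]]
D = dissimilarity_matrix(imgs, euclid)
print(len(D), len(D[0]))  # n x n; the diagonal is zero since d(I, I) = 0
```

Any vector-space method can then be applied to the rows of D, which is one of the "different ways of arriving at a feature vector space" the excerpt mentions.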
“…Further, problems where only a global image label is available are handled automatically, since the classification is done at the image level. The images are mapped into a dissimilarity space [8] in which a standard vector-space classifier can be directly applied, and the soft output of this classifier is used as a quantitative measure of disease. The measure used to compute the dissimilarity between two images is the crucial component of this approach, and we evaluate four different image dissimilarity measures in the experiments.…”
Section: Introduction
confidence: 99%
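The pipeline described above (map images into a dissimilarity space, apply a vector-space classifier, use its soft output as the quantitative measure) can be illustrated with a deliberately simple classifier. The soft k-NN below is an assumed stand-in; the cited work does not specify this particular classifier.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def soft_knn(train_vecs, labels, test_vec, k=3):
    """Soft classifier output in the dissimilarity space: the fraction of
    positive labels among the k nearest training vectors. This continuous
    value is the kind of soft output usable as a quantitative measure."""
    order = sorted(range(len(train_vecs)),
                   key=lambda i: euclid(train_vecs[i], test_vec))
    return sum(labels[i] for i in order[:k]) / k

# rows of a dissimilarity matrix act as the feature vectors (toy values)
train = [[0.0, 1.0], [0.2, 0.9], [1.0, 0.1], [0.9, 0.0]]
y = [0, 0, 1, 1]
print(soft_knn(train, y, [0.95, 0.05], k=3))  # fraction of positive neighbors
```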
“…However, while the advantage of considering the notion of similarity between datasets instead of between feature vectors has been recognized [2][3][4], attempts at formulating such measures have been mostly application dependent, often relying heavily on heuristics. A notable exception is a proposed universal normalized compression metric (NCM) based on Kolmogorov's notion of algorithmic complexity [5].…”
Section: The Hidden Models
confidence: 99%
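The compression-based measure referred to above is commonly computed as NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length of its argument. A minimal sketch, assuming zlib as an (imperfect) practical stand-in for an ideal compressor:

```python
import zlib

def clen(data: bytes) -> int:
    """Compressed length C(data), approximated with zlib at max effort."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for very similar inputs,
    near 1 (or slightly above, due to compressor overhead) for unrelated ones."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abracadabra" * 20
b_ = b"abracadabra" * 19 + b"zzz"   # nearly identical to a
c = bytes(range(200))               # unrelated ramp of byte values
print(ncd(a, b_), ncd(a, c))        # similar strings compress well together
```

Because no real compressor reaches the Kolmogorov limit, the computed values only approximate the theoretical metric, but the ordering (similar pairs score lower) is what applications rely on.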
“…, p_r}, is needed. The dissimilarity representation allows individual feature patterns to be represented by pairwise dissimilarities computed between examples from the training set T and objects from the representation set R. The dissimilarity vectors can thus be interpreted as numerical features that describe the relation of each object to the rest of the objects [3].…”
Section: Introduction
confidence: 99%
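The representation-set construction described above maps each training object t ∈ T to the vector [d(t, p_1), …, d(t, p_r)] of its dissimilarities to the prototypes in R = {p_1, …, p_r}. A minimal sketch; the L1 measure and the choice of R as a subset of T are illustrative assumptions.

```python
def dissimilarity_vectors(T, R, d):
    """Each object t in the training set T becomes the r-dimensional
    vector [d(t, p_1), ..., d(t, p_r)] against the representation set R."""
    return [[d(t, p) for p in R] for t in T]

# L1 distance as the dissimilarity measure (an assumption)
def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

T = [[0, 0], [1, 0], [0, 2], [3, 3]]
R = T[:2]  # prototypes drawn from T itself, one common choice
X = dissimilarity_vectors(T, R, l1)
print(X)  # one r-dimensional numerical feature vector per training object
```

The size r of the representation set controls the dimensionality of the resulting feature space, which is why R is typically much smaller than T.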