2017
DOI: 10.1016/j.neucom.2016.12.091
Types of (dis-)similarities and adaptive mixtures thereof for improved classification learning

Cited by 22 publications (13 citation statements)
References 44 publications
“…If the number of samples N ≤ 1000, then the rank parameter k = 30, otherwise k = 100. The shift parameter λ is calculated on the low-rank approximated matrix, using a von Mises or power iteration [59] to determine the respective largest negative eigenvalue of the matrix. As shift parameter, we use the absolute value of λ for further steps.…”
Section: Advanced Shift Correction (mentioning)
confidence: 99%
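The quoted procedure can be read as: pick a rank k from the sample size, compute a rank-k approximation of the (possibly indefinite) proximity matrix, locate its largest negative eigenvalue λ by power (von Mises) iteration, and use |λ| as the shift. Below is a minimal NumPy sketch of that reading, not the cited authors' implementation: the helper names (power_iteration, most_negative_eigenvalue, shift_correct) are hypothetical, the dense eigendecomposition merely stands in for an actual truncated low-rank routine, and adding |λ| to the diagonal is just one common way to apply such a shift, since the excerpt does not spell out the "further steps".

```python
import numpy as np

def power_iteration(A, n_iter=1000, tol=1e-10, seed=0):
    """Von Mises / power iteration: eigenvalue of largest magnitude
    (with sign) of a symmetric matrix A, plus its eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = A @ v
        norm_w = np.linalg.norm(w)
        if norm_w == 0.0:          # A v = 0, eigenvalue 0
            return 0.0, v
        lam_new = v @ w            # Rayleigh quotient estimate
        v = w / norm_w
        if abs(lam_new - lam) < tol:
            return lam_new, v
        lam = lam_new
    return lam, v

def most_negative_eigenvalue(A):
    """Most negative ("largest negative") eigenvalue of symmetric A.
    If the dominant eigenvalue is already negative, it is the answer;
    otherwise run power iteration on A - lam_max * I, whose dominant
    eigenvalue is lam_min - lam_max."""
    lam_dom, _ = power_iteration(A)
    if lam_dom <= 0.0:
        return lam_dom
    mu, _ = power_iteration(A - lam_dom * np.eye(A.shape[0]))
    return mu + lam_dom

def shift_correct(S):
    """Hypothetical sketch of the quoted shift correction: rank k chosen
    by sample size, rank-k approximation, shift by |lambda| if the
    approximation has a negative eigenvalue."""
    n = S.shape[0]
    k = 30 if n <= 1000 else 100
    # rank-k approximation; a dense eigh stands in for a truncated routine
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(np.abs(vals))[::-1][:min(k, n)]
    S_k = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    lam = most_negative_eigenvalue(S_k)
    if lam < 0.0:                  # one common use of the shift |lambda|
        S_k = S_k + abs(lam) * np.eye(n)
    return S_k
```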
“…In the following, we expect that these proximities are at least symmetric, but do not necessarily obey metric properties. See e.g., [3] for an extended discussion.…”
Section: Introduction (mentioning)
confidence: 99%
“…where d is a given dissimilarity measure [11]. We denote w_{s(x)} as the winner prototype of the competition.…”
Section: Learning Vector Quantizers for Prototype-Based Classification: Standard Learning Vector Quantization (mentioning)
confidence: 99%
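For context, the winner prototype w_{s(x)} in LVQ is the prototype with minimal dissimilarity to the input, i.e. s(x) = argmin_j d(x, w_j). A minimal sketch of this winner determination, with squared Euclidean distance assumed as a stand-in for the measure d (winner_prototype is a hypothetical helper name, not an API from the cited work):

```python
import numpy as np

def winner_prototype(x, prototypes, dissimilarity=None):
    """Winner determination s(x) = argmin_j d(x, w_j); returns the index
    of the winner prototype w_{s(x)}. Squared Euclidean distance is only
    an illustrative stand-in for the dissimilarity d."""
    if dissimilarity is None:
        dissimilarity = lambda a, b: float(np.sum((a - b) ** 2))
    dists = [dissimilarity(x, w) for w in prototypes]
    return int(np.argmin(dists))

# usage: classify x by the label of its winner prototype
prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
proto_labels = np.array([0, 1])
x = np.array([0.9, 0.8])
print(proto_labels[winner_prototype(x, prototypes)])  # -> 1
```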
“…A disadvantage of MLP or deep architectures is that the trained networks are essentially black-box systems, i.e. the functionality of each subnet/layer is difficult to interpret. This is in contrast to prototype-based networks like learning vector quantization for classification (LVQ, [8], [9]), which are intuitively understandable due to their reference principle based on dissimilarity comparisons between data and representative prototypes [10], [11]. Further, in recent years, many improvements of the basic LVQ schemes were proposed, which have led to a strong mathematical foundation based on cost functions (Generalized LVQ, GLVQ [43]) as well as to sophisticated variants covering many advanced learning problems such as automatic dissimilarity adaptation and feature relevance learning, learning from imbalanced data, or optimization of statistical measures instead of simple accuracy [12], [13].…”
Section: Introduction (mentioning)
confidence: 99%
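The GLVQ cost function referred to here (Sato and Yamada's Generalized LVQ) sums a monotone function of the relative distance term mu(x) = (d+ - d-)/(d+ + d-) over the training data, where d+ is the dissimilarity to the closest prototype of the correct class and d- to the closest prototype of any other class. A small sketch of this term, again assuming a squared Euclidean dissimilarity for illustration (glvq_mu is a hypothetical name):

```python
import numpy as np

def glvq_mu(x, y, prototypes, proto_labels, dissimilarity=None):
    """Relative distance term of the GLVQ cost function:
    mu(x) = (d_plus - d_minus) / (d_plus + d_minus), with d_plus the
    dissimilarity to the closest prototype of the correct class y and
    d_minus the dissimilarity to the closest prototype of any other class.
    mu(x) < 0 means x is classified correctly; the GLVQ cost sums a
    monotone function of mu(x) over the training data."""
    if dissimilarity is None:  # illustrative default only
        dissimilarity = lambda a, b: float(np.sum((a - b) ** 2))
    dists = np.array([dissimilarity(x, w) for w in prototypes])
    d_plus = dists[proto_labels == y].min()
    d_minus = dists[proto_labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)
```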
“…The most prominent example is that kernels, as inner products in a Hilbert space, are not necessarily similarity measures. For a respective discussion we refer to [40,59].…”
Section: Beyond the Euclidean World - GLVQ with Non-Standard Dissimilarities (mentioning)
confidence: 99%