2018
DOI: 10.1007/978-3-319-91253-0_67
Probabilistic Learning Vector Quantization with Cross-Entropy for Probabilistic Class Assignments in Classification Learning

Cited by 14 publications (9 citation statements) · References 13 publications
“…Interpretability increases the trustworthiness and hence the acceptance of models for the potential users [27]. Further extensions improving transparency of the decision and already known for GLVQ approaches are the incorporation of reject options for ambiguous decisions or outliers as well as the use of interpretable probabilistic classifiers [133][134][135].…”
Section: Conclusion Remarks and Future Work
confidence: 99%
“…We note that the log-likelihood ratio used in RSLVQ can be extended to a more natural cross-entropy cost function employed in Probabilistic LVQ (PLVQ) (Villmann et al., 2018). In fact, PLVQ coincides with RSLVQ if the only stochastic component in the joint distribution p(x, y) over ℝ^n × {1, 2, ..., C} is the marginal over the inputs p(x) and the input-conditional class distributions p(y|x) are delta-functions (see (Villmann et al., 2018)). The framework of PLVQ is preferable in cases of genuine class uncertainty in (at least some regions of) the input space, leading to more representative class prototypes.…”
Section: Robust Soft Learning Vector Quantization
confidence: 99%
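As a brief illustration of the reduction described in this excerpt (the notation below is mine, not quoted from Villmann et al., 2018): PLVQ minimizes a cross-entropy between the true class distribution p(y|x) and the model posterior p̂(y|x); when p(y|x) is a delta function on a crisp label c(x), this cross-entropy collapses to the negative log-likelihood objective that RSLVQ optimizes.

```latex
% Sketch of the PLVQ cross-entropy cost (notation assumed for illustration):
% p(y|x) is the true class distribution, \hat{p}(y|x) the model posterior.
E_{\mathrm{PLVQ}}
  = -\int p(\mathbf{x}) \sum_{y=1}^{C} p(y \mid \mathbf{x})
      \log \hat{p}(y \mid \mathbf{x}) \, d\mathbf{x}

% With crisp labels, p(y|x) = \delta_{y,\,c(\mathbf{x})}, the sum collapses to
E_{\mathrm{PLVQ}}
  = -\int p(\mathbf{x}) \log \hat{p}\bigl(c(\mathbf{x}) \mid \mathbf{x}\bigr)\, d\mathbf{x}
% i.e. the expected negative log-likelihood that RSLVQ maximizes.
```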
“…Originally, the learning process of the prototypes was heuristically motivated; later it was refined to optimize a cost function which approximates the overall classification accuracy [32,33]. Probabilistic variants of LVQ are the Robust Soft LVQ (RSLVQ [34]), the soft nearest prototype classifier [35] or the probabilistic LVQ [36], to name just a few. See Fig.…”
Section: Vector Quantization and Clustering
confidence: 99%
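The accuracy-approximating cost function mentioned in this excerpt is the one introduced with Generalized LVQ (GLVQ); a standard statement of it, in notation of my choosing rather than the cited papers', is:

```latex
% GLVQ cost over training samples x_i: d^{+} is the distance to the closest
% prototype carrying the correct label, d^{-} the distance to the closest
% prototype with any wrong label; f is a monotonically increasing squashing
% function such as the logistic sigmoid.
E_{\mathrm{GLVQ}} = \sum_i f\bigl(\mu(\mathbf{x}_i)\bigr),
\qquad
\mu(\mathbf{x}_i)
  = \frac{d^{+}(\mathbf{x}_i) - d^{-}(\mathbf{x}_i)}
         {d^{+}(\mathbf{x}_i) + d^{-}(\mathbf{x}_i)} \in [-1, 1]
```

Since μ(x_i) is negative exactly when x_i is classified correctly, this cost is a smooth surrogate for the misclassification count, which is why it is said to approximate the overall classification accuracy.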
“…in the RSLVQ network, which obeys a Gaussian probability in the presence of the squared Euclidean distance. It turns out that the maximum log-likelihood loss function of RSLVQ is equivalent to the cross-entropy loss between p̂(x) and p(x) [36].…”
Section: LVQ Layers as Final Classification Layers
confidence: 99%
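A minimal numerical sketch of the Gaussian RSLVQ model this excerpt refers to (the function names and toy data below are mine; equal priors and a shared width σ are assumed): the class posterior is a softmax over negative squared Euclidean distances to the prototypes, so the negative log-likelihood is literally a cross-entropy loss.

```python
import numpy as np

def rslvq_posteriors(x, prototypes, proto_labels, n_classes, sigma=1.0):
    """Class posteriors p(y|x) under the RSLVQ Gaussian mixture
    (equal priors and a shared width sigma are assumed)."""
    sq_dists = np.sum((prototypes - x) ** 2, axis=1)   # squared Euclidean d(x, w_k)^2
    log_g = -sq_dists / (2.0 * sigma ** 2)             # Gaussian log-densities, up to a constant
    g = np.exp(log_g - log_g.max())                    # numerically stabilized softmax numerator
    posteriors = np.zeros(n_classes)
    for k, label in enumerate(proto_labels):
        posteriors[label] += g[k]                      # sum mixture components per class
    return posteriors / posteriors.sum()

def rslvq_loss(x, y, prototypes, proto_labels, n_classes, sigma=1.0):
    """-log p(y|x): the RSLVQ maximum-likelihood objective read as a cross-entropy."""
    return -np.log(rslvq_posteriors(x, prototypes, proto_labels, n_classes, sigma)[y])

# Toy usage: two prototypes per class in 2-D.
protos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = [0, 0, 1, 1]
print(rslvq_loss(np.array([0.1, 0.05]), 0, protos, labels, n_classes=2))
```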