2020
DOI: 10.1007/s11263-020-01390-3
Learning Extremal Representations with Deep Archetypal Analysis

Abstract: Archetypes represent extreme manifestations of a population with respect to specific characteristic traits or features. In linear feature space, archetypes approximate the data convex hull allowing all data points to be expressed as convex mixtures of archetypes. As mixing of archetypes is performed directly on the input data, linear Archetypal Analysis requires additivity of the input, which is a strong assumption unlikely to hold e.g. in case of image data. To address this problem, we propose learning an app…

Cited by 13 publications (21 citation statements)
References 34 publications
“…It has found widespread applications in recent years. Archetypes are derived as solutions to an iterative nonlinear optimization algorithm that minimizes the RSS (residual sum of squares), i.e., the average distance between observations and archetypes [2, 3, 4].…”
Section: Methods
confidence: 99%
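The RSS objective named in the statement above can be illustrated numerically. This is a minimal sketch, not the cited papers' implementation; the function name `archetypal_rss` and the toy data are hypothetical:

```python
import numpy as np

def archetypal_rss(X, Z, A):
    """Residual sum of squares between observations X (n x d) and
    their reconstructions A @ Z from archetypes Z (k x d), where
    each row of A (n x k) holds convex mixture weights."""
    return np.sum((X - A @ Z) ** 2)

# Toy example: 10 random 2-D points, 3 fixed candidate archetypes.
rng = np.random.default_rng(0)
X = rng.random((10, 2))
Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = rng.random((10, 3))
A /= A.sum(axis=1, keepdims=True)  # normalize rows to convex weights

rss = archetypal_rss(X, Z, A)
```

An alternating optimizer would iteratively update `A` (and the weights defining `Z`) to drive this quantity down; when every observation is itself an archetype and `A` is the identity, the RSS is exactly zero.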
“…The optimization procedure also concerns a hyperparameter: the number of archetypes. Thanks to the development of computational tools, archetypal analysis has gained much attention in recent years [4, 5, 6].…”
Section: Methods
confidence: 99%
“…Several autoencoder architectures similar to our work have been previously proposed. Some examples include the Dirichlet Variational Autoencoder [8], Deep Archetypal Analysis (DeepAA) [9], and Genotype Convolutional Autoencoder (GCAE) [10]. Such networks encode each sample as a point within a convex hull, or as a set of proportions and probabilities.…”
Section: Related Work
confidence: 99%
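Encoding a sample "as a point within a convex hull, or as a set of proportions" typically means mapping unconstrained encoder outputs onto the probability simplex, e.g. with a softmax. This is a minimal sketch under that assumption (the function names and archetype coordinates are hypothetical, not taken from the cited architectures):

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax: maps a real vector to the simplex."""
    e = np.exp(v - v.max())
    return e / e.sum()

def encode_as_mixture(logits, archetypes):
    """Turn raw encoder outputs into convex mixture weights, then
    place the latent code inside the hull spanned by the archetypes."""
    w = softmax(logits)           # non-negative weights summing to 1
    return w, w @ archetypes      # convex combination of archetype rows

# Hypothetical latent simplex with 3 archetypes in 2-D.
Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w, z = encode_as_mixture(np.array([0.2, -1.0, 0.5]), Z)
```

Because the weights are non-negative and sum to one, the latent code `z` is guaranteed to lie inside the simplex spanned by the archetypes, regardless of the encoder's raw outputs.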
“…However, it was shown that inductive bias on the dataset and learning approach is necessary to obtain disentanglement [25]. Inductive biases allow us to express assumptions about the generative process and to prioritise different solutions not only in terms of disentanglement [5,13,21,35,44], but also in terms of constrained latent space structures [15,16], preservation of causal relationships [40], or interpretability [45].…”
confidence: 99%