2021
DOI: 10.1371/journal.pcbi.1009086

Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data

Abstract: Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering has gained popularity due to its flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high e…

Cited by 24 publications (24 citation statements)
References 33 publications
“…Any arbitrary (continuous) distribution can be modelled by using a mixture of a sufficient number of Gaussians, with appropriate mixture weights [3]. Mixtures of Gaussians have been used to model posterior distributions in variational autoencoders for semi-supervised classification [19] and clustering [8], [16].…”
Section: Mixture Of Gaussiansmentioning
confidence: 99%
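The mixture-of-Gaussians claim quoted above can be illustrated with a minimal numpy sketch; the weights, means, and standard deviations below are arbitrary values chosen only for illustration, not taken from any of the cited models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D mixture of two Gaussians; weights, means, and standard
# deviations are made-up values chosen only for this sketch.
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 1.5])
stds = np.array([0.5, 1.0])

def mixture_pdf(x):
    # Density sum_k w_k * N(x; mu_k, sigma_k^2), vectorized over x.
    comps = np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2)
    comps /= stds * np.sqrt(2.0 * np.pi)
    return comps @ weights

# Sampling: pick a component index by its mixture weight, then draw
# from that component's Gaussian.
k = rng.choice(2, size=10_000, p=weights)
samples = rng.normal(means[k], stds[k])

# Sanity check: the mixture density should integrate to ~1 on a wide grid.
grid = np.linspace(-8.0, 8.0, 4001)
mass = np.sum(mixture_pdf(grid)) * (grid[1] - grid[0])
```

Adding more components (and fitting the weights) lets this same construction approximate increasingly irregular densities, which is the property the quoted statement relies on.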
“…VAEs reduce the dimensionality of input data to arbitrary dimensions [16] and have previously been used for clustering cell data. [8,9]…”
Section: Unsupervised Clustering Using Deep Learningmentioning
confidence: 99%
“…These issues motivated us to look into deep learning models for clustering, namely a variational autoencoder (VAE), which has previously shown success in clustering cell data. [8,9]…”
Section: Introductionmentioning
confidence: 99%
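A minimal sketch of the VAE machinery these statements refer to — the reparameterization trick and the KL term against a standard-normal prior. The `mu` and `log_var` values below are hypothetical encoder outputs invented for illustration; in a real VAE they would come from a learned network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoder outputs for a batch of 4 points mapped into a
# 2-D latent space (values are illustrative, not from a trained model).
mu = np.array([[0.0, 1.0], [2.0, -1.0], [0.5, 0.5], [-1.0, 0.0]])
log_var = np.full((4, 2), -2.0)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# which keeps the sampling step differentiable in mu and log_var.
eps = rng.standard_normal(mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Per-sample KL divergence of N(mu, diag(sigma^2)) from the N(0, I) prior:
# KL = -0.5 * sum_d (1 + log_var - mu^2 - exp(log_var))
kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1)
```

The low-dimensional `z` is what clustering methods operate on; mixture-based variants of the prior (as in the quoted works) assign each sample to the component most responsible for its latent code.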
“…Unsupervised learning for multivariate time series: Recent work on unsupervised learning for multivariate time series has predominantly employed autoencoders, trained with an input reconstruction objective and implemented either as Multi-Layer Perceptrons (MLP) or RNN (most commonly, LSTM) networks. As interesting variations of the former, Kopf et al (2019) and additionally incorporated Variational Autoencoding into this approach, but focused on clustering and the visualization of shifting sample topology with time. As an example of the latter, Malhotra et al (2017) presented a multi-layered RNN sequence-to-sequence autoencoder, while Lyu et al (2018) developed a multi-layered LSTM with an attention mechanism and evaluated both an input reconstruction (autoencoding) as well as a forecasting loss for unsupervised representation learning of Electronic Healthcare Record multivariate time series.…”
Section: Related Workmentioning
confidence: 99%