2019
DOI: 10.1609/aaai.v33i01.33014114
Multi-Source Neural Variational Inference

Abstract: Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing…
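As a rough illustration of the framework sketched in the abstract (not the authors' actual code), the snippet below shows per-source encoders that each output a Gaussian posterior over a shared latent variable; combining them via a product of experts is one common choice, and all module names, dimensions, and the combination rule here are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch): one encoder per information source,
# each producing a Gaussian posterior over a shared latent variable z.
import torch
import torch.nn as nn

class SourceEncoder(nn.Module):
    def __init__(self, input_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.log_var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.log_var(h)

def product_of_experts(mus, log_vars):
    # Precision-weighted combination of the per-source Gaussian posteriors;
    # the paper's exact combination rule may differ.
    precisions = [torch.exp(-lv) for lv in log_vars]
    total_precision = sum(precisions)
    combined_var = 1.0 / total_precision
    combined_mu = combined_var * sum(p * m for p, m in zip(precisions, mus))
    return combined_mu, torch.log(combined_var)

# Usage: two sources with different dimensionalities sharing a 16-d latent space.
enc_a = SourceEncoder(input_dim=50, latent_dim=16)
enc_b = SourceEncoder(input_dim=20, latent_dim=16)
x_a, x_b = torch.randn(8, 50), torch.randn(8, 20)
mu, log_var = product_of_experts(*zip(enc_a(x_a), enc_b(x_b)))
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
```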

Cited by 31 publications (45 citation statements)
References: 16 publications
“…Schema’s measure of inter-modality alignment is based on the Pearson correlation of distances, which is optimized via a quadratic programming algorithm, for which further details are provided in “ Methods .” An important advantage of Schema’s algorithm is that it returns coefficients that weight features in the primary dataset based on their agreement with the secondary modalities (for example, weighting genes in a primary RNA-seq dataset that best agree with secondary developmental age information). These feature weights enable greater interpretability into data transformations that is not immediately achievable by more complex, nonlinear transformation approaches [ 27 – 33 ]. We demonstrate this interpretability throughout our applications of Schema.…”
Section: Results
confidence: 99%
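For intuition about the alignment criterion described in the statement above (a sketch, not Schema's actual implementation), the snippet below computes the Pearson correlation between pairwise distances in a feature-weighted primary dataset and distances in a secondary modality; the weight vector `w` stands in for the interpretable feature coefficients mentioned in the citation, which Schema would instead optimize via quadratic programming.

```python
# Illustrative sketch of a distance-correlation alignment score (not Schema's code).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def distance_correlation(primary, secondary, weights):
    # Scale each primary feature by sqrt(weight) so squared Euclidean distances
    # are weighted by `weights`, then correlate the two pairwise-distance vectors.
    weighted_primary = primary * np.sqrt(weights)
    d_primary = pdist(weighted_primary, metric="sqeuclidean")
    d_secondary = pdist(secondary, metric="sqeuclidean")
    return pearsonr(d_primary, d_secondary)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))                    # e.g. primary RNA-seq features
Y = X[:, :5] + 0.1 * rng.normal(size=(100, 5))    # secondary modality tied to 5 features
w = np.ones(30)                                   # uniform weights for illustration
print(distance_correlation(X, Y, w))
```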
“…In general, synthesis of multimodal data can also be done by statistical techniques like canonical correlation analysis (CCA) or deep learning architectures that represent multiple modalities in a shared latent space [ 27 – 33 ]. A key conceptual advance of Schema over these approaches is its emphasis on limiting the distortion of the high-confidence reference modality, allowing it to extract signal from the lower-confidence secondary modalities without overfitting to their noise and artifacts.…”
Section: Results
confidence: 99%
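As a point of reference for the CCA-based alternative mentioned in the statement above (illustrative only, not tied to any cited implementation), scikit-learn's `CCA` projects two modalities into a shared low-dimensional space:

```python
# Illustrative use of CCA to embed two modalities in a shared space.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))                                                 # modality 1
Y = X[:, :10] @ rng.normal(size=(10, 15)) + 0.1 * rng.normal(size=(200, 15))   # modality 2

cca = CCA(n_components=5)
X_shared, Y_shared = cca.fit_transform(X, Y)   # paired 5-d embeddings of both modalities
```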
“…Some recent efforts propose to use mixture models based on VAEs for learning complex structures behind the data. Kurle et al. [18] introduced a mixture model, called Multi-Source Neural Variational Inference (MSVI), aiming to capture probabilistic characteristics from multiple sources. However, MSVI relies on multiple source domains and would not encourage disentanglement between encoding distributions.…”
Section: Deep Mixture Models Using VAEs
confidence: 99%
“…1) We propose an efficient network architecture design for the VAE mixture model. Unlike in other mixture models using deep networks for the decoder [12], [18], MVAE implements the decoder of each component as a simple non-linear mapping requiring few parameters and low computational costs. A Dirichlet sampling process is used for assigning mixing parameters for each component.…”
Section: Introduction
confidence: 99%
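To make the design choice in this statement concrete (a hedged sketch, not the MVAE authors' code), the snippet below draws per-sample mixing parameters from a Dirichlet distribution and combines the outputs of lightweight single-layer decoders; the component count, dimensions, and mixture-averaging step are assumptions for illustration.

```python
# Sketch of Dirichlet-sampled mixing weights over simple per-component decoders
# (assumed PyTorch, illustrative only).
import torch
import torch.nn as nn

K, latent_dim, out_dim = 4, 16, 50
decoders = nn.ModuleList(
    [nn.Sequential(nn.Linear(latent_dim, out_dim), nn.Tanh()) for _ in range(K)]
)
dirichlet = torch.distributions.Dirichlet(torch.ones(K))

z = torch.randn(8, latent_dim)                                # latent codes for a batch of 8
pi = dirichlet.sample((8,))                                   # mixing parameters, shape (8, K)
outputs = torch.stack([dec(z) for dec in decoders], dim=1)    # (8, K, out_dim)
x_hat = (pi.unsqueeze(-1) * outputs).sum(dim=1)               # mixture reconstruction, (8, out_dim)
```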