Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/595

Disentangled Variational Autoencoder based Multi-Label Classification with Covariance-Aware Multivariate Probit Model

Abstract: Multi-label classification is the challenging task of predicting the presence and absence of multiple targets, involving representation learning and label correlation modeling. We propose a novel framework for multi-label classification, Multivariate Probit Variational AutoEncoder (MPVAE), that effectively learns latent embedding spaces as well as label correlations. MPVAE learns and aligns two probabilistic embedding spaces for labels and features respectively. The decoder of MPVAE takes in the sample…
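The abstract outlines the architecture only at a high level, so the following is a minimal, hedged sketch (in PyTorch) of that kind of design: two probabilistic encoders whose Gaussian codes are aligned, a shared decoder producing per-label scores, and a covariance term over the labels realised as a Monte-Carlo multivariate probit. The layer sizes, latent dimension, low-rank-plus-diagonal covariance parameterisation, and the sigmoid surrogate for the probit indicator are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MPVAESketch(nn.Module):
    """Toy MPVAE-style model: probabilistic feature/label encoders, a shared
    decoder, and a covariance-aware (multivariate-probit-like) output layer."""

    def __init__(self, n_features, n_labels, latent_dim=64, rank=8):
        super().__init__()
        # Each encoder outputs mean and log-variance of a diagonal Gaussian code.
        self.feat_enc = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
        self.label_enc = nn.Sequential(
            nn.Linear(n_labels, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
        # Shared decoder maps a latent sample to per-label scores (probit means).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_labels))
        # Label covariance Sigma ~= B B^T + diag(softplus(d)) captures pairwise correlations.
        self.cov_factor = nn.Parameter(0.01 * torch.randn(n_labels, rank))
        self.cov_diag = nn.Parameter(torch.zeros(n_labels))

    @staticmethod
    def _split(h):
        mu, logvar = h.chunk(2, dim=-1)
        return mu, logvar

    def forward(self, x, y=None, n_samples=16):
        mu_x, logvar_x = self._split(self.feat_enc(x))
        # Reparameterised sample from the feature-space Gaussian code.
        z = mu_x + torch.randn_like(mu_x) * (0.5 * logvar_x).exp()
        scores = self.decoder(z)                                  # (batch, n_labels)
        # Monte-Carlo probit: add correlated noise eps ~ N(0, Sigma) and use a
        # sigmoid as a smooth surrogate for the 'score + eps > 0' indicator.
        b, l = scores.shape
        low_rank = torch.randn(n_samples, b, self.cov_factor.shape[1]) @ self.cov_factor.t()
        diag = torch.randn(n_samples, b, l) * F.softplus(self.cov_diag)
        probs = torch.sigmoid(scores + low_rank + diag).mean(0)   # ~ P(y_j = 1 | x)
        kl = None
        if y is not None:
            # Training-time alignment of the label-space and feature-space codes.
            mu_y, logvar_y = self._split(self.label_enc(y))
            kl = 0.5 * (logvar_x - logvar_y
                        + (logvar_y.exp() + (mu_y - mu_x) ** 2) / logvar_x.exp()
                        - 1).sum(-1).mean()
        return probs, kl
```

In a sketch like this, training would typically minimise a binary cross-entropy on the predicted probabilities plus the KL alignment term, while at test time only the feature-encoder branch is needed.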

Cited by 29 publications (47 citation statements). References 18 publications.

“…The integration of AI techniques in the present work builds upon the foundation of our prior work developing general multi-label classification and multi-target regression approaches that were initially motivated by ecology applications. These prior works demonstrated the utility of multivariate Gaussian used for pairwise correlation learning,[36] model alignment with a VAE during training,[37,39] and high-order correlation learning via an attention graph neural network.[38] Our careful crafting of model architecture is particularly motivated by the need to make predictions in new composition spaces.…”
Section: Discussion (mentioning)
confidence: 99%
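The pairwise-correlation idea mentioned in this excerpt can be made concrete: once a label covariance is parameterised (here, hypothetically, as the low-rank-plus-diagonal matrix Sigma = B Bᵀ + diag(softplus(d)) used in the sketch above), the learned pairwise label correlations can be read off by normalising it. The function name and parameterisation below are illustrative assumptions, not the cited works' code.

```python
import torch
import torch.nn.functional as F


def label_correlations(cov_factor: torch.Tensor, cov_diag: torch.Tensor) -> torch.Tensor:
    """Turn Sigma = B B^T + diag(softplus(d)) into a Pearson-style correlation matrix.

    cov_factor: (n_labels, rank) low-rank factor B
    cov_diag:   (n_labels,) unconstrained diagonal parameters d
    """
    sigma = cov_factor @ cov_factor.t() + torch.diag(F.softplus(cov_diag))
    std = sigma.diagonal().sqrt()
    return sigma / (std.unsqueeze(0) * std.unsqueeze(1))


# Toy usage: inspect which label pairs the model treats as strongly co-occurring.
corr = label_correlations(torch.randn(5, 2), torch.zeros(5))
print(corr)
```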
“…We tackle this challenge with correlation learning, which has been demonstrated to enhance multi-label classification and multi-target regression. [36][37][38][39] Since the multiple properties being predicted may not be explicitly correlated, we developed a framework to learn correlations in latent embeddings of the multiple properties.…”
Section: Introduction (mentioning)
confidence: 99%
“…The integration of AI techniques in the present work builds upon the foundation of our prior work developing general multi-label classification and multi-target regression approaches that were initially motivated by ecology applications. These prior works demonstrated the utility of multivariate Gaussian used for pairwise correlation learning,[35] model alignment with a VAE during training,[36,38] and high-order correlation learning via an attention graph neural network.[37] Our careful crafting of model architecture is particularly motivated by the need to make predictions in new composition spaces.…”
Section: Discussion (mentioning)
confidence: 99%
“…We tackle this challenge with correlation learning, which has been demonstrated to enhance multilabel classification and multi-target regression. [35][36][37][38] Since the multiple properties being predicted may not be explicitly correlated, we developed a framework to learn correlations in latent embeddings of the multiple properties.…”
Section: Introduction (mentioning)
confidence: 99%
“…Even if the corresponding feature and label codes are close, we cannot guarantee the decoded targets are similar. To address this concern, MPVAE (Bai, Kong, and Gomes 2020) proposes to replace the deterministic latent space with a probabilistic subspace under a variational autoencoder (VAE) framework. The Gaussian latent spaces are aligned with KL-divergence, and the sampling process enforces smoothness.…”
Section: Introduction (mentioning)
confidence: 99%
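To make the statement above concrete, here is a small sketch of the two operations it refers to: the closed-form KL divergence that aligns two diagonal-Gaussian latent codes, and the reparameterised sampling that keeps the latent space smooth. The function names and the KL direction (pulling the label-space code toward the feature-space code) are illustrative assumptions rather than the paper's exact choices.

```python
import torch


def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ), summed over dims."""
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1).sum(-1)


def reparameterise(mu, logvar):
    """Draw z = mu + sigma * eps so gradients flow through the sampling step."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()


# Toy usage: align a label-space code with a feature-space code for one batch.
mu_y, logvar_y = torch.zeros(4, 8), torch.zeros(4, 8)
mu_x, logvar_x = torch.randn(4, 8), torch.zeros(4, 8)
align_loss = gaussian_kl(mu_y, logvar_y, mu_x, logvar_x).mean()
z = reparameterise(mu_x, logvar_x)   # sample passed on to the shared decoder
```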