2017 IEEE International Conference on Big Data (Big Data) 2017
DOI: 10.1109/bigdata.2017.8257992
VIGAN: Missing view imputation with generative adversarial networks

Abstract: In an era when big data are becoming the norm, there is less concern with the quantity but more with the quality and completeness of the data. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. Especially, when certain samples miss an entire view of data, it creates the missing view problem. Classic multiple imputations or matrix completion methods are hardly e…

Cited by 88 publications (49 citation statements)
References 35 publications
“…In an adversarial process, the generator learns to generate samples that are as close as possible to the data distribution, and the discriminator learns to distinguish whether an example is true or generated. Imputation approaches based on GANs include those in the work of Yoon et al (2018) ; Shang et al (2017) ; and Li et al (2019) . Here, we employ one of the most popular approaches of GAN-based imputation, Generative Adversarial Imputation Nets (GAIN) ( Yoon et al, 2018 ).…”
Section: Methods (mentioning)
confidence: 99%
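The GAIN-style imputation quoted above combines observed entries with generated ones via the observation mask. A minimal sketch of that data flow, where the neural generator is replaced by a column-mean stand-in purely for illustration (the mask algebra is GAIN's; the stand-in generator is an assumption):

```python
import numpy as np

# Toy data matrix with missing entries (NaN) and its observation mask.
X = np.array([[1.0, 2.0, np.nan],
              [4.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])
M = ~np.isnan(X)                      # mask: True where observed

# Stand-in "generator": column means over observed entries. A real GAIN
# generator is a neural net conditioned on the data, the mask, and noise.
col_means = np.nanmean(X, axis=0)

# Combined output, mirroring GAIN's m * x + (1 - m) * g(x, m, z):
# observed values are kept, generated values fill the gaps.
X_hat = np.where(M, X, col_means)

assert np.array_equal(X_hat[M], X[M])  # observed entries untouched
assert not np.isnan(X_hat).any()       # all gaps filled
```

The discriminator in GAIN would then be trained to predict which entries of `X_hat` were observed versus generated, pushing the generator toward the true data distribution.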
“…More recently, also ML approaches have increasingly been used for imputation. Popular methods include k-nearest neighbors ( k -NNs) ( Batista and Monard, 2003 ), matrix factorization ( Troyanskaya et al, 2001 ; Koren et al, 2009 ; Mazumder et al, 2010 ), random-forest–based approaches ( Stekhoven and Bühlmann, 2012 ), discriminative deep learning methods ( Biessmann et al, 2018 ), and generative deep learning methods ( Shang et al, 2017 ; Yoon et al, 2018 ; Li et al, 2019 ; Nazábal et al, 2020 ; Qiu et al, 2020 ).…”
Section: Introduction (mentioning)
confidence: 99%
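Among the classic ML imputers listed above, k-nearest-neighbors is the simplest to sketch: fill each missing cell with the mean of that feature over the k rows closest in the mutually observed features. The helper below is a hypothetical illustration, not code from any of the cited works:

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaNs using the mean over the k nearest rows (hypothetical sketch)."""
    X = X.astype(float)
    out = X.copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        # Squared distance to every other row on this row's observed features.
        d = np.array([np.nanmean((X[j, obs] - row[obs]) ** 2)
                      if j != i else np.inf
                      for j in range(len(X))])
        nbrs = np.argsort(d)[:k]
        for c in np.where(miss)[0]:
            vals = X[nbrs, c]
            vals = vals[~np.isnan(vals)]
            if vals.size:
                out[i, c] = vals.mean()
    return out

X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.0],
              [0.9, 1.9, 5.0],
              [10.0, 10.0, 10.0]])
X_filled = knn_impute(X, k=2)   # row 0's gap filled from its two near rows
```

Here row 0 is closest to rows 1 and 2, so its missing third feature becomes the mean of 3.0 and 5.0; the distant row 3 does not contribute.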
“…Mejjati et al [3] and Chen et al [38] improve results with attention, learning which areas of the images to focus on. Shang et al [214] improve results by feeding the mapped images into a denoising autoencoder. While CycleGAN and similar approaches use two generators, one for each mapping direction, Benaim et al [14] developed a method for one-sided mapping that maintains distances between pairs of samples when mapped from the source to the target domain rather than (or in addition to) using a cycle consistency loss, and Fu et al [70] developed an alternative one-sided mapping using a geometric constraint (e.g., vertical flipping or 90 degree rotation).…”
Section: Conditional GAN for Image-to-Image Translation (mentioning)
confidence: 99%
“…MisGAN, a GAN-based framework for learning from complex, high-dimensional incomplete data [38], was proposed for the task of imputing missing data. VIGAN [39] also deals with imputation, but in this case with scenarios in which certain samples miss an entire view of data. While Douzas and Bacao [37], Li et al [38], and Shang et al [39] generate specific parts of the data (imbalanced classes or missing data), our work deals with generating new energy data samples for training ML models.…”
Section: Generative Adversarial Networks (GANs) (mentioning)
confidence: 99%
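The missing-view setting the last excerpt describes differs from ordinary element-wise missingness: a sample either has a view completely or lacks it entirely. A small illustration of that structure (array names and sizes are made up for the example):

```python
import numpy as np

# Multi-view toy dataset: 5 samples, view A has 3 features, view B has 2.
# Samples 1 and 3 miss view B entirely (the missing-view setting VIGAN
# targets), rather than missing scattered individual entries.
rng_a = np.random.default_rng(1)
rng_b = np.random.default_rng(2)

view_a = rng_a.normal(size=(5, 3))
view_b = np.full((5, 2), np.nan)
view_b[[0, 2, 4]] = rng_b.normal(size=(3, 2))

# Row-wise missingness indicator: view B is present fully or not at all.
has_b = ~np.isnan(view_b).any(axis=1)
assert has_b.tolist() == [True, False, True, False, True]
```

Classic element-wise imputers have no observed entries of view B for samples 1 and 3 to condition on, which is why VIGAN learns a cross-view mapping from view A instead.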