2019
DOI: 10.1007/s10994-019-05853-8

Online Bayesian max-margin subspace learning for multi-view classification and regression

Abstract: Multi-view data have become increasingly popular in many real-world applications where data are generated from different information channels or different views, such as image + text, audio + video, and webpage + link data. The last decades have witnessed a number of studies devoted to multi-view learning algorithms, especially predictive latent subspace learning approaches, which aim at obtaining a subspace shared by multiple views and then learning models in that shared subspace. However, few efforts have been …
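As a rough illustration of the shared-subspace idea described in the abstract (not the Bayesian max-margin algorithm proposed in the paper), the sketch below projects two synthetic views into a common subspace with CCA and then fits a linear max-margin classifier there. All data, dimensions, and variable names are invented for the example.

# Generic sketch, assuming scikit-learn and NumPy are available.
# This is NOT the paper's online Bayesian method; it only illustrates
# "learn a shared subspace, then classify in it" on synthetic two-view data.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d1, d2, k = 200, 50, 40, 5           # samples, view dims, subspace dim
z = rng.normal(size=(n, k))             # shared latent factors (synthetic)
X1 = z @ rng.normal(size=(k, d1)) + 0.1 * rng.normal(size=(n, d1))  # view 1
X2 = z @ rng.normal(size=(k, d2)) + 0.1 * rng.normal(size=(n, d2))  # view 2
y = (z[:, 0] > 0).astype(int)           # labels driven by the latent space

cca = CCA(n_components=k).fit(X1, X2)   # shared subspace from both views
Z1, Z2 = cca.transform(X1, X2)
clf = LinearSVC(C=1.0).fit(np.hstack([Z1, Z2]), y)   # max-margin classifier
print("train accuracy:", clf.score(np.hstack([Z1, Z2]), y))

CCA and a hinge-loss SVM are used here only because they are standard stand-ins for "shared subspace" and "max margin"; the paper itself treats both steps jointly and in a Bayesian, online fashion.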

Cited by 8 publications (5 citation statements)
References 39 publications
“…The network model can integrate multi-view and feature data at multiple levels, and the calculation of the relevant parameters is repeatable and consistent with common sense; however, the model is complex and its computations are inefficient. He et al. [38] presented an online Bayesian multi-view learning algorithm that learns the predictive subspace under the maximum-margin principle: it defines a latent margin loss and, using pseudo-likelihood and data augmentation, recasts the resulting learning problem within a variational Bayesian framework. The variational approximate posterior is inferred from past samples.…”
Section: Multiview-based
confidence: 99%
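To make the online, max-margin flavor described in the citation statement above concrete, here is a minimal sketch that processes samples one at a time with a stochastic hinge-loss update on a shared-subspace representation. It deliberately replaces the paper's Bayesian machinery (pseudo-likelihood, data augmentation, variational posterior updates) with a plain subgradient step, so it is an assumption-laden illustration rather than the authors' method; the function name and all parameters are invented.

# Minimal sketch of an *online* max-margin update in a (pre-learned) shared
# subspace. A plain stochastic hinge-loss subgradient step stands in for the
# paper's Bayesian updates, purely to show one-sample-at-a-time learning.
import numpy as np

def online_hinge_update(w, z_t, y_t, lr=0.01, reg=1e-3):
    """One streaming update: y_t in {-1, +1}, z_t is the shared-subspace code."""
    margin = y_t * (w @ z_t)
    grad = reg * w - (y_t * z_t if margin < 1.0 else 0.0)   # hinge subgradient
    return w - lr * grad

rng = np.random.default_rng(1)
k = 5
w_true = rng.normal(size=k)              # synthetic "true" direction
w = np.zeros(k)
for t in range(2000):                    # stream of past samples
    z_t = rng.normal(size=k)
    y_t = 1 if w_true @ z_t > 0 else -1
    w = online_hinge_update(w, z_t, y_t)
print("alignment with true direction:",
      w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))

In the cited algorithm, the per-sample update maintains an approximate posterior over the subspace and classifier rather than a single weight vector; the sketch keeps only the online, margin-driven structure of that idea.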
“…In multiple kernel learning, [5] proposed a two-view Support Vector Machine method (SVM-2K) that finds multiple kernels maximizing the correlation of the two views; [28] proposed a co-regularization method to jointly regularize the two Reproducing Kernel Hilbert Spaces associated with the two views; [1] proposed Deep Canonical Correlation Analysis to find two deep networks whose output layers are maximally correlated. As for subspace learning, the authors of [14] proposed a deep multi-view robust representation learning algorithm based on auto-encoders to learn a shared representation from multi-view observations; [11] proposed online Bayesian subspace multi-view learning by modeling the variational approximate posterior inferred from past samples; [51] proposed M2VW for the multi-view multi-worker learning problem by leveraging the structural information between multiple views and multiple workers; [30] proposed the CR-GAN method to learn a complete representation for multi-view generation in the adversarial setting through the collaboration of two learning pathways in a parameter-sharing manner. Different from [30, 41–44, 50–53], in this paper we focus on the multi-view classification problem and aim to extract both the shared information and the view-specific information in the adversarial setting, while a view-consistency constraint with label information is used to further regularize the generated representation and improve predictive performance.…”
Section: Multi-view Learning and Interpretable Learning
confidence: 99%
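The co-regularization idea mentioned in the statement above (two per-view predictors encouraged to agree on the same samples) can be sketched with linear models in place of the kernel machinery of the cited work. Everything below, including the synthetic data and the agreement weight mu, is an illustrative assumption rather than the cited algorithm.

# Rough sketch of co-regularization: fit one linear predictor per view,
# with a penalty that makes their outputs agree on shared samples.
import numpy as np

rng = np.random.default_rng(2)
n, d1, d2 = 300, 30, 20
X1, X2 = rng.normal(size=(n, d1)), rng.normal(size=(n, d2))
y = np.sign(X1[:, 0] + X2[:, 0] + 0.1 * rng.normal(size=n))

w1, w2 = np.zeros(d1), np.zeros(d2)
lam, mu, lr = 1e-2, 1.0, 1e-2            # ridge, co-regularization, step size
for _ in range(500):
    f1, f2 = X1 @ w1, X2 @ w2
    # squared-loss fit per view + agreement penalty mu * ||f1 - f2||^2
    g1 = X1.T @ (f1 - y) / n + lam * w1 + mu * X1.T @ (f1 - f2) / n
    g2 = X2.T @ (f2 - y) / n + lam * w2 + mu * X2.T @ (f2 - f1) / n
    w1, w2 = w1 - lr * g1, w2 - lr * g2
pred = np.sign(0.5 * (X1 @ w1 + X2 @ w2))  # average the two views' scores
print("train accuracy:", (pred == y).mean())

The agreement penalty is what couples the two views; setting mu to zero recovers two independent ridge regressions, which is the single-view baseline the multi-view methods above aim to improve on.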
“…Some detailed surveys of this framework can be found in [29], [40]. A large portion of the multi-view learning literature focuses on supervised (or semi-supervised) learning tasks, such as recommendation [41] and classification [42], [43]. Some other works consider an unsupervised setting.…”
Section: B. Multi-view Learning
confidence: 99%