2021
DOI: 10.1609/aaai.v35i9.16924

Uncertainty-Aware Multi-View Representation Learning

Abstract: Learning from different data views by exploring the underlying complementary information among them can endow the representation with stronger expressive ability. However, high-dimensional features tend to contain noise; furthermore, the quality of the data usually varies across samples (and even across views), i.e., one view may be informative for one sample but not for another. It is therefore quite challenging to integrate noisy multi-view data in an unsupervised setting. Traditional multi-v…
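To make the abstract's idea concrete: a common way to realize per-sample, per-view quality weighting is to have each view's encoder also predict a log-variance and then fuse the view embeddings by precision (inverse variance), so that noisier views contribute less. The sketch below is a minimal illustration of that generic scheme, not the paper's actual DUA-Nets architecture; the names (fuse_views, view_log_vars) are hypothetical.

import torch

def fuse_views(view_embeddings, view_log_vars):
    """Fuse per-view embeddings, down-weighting uncertain (noisy) views.

    view_embeddings: list of (batch, dim) tensors, one per view.
    view_log_vars:   list of (batch, 1) tensors of predicted log-variance
                     (higher = noisier) for each sample under each view.
    """
    # Precision (inverse variance) serves as a per-sample quality score.
    precisions = [torch.exp(-lv) for lv in view_log_vars]
    total = torch.stack(precisions, dim=0).sum(dim=0)
    fused = sum(p * z for p, z in zip(precisions, view_embeddings)) / total
    return fused

# Toy usage: two views of 4 samples with 8-dimensional embeddings.
z1, z2 = torch.randn(4, 8), torch.randn(4, 8)
lv1 = torch.zeros(4, 1)        # view 1: confident (variance 1)
lv2 = torch.full((4, 1), 2.0)  # view 2: noisy (variance e^2)
fused = fuse_views([z1, z2], [lv1, lv2])  # result leans toward view 1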


Cited by 33 publications (15 citation statements). References 22 publications.
“…Unsupervised learning uses unlabeled data. In multi-view representation learning, DUA-Nets (Geng et al. 2021) combined inverse networks through unsupervised learning to automatically evaluate the quality of different views. Through unsupervised training, contrastive learning has achieved great success in the computer vision domain (He et al. 2020a).…”
Section: Data Annotations
confidence: 99%
“…Finally, the Dempster-Shafer theory was used to integrate the multi-view opinions. Geng et al. (2021) designed an unsupervised multi-view learning method that estimated view quality online through uncertainty modeling and integrated the inherent information of multiple views to obtain a noise-free representation, thereby reducing the impact of quality imbalance across views. Wang et al. (2019b) studied a negative log-likelihood loss that achieved single-value prediction and uncertainty quantification simultaneously.…”
Section: Trusted Multi-view Learning
confidence: 99%
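The negative log-likelihood loss mentioned above is commonly instantiated as a heteroscedastic Gaussian NLL, where the network predicts both a mean and a log-variance, yielding a point prediction and a per-sample uncertainty estimate from a single objective. The following is a minimal sketch of that standard formulation; the exact loss used by Wang et al. (2019b) may differ in its details.

import torch

def gaussian_nll(mean, log_var, target):
    # Heteroscedastic Gaussian negative log-likelihood (constant dropped):
    #   0.5 * [ (y - mu)^2 / sigma^2 + log sigma^2 ]
    # Predicting log-variance keeps the variance positive and training stable.
    return 0.5 * (torch.exp(-log_var) * (target - mean) ** 2 + log_var).mean()

# Toy usage: jointly optimizable prediction and uncertainty for 16 samples.
mean = torch.randn(16, 1, requires_grad=True)
log_var = torch.zeros(16, 1, requires_grad=True)
target = torch.randn(16, 1)
loss = gaussian_nll(mean, log_var, target)
loss.backward()  # gradients flow to both the prediction and its variance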
“…There are mainly two types of uncertainty: data uncertainty and model uncertainty [7,23,40,24]. Many tasks have incorporated uncertainty to improve the robustness and interpretability of models, such as face recognition [41,25,1], semantic segmentation [19,24], and multi-view learning [10]. In the ReID task, prior works [52,55,43,22] consider data uncertainty to alleviate the problem of label noise or data outliers.…”
Section: Related Work
confidence: 99%
“…However, numerous classic and effective algorithms [1,2,3] are designed for single-view data and cannot be applied to multi-view data directly. Compared with traditional single-view data, multi-view data are informative and can provide a more comprehensive description [4,5,6,7]. Thanks to these appealing properties, research on multi-view learning has attracted increasing attention, and one of its challenging branches is Unsupervised Multi-view Representation Learning (UMRL).…”
Section: Introduction
confidence: 99%
“…AE²-Nets [6] introduces nested autoencoder networks to learn a unified feature representation. DUA-Nets [4] exploits the information of multiple views through uncertainty modeling and learns a noise-free feature representation. Although these methods make gratifying progress and learn promising unified multi-view representations, they all focus on fusing multi-view information in the feature space while neglecting important information in the semantic space.…”
Section: Introduction
confidence: 99%