2021
DOI: 10.1109/access.2021.3064530
AutoBayes: Automated Bayesian Graph Exploration for Nuisance-Robust Inference

Abstract: Learning data representations that capture task-related features, but are invariant to nuisance variations, remains a key challenge in machine learning. We introduce an automated Bayesian inference framework, called AutoBayes, that explores different graphical models linking classifier, encoder, decoder, estimator and adversarial network blocks to optimize nuisance-invariant machine learning pipelines. AutoBayes also enables learning disentangled representations, where the latent variable is split into multipl…

Cited by 7 publications (1 citation statement)
References 30 publications
“…Compared to a standard CNN classifier of similar model size, GNN models significantly improve the accuracy by 2.0% for ErrP, whereas the improvement for RSVP is 0.4%. Compared to AutoBayes classifiers [15], which detect the conditional relationship between data features, task labels, nuisance variation labels (subject IDs), and potential latent variables in DNN architectures to identify the best inference strategy, GNN models still have higher classification performance, while reducing model size more than 20x. Although GNN models do not take advantage of adversarial learning using variations in subject IDs, they perform 0.8% better for ErrP, and nearly the same for RSVP.…”
Section: Results
Confidence: 99%