2018
DOI: 10.1137/17m1134214
Uncertainty Quantification in Graph-Based Classification of High Dimensional Data

Abstract: Classification of high dimensional data finds wide-ranging applications. In many of these applications equipping the resulting classification with a measure of uncertainty may be as important as the classification itself. In this paper we introduce, develop algorithms for, and investigate the properties of, a variety of Bayesian models for the task of binary classification; via the posterior distribution on the classification labels, these methods automatically give measures of uncertainty. The methods are all…
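As a rough illustration of the approach the abstract describes (not the paper's exact models), the sketch below builds a graph Laplacian from the data, places a Gaussian prior N(0, (L + τ²I)⁻¹) on a latent function, ties a few observed labels to it through a probit likelihood, and samples the posterior with a preconditioned Crank–Nicolson (pCN) random walk. The posterior mean of sign(u) then yields both labels and a per-node uncertainty. The synthetic data and all parameter values (τ, γ, β) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-cluster data, a stand-in for high dimensional features.
n = 60
X = np.vstack([rng.normal(-2, 0.7, (n // 2, 2)),
               rng.normal(+2, 0.7, (n // 2, 2))])
truth = np.r_[-np.ones(n // 2), np.ones(n // 2)]

# Weighted graph from a Gaussian similarity kernel; unnormalized Laplacian.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 2.0)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Gaussian prior N(0, C) with C = (L + tau^2 I)^(-1)  (hypothetical choice).
tau = 0.1
C = np.linalg.inv(L + tau ** 2 * np.eye(n))
C_chol = np.linalg.cholesky(C + 1e-10 * np.eye(n))

# A handful of observed labels; probit likelihood with noise level gamma.
labeled = [0, 1, n - 2, n - 1]
y = truth[labeled]
gamma = 0.5

def log_lik(u):
    from math import erf
    z = y * u[labeled] / gamma
    # log Phi(z) with a floor to avoid log(0).
    return sum(np.log(0.5 * (1 + erf(v / np.sqrt(2))) + 1e-300) for v in z)

# pCN MCMC: the proposal preserves the prior, so the acceptance ratio
# involves only the likelihood.
beta, n_iter, burn = 0.3, 4000, 1000
u = C_chol @ rng.standard_normal(n)
ll = log_lik(u)
samples = []
for it in range(n_iter):
    prop = np.sqrt(1 - beta ** 2) * u + beta * (C_chol @ rng.standard_normal(n))
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:
        u, ll = prop, ll_prop
    if it >= burn:
        samples.append(np.sign(u))

# Posterior mean of sign(u): near +-1 means confident, near 0 uncertain.
S = np.mean(samples, axis=0)
labels = np.sign(S)
uncertainty = 1 - np.abs(S)
```

The point of the Bayesian formulation is the last two lines: instead of a single hard labeling, every node gets a posterior confidence, so ambiguous nodes (far from labeled data, or between clusters) can be flagged rather than silently misclassified.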

Cited by 62 publications (102 citation statements). References 35 publications (76 reference statements).
“…Such problems appear in Bayesian probabilistic numerics, and other settings where noise-free data is considered. This may also include the Bayesian formulation of machine learning problems with discrete loss models, like 0-1-loss, or Bayesian formulations of classification problems, see [4].…”
Section: Discussion
confidence: 99%
“…There are a variety of ways in which one can construct the regularizer R(u; x) including graph-based and low-density separation methods [7,6]. In this work, we will study a nonparametric graph approach where we think of Z as indexing the nodes on a graph.…”
Section: Semi-supervised
confidence: 99%
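The graph-based regularizer R(u; x) mentioned in the excerpt above is commonly taken to be the Dirichlet energy of a label function u on the graph nodes. The toy numpy check below (a hypothetical 4-node graph, not from the cited work) verifies the standard identity uᵀLu = ½ Σᵢⱼ wᵢⱼ (uᵢ − uⱼ)², which shows why this regularizer penalizes label disagreement across heavily weighted edges.

```python
import numpy as np

# Hypothetical 4-node weighted graph (symmetric weight matrix W).
W = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.1],
              [0.0, 0.0, 0.1, 0.0]])
L = np.diag(W.sum(1)) - W            # unnormalized graph Laplacian

u = np.array([1.0, 1.0, 1.0, -1.0])  # candidate label function on the nodes

# Dirichlet-energy regularizer, two equivalent forms:
R_quadratic = u @ L @ u
R_pairwise = 0.5 * (W * (u[:, None] - u[None, :]) ** 2).sum()
```

Here only the weak edge (2, 3) crosses the label boundary, so the energy is small (0.1 · 2² = 0.4); flipping a label inside the tightly connected triangle {0, 1, 2} would cost far more, which is exactly the smoothness bias the regularizer encodes.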
“…For supervised and online learning these amount to specifying the dependence of G on u; for semi-supervised learning this corresponds to determining a basis in which to seek the parameter u. We do not give further details for the semi-supervised case as our numerics fit in the context of Example 2.2, but we refer the reader to [7] for a detailed discussion. Subsection 3.1 details feed-forward neural networks with subsections 3.1.1 and 3.1.2 showing the parameterizations of dense and convolutional networks respectively.…”
Section: Approximation Architectures
confidence: 99%
“…Other approaches include higher order Laplacian regularization [6,17,53] and using a spectral cut-off [5].…”
Section: Introduction
confidence: 99%
“…To illustrate what is happening near a labeled point, consider Γ = {0} and take the domain from which the points are sampled to be the unit ball Ω = B(0, 1) in ℝ^d. The continuum variational problem corresponding to (5) involves minimizing (6) I[u] = ∫_{B(0,1)} …”
Section: Introduction
confidence: 99%