2017
DOI: 10.1007/978-3-319-71249-9_19

Bayesian Nonlinear Support Vector Machines for Big Data

Abstract: We propose a fast inference method for Bayesian nonlinear support vector machines that leverages stochastic variational inference and inducing points. Our experiments show that the proposed method is faster than competing Bayesian approaches and scales easily to millions of data points. It provides additional features over frequentist competitors such as accurate predictive uncertainty estimates and automatic hyperparameter search.
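The scalability claimed in the abstract rests on inducing points: the full kernel matrix over millions of data points is replaced by a low-rank approximation built from a small set of representative inputs. As a minimal illustrative sketch (my own code, not the authors' implementation), the Nyström-style approximation at the heart of inducing-point methods can be shown in a few lines of numpy; the RBF kernel, data sizes, and lengthscale below are arbitrary choices for demonstration:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # full data set (n = 200)
Z = X[rng.choice(len(X), 50, replace=False)]       # m = 50 inducing points

Kzz = rbf_kernel(Z, Z) + 1e-6 * np.eye(len(Z))     # jitter for numerical stability
Kxz = rbf_kernel(X, Z)

# Nystrom approximation K ~ Kxz Kzz^{-1} Kzx: cost O(n m^2) instead of O(n^2),
# which is what lets inducing-point methods scale to millions of points.
K_approx = Kxz @ np.linalg.solve(Kzz, Kxz.T)

K_exact = rbf_kernel(X, X)
print(K_approx.shape == K_exact.shape)
print(float(np.mean(np.abs(K_exact - K_approx))) < 0.2)
```

In the paper's setting this low-rank structure is combined with stochastic variational inference, so each update touches only a minibatch of data and the m inducing points rather than the full kernel matrix.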

Cited by 22 publications (21 citation statements), spanning 2018–2023. References 17 publications.
“…For the analysis, we also used a hybrid system, where the four methods mentioned above were connected, namely RF [40], SVM [41], GP [42], and NN [43] of the intelligent system, into one Euler graph method of machine learning (Figure 8). Ensemble methods are learning algorithms that build a series of classifiers and then classify new data points, summarizing the results of their predictions.…”
Section: Analysis Of Geomorphometric Parameters
confidence: 99%
“…Polson et al. (2013) developed an approach with the logistic likelihood; this work was further extended to big data by Wenzel et al. (2019). The augmentation performed on the Bayesian Support Vector Machine of Polson et al. (2011), and scaled up by Wenzel et al. (2017), is similar to our method but is based on a different augmentation approach. Note that our method covers all these cases exactly but does not rely on any manual derivations.…”
Section: Related Work
confidence: 99%
“…whereby the normalization term in the denominator of (4) is omitted. This approximation was also used by Fu et al. (2010), Mao et al. (2014), Lai et al. (2015), and Wenzel et al. (2017), where it is argued that it is more computationally feasible than MLE (since it is a concave optimization problem) and that the functional form more closely resembles the SVM problem from which it is derived.…”
Section: Introduction
confidence: 99%
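The excerpt above refers to the hinge pseudo-likelihood used in the Bayesian SVM line of work: dropping the normalizer leaves a negative log pseudo-posterior that is exactly the classical (convex) SVM objective, which is why the optimization is tractable. A minimal sketch (my own illustrative code; the data, regularization weight, and the numerical midpoint check are not from the paper) verifying that convexity:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = np.sign(rng.normal(size=100))
lam = 0.1

def neg_log_pseudo_posterior(beta):
    # Hinge pseudo-likelihood (normalizer dropped) plus a Gaussian prior:
    # the MAP problem coincides with the classical SVM objective.
    hinge = np.maximum(0.0, 1.0 - y * (X @ beta)).sum()
    return lam * beta @ beta + hinge

# Numerical convexity check: the midpoint value never exceeds the average.
a, b = rng.normal(size=5), rng.normal(size=5)
mid = neg_log_pseudo_posterior(0.5 * (a + b))
avg = 0.5 * (neg_log_pseudo_posterior(a) + neg_log_pseudo_posterior(b))
print(bool(mid <= avg + 1e-9))
```

The midpoint inequality holds for any pair of points because both the squared-norm prior term and the hinge sum are convex in beta.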