2021
DOI: 10.1016/j.patcog.2021.108151
Conditional information gain networks as sparse mixture of experts

Cited by 7 publications (5 citation statements) · References 10 publications
“…• Bayesian Optimization has been used in previous work to set optimal probability thresholds for routing in a tree structure rather than a trellis [16]. Bayesian Optimization relies on sequentially updating a surrogate model (often a Gaussian Process) with new data points based on an acquisition function.…”
Section: Discussion of Results (citation type: mentioning)
confidence: 99%
“…Bayesian Optimization relies on sequentially updating a surrogate model (often a Gaussian Process) with new data points based on an acquisition function. [16] used the Expected Improvement for this purpose. Users cannot explicitly define the data-generating process for the samples.…”
Section: Discussion of Results (citation type: mentioning)
confidence: 99%
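The excerpts above describe Bayesian Optimization with a Gaussian Process surrogate updated sequentially via an acquisition function, with [16] using Expected Improvement to set routing probability thresholds. The following is a minimal sketch of such a loop for a single threshold; the objective `validation_accuracy`, the search range, and all hyperparameters are illustrative assumptions, not details taken from the cited paper.

```python
# Minimal Bayesian Optimization sketch: Gaussian Process surrogate +
# Expected Improvement acquisition, tuning one routing probability threshold.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def validation_accuracy(threshold: float) -> float:
    # Hypothetical objective: in a real setting this would route validation
    # samples through the conditional network with `threshold` and return
    # the resulting accuracy. Here it is a toy unimodal function.
    return -(threshold - 0.35) ** 2

def expected_improvement(candidates, gp, best_y, xi=0.01):
    # EI(x) = (mu - best - xi) * Phi(z) + sigma * phi(z),
    # with z = (mu - best - xi) / sigma.
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    improvement = mu - best_y - xi
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

# Initial design: a few thresholds sampled in (0, 1).
rng = np.random.default_rng(0)
X = rng.uniform(0.05, 0.95, size=(4, 1))
y = np.array([validation_accuracy(t) for t in X.ravel()])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):                      # sequential BO iterations
    gp.fit(X, y)                         # refit surrogate on all data so far
    candidates = np.linspace(0.01, 0.99, 200).reshape(-1, 1)
    ei = expected_improvement(candidates, gp, y.max())
    x_next = candidates[np.argmax(ei)]   # pick the acquisition maximizer
    y_next = validation_accuracy(float(x_next[0]))
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

best = X[np.argmax(y), 0]
print(f"best threshold ~ {best:.3f}, objective = {y.max():.4f}")
```

Each iteration refits the surrogate on all evaluations gathered so far and proposes the next threshold by maximizing Expected Improvement, which is the sequential update behavior the citing papers attribute to this approach.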