2017
DOI: 10.1186/s12859-017-1893-4

Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors

Abstract: Background: Phenotypic classification is problematic because small samples are ubiquitous and, for these, the use of prior knowledge is critical. If knowledge concerning the feature-label distribution – for instance, genetic pathways – is available, then it can be used in learning. Optimal Bayesian classification provides optimal classification under model uncertainty. It differs from classical Bayesian methods, in which a classification model is assumed and prior distributions are placed on model parameters. With o…
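To make the contrast with classical Bayesian methods concrete, the sketch below shows optimal Bayesian classification over a small, assumed set of candidate Gaussian models (not the paper's experimental setup): model uncertainty is encoded as a prior over candidate class-conditional models, the posterior over those models is computed from the training data, and a new point is assigned to the class with the larger posterior-averaged ("effective") class-conditional density, assuming equal class prior probabilities. All candidate models, data, and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical candidate (mean, std) models for each class; uncertainty about
# which model is correct is expressed as a prior over the candidates.
candidates = {
    0: [(-1.0, 1.0), (-0.5, 1.5)],
    1: [(+1.0, 1.0), (+0.5, 1.5)],
}
prior = {0: np.array([0.5, 0.5]), 1: np.array([0.5, 0.5])}

rng = np.random.default_rng(0)
data = {0: rng.normal(-1.0, 1.0, size=20), 1: rng.normal(1.0, 1.0, size=20)}

def posterior_over_models(y):
    # Posterior weight of each candidate model given the class-y sample.
    like = np.array([np.prod(norm.pdf(data[y], m, s)) for m, s in candidates[y]])
    w = prior[y] * like
    return w / w.sum()

def effective_density(x, y):
    # Posterior-averaged class-conditional density at x.
    w = posterior_over_models(y)
    return sum(wi * norm.pdf(x, m, s) for wi, (m, s) in zip(w, candidates[y]))

x_new = 0.2
label = int(effective_density(x_new, 1) > effective_density(x_new, 0))
print("OBC label for x =", x_new, "->", label)
```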

Cited by 40 publications (38 citation statements) · References 66 publications
“…This has been studied under different conditions in the context of the OBC: using the data from unused features to infer a prior distribution [41], deriving the prior distribution from models of the data-generating technology [35], and applying constraints based on prior knowledge to map the prior knowledge into a prior distribution via optimization [42], [43], [44]. The methods in [42], [43] are very general and have been placed into a formal mathematical structure in [44], where the prior results from an optimization involving the Kullback-Leibler (KL) divergence constrained by conditional probability statements characterizing physical knowledge, such as genetic pathways in genomic medicine. A key focus of our future work will be to extend this general framework to the OBTL, which will require a formulation that incorporates knowledge relating the source and target domains.…”
Section: Discussion (mentioning)
confidence: 99%
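As a rough illustration of the optimization-based prior construction described in the statement above (not the exact MKDIP formulation of the cited paper), the sketch below chooses a prior over a small discrete set of candidate models by minimizing the KL divergence from a non-informative reference prior, subject to a constraint encoding a pathway-style knowledge statement. The model count, the `knowledge_mask`, and the 0.7 mass bound are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

n_models = 5                                  # candidate feature-label models (assumed)
pi0 = np.full(n_models, 1.0 / n_models)       # non-informative reference prior
knowledge_mask = np.array([1, 1, 1, 0, 0])    # models consistent with the pathway statement
required_mass = 0.7                           # assumed lower bound from prior knowledge

def kl(pi):
    # KL divergence from the reference prior pi0.
    pi = np.clip(pi, 1e-12, None)
    return np.sum(pi * np.log(pi / pi0))

constraints = [
    {"type": "eq",   "fun": lambda pi: np.sum(pi) - 1.0},                      # valid distribution
    {"type": "ineq", "fun": lambda pi: knowledge_mask @ pi - required_mass},   # knowledge constraint
]
bounds = [(0.0, 1.0)] * n_models

res = minimize(kl, pi0, bounds=bounds, constraints=constraints, method="SLSQP")
print("knowledge-constrained prior:", np.round(res.x, 3))
```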
“…The so-called Bayesian Optimization (BO) [12] in the literature corresponds to these cases, where the prior model is sequentially updated after each experiment. Bayesian parametric and nonparametric models are widely used in other fields such as bioinformatics [13][14][15][16][17][18]. When prior knowledge about the form of the objective function exists and/or many observations of the objective values at different parts of the input space are available, one can use a parametric model as a surrogate model.…”
Section: B. Experiments Design (mentioning)
confidence: 99%
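Below is a minimal sketch of the sequential-update loop referenced above, assuming a toy 1-D objective and a Gaussian-process surrogate with an upper-confidence-bound acquisition; the kernel, acquisition rule, and objective are illustrative choices, not those of the cited work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                    # hypothetical expensive objective
    return -(x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X = np.array([[0.1], [0.9]])                 # initial experiments
y = f(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
for _ in range(8):                           # sequential experiment design
    gp.fit(X, y)                             # update surrogate with all data so far
    mu, sigma = gp.predict(grid, return_std=True)
    ucb = mu + 2.0 * sigma                   # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best observed x, f(x):", X[np.argmax(y)], y.max())
```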
“…Examples of constructing informative prior distributions using additional prior knowledge can be found in [66, 67]. The lower and upper bounds of these prior distributions, (α_{θ_i}, β_{θ_i}), were set equal to the lower and upper bounds of the parameter space, T_min = {20%, 40%, 1} and T_max = {70%, 90%, 25}.…”
Section: Prediction Using the Calibrated Model (mentioning)
confidence: 99%
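One plausible reading of the statement above is sketched below: informative priors on bounded parameters built as Beta distributions rescaled to the stated parameter-space bounds. The bound vectors are taken from the quoted text; the Beta shape parameters (2, 2) are an assumption made purely for illustration.

```python
import numpy as np
from scipy import stats

T_min = np.array([0.20, 0.40, 1.0])    # lower bounds quoted above
T_max = np.array([0.70, 0.90, 25.0])   # upper bounds quoted above
a, b = 2.0, 2.0                        # assumed Beta shape parameters

# One rescaled Beta prior per bounded parameter, supported on [lo, hi].
priors = [stats.beta(a, b, loc=lo, scale=hi - lo)
          for lo, hi in zip(T_min, T_max)]

samples = np.column_stack([p.rvs(size=1000, random_state=0) for p in priors])
print("prior means:", samples.mean(axis=0))   # roughly (lo + hi) / 2 for symmetric shapes
```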