2012
DOI: 10.1515/1544-6115.1793

QTL Mapping Using a Memetic Algorithm with Modifications of BIC as Fitness Function

Cited by 7 publications (7 citation statements, 2014–2023) | References 0 publications

“…Yet another approach for Bayesian model selection is addressed by Bottolo et al (2011), who propose moving the MCMC between local optima through a permutation-based genetic algorithm whose pool of solutions in the current generation is suggested by parallel tempered chains. A similar idea is considered by Frommlet et al (2012). Multiple-try MCMC methods with local optimization have been described by Liu et al (2000).…”
Section: Introduction
confidence: 99%
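
To make the flavor of such population-based MCMC moves concrete, here is a minimal self-contained sketch of a one-point crossover exchange between parallel tempered chains over binary model-indicator vectors. It illustrates the general idea only: the toy target log_post, the temperature ladder, and all names are assumptions of this sketch, not the actual algorithms of Bottolo et al (2011) or Liu et al (2000).

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(gamma):
    """Hypothetical unnormalized log-posterior over a binary model vector;
    a toy stand-in for any Bayesian model-selection target."""
    k = int(gamma.sum())
    return -0.5 * (k - 3) ** 2          # peaked at models with three terms

def crossover_move(chains, temps):
    """One-point crossover between two random tempered chains, accepted
    with the Metropolis ratio for the product of tempered targets."""
    i, j = rng.choice(len(chains), size=2, replace=False)
    cut = int(rng.integers(1, len(chains[i])))
    gi = np.concatenate([chains[i][:cut], chains[j][cut:]])
    gj = np.concatenate([chains[j][:cut], chains[i][cut:]])
    log_ratio = ((log_post(gi) - log_post(chains[i])) / temps[i]
                 + (log_post(gj) - log_post(chains[j])) / temps[j])
    if np.log(rng.random()) < log_ratio:
        chains[i], chains[j] = gi, gj

p, n_chains = 12, 4
temps = [1.0, 1.5, 2.5, 4.0]            # temperature ladder, cold chain first
chains = [rng.integers(0, 2, p) for _ in range(n_chains)]
for sweep in range(1000):
    for c in range(n_chains):           # local one-bit-flip Metropolis updates
        cand = chains[c].copy()
        cand[rng.integers(p)] ^= 1
        if np.log(rng.random()) < (log_post(cand) - log_post(chains[c])) / temps[c]:
            chains[c] = cand
    if sweep % 5 == 0:                  # occasional genetic crossover move
        crossover_move(chains, temps)
```

Because applying the same cut to the swapped pair restores the original states, the crossover proposal is symmetric, so the product-space Metropolis ratio above suffices for validity.
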
“…Our heuristic search strategy attempts to get close to the global minimum, but we know that in most cases it will fail to find the best solution. More involved search strategies will further improve our method, and we are currently exploring the use of memetic algorithms, which have already been applied successfully in the context of QTL mapping [22].…”
Section: Discussion
confidence: 99%
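
To illustrate what such a memetic algorithm can look like for marker selection, here is a minimal sketch: a genetic algorithm over binary inclusion vectors whose offspring are refined by greedy one-bit-flip hill climbing, scored by a simplified BIC-style fitness. The penalty term and every function name here are illustrative stand-ins, not the exact mBIC modifications used as fitness in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(model, X, y):
    """Simplified BIC-style criterion with an extra per-marker penalty,
    a stand-in for the modified BIC (mBIC) fitness used in the paper."""
    idx = np.flatnonzero(model)
    n, m = X.shape
    Xs = np.column_stack([np.ones(n), X[:, idx]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = max(float(np.sum((y - Xs @ beta) ** 2)), 1e-12)
    k = idx.size
    return n * np.log(rss / n) + k * np.log(n) + 2 * k * np.log(m)

def local_search(model, X, y):
    """Greedy one-bit-flip hill climbing: the memetic refinement step."""
    best = fitness(model, X, y)
    improved = True
    while improved:
        improved = False
        for j in range(len(model)):
            cand = model.copy()
            cand[j] ^= 1
            f = fitness(cand, X, y)
            if f < best:
                model, best, improved = cand, f, True
    return model

def memetic(X, y, pop_size=20, gens=30, mut=0.02):
    m = X.shape[1]
    pop = [local_search((rng.random(m) < 0.1).astype(int), X, y)
           for _ in range(pop_size)]
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = [pop[min(rng.choice(pop_size, 2), key=lambda i: scores[i])]
                   for _ in range(pop_size)]           # binary tournament selection
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            mask = rng.integers(0, 2, m).astype(bool)  # uniform crossover
            child = np.where(mask, a, b)
            child ^= (rng.random(m) < mut).astype(int) # bit-flip mutation
            children.append(local_search(child, X, y)) # memetic refinement
        pop = sorted(pop + children,
                     key=lambda ind: fitness(ind, X, y))[:pop_size]
    return pop[0]
```

Given simulated marker data X (n × m) and a phenotype vector y, memetic(X, y) returns the best inclusion vector found; the local_search refinement of each offspring is what distinguishes a memetic algorithm from a plain genetic algorithm.
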
“…Here $L$ is the number of layers, $p^{(l)}$ is the number of nodes within layer $l$, and $\sigma^{(l)}$ denotes the corresponding activation function. In our notation we explicitly differentiate between discrete model configurations defined by the vectors $\gamma = \{\gamma^{(l)}_{kj},\; j = 1, \ldots, p^{(l+1)},\; k = 0, \ldots, p^{(l)},\; l = 1, \ldots, L\}$ (further referred to as models), which constitute the model space $\Gamma$, and the parameters of the models conditional on these configurations, $\theta \mid \gamma = \{\beta, \phi \mid \gamma\}$, where only those $\beta^{(l)}_{kj}$ for which $\gamma^{(l)}_{kj} = 1$ are included. This approach is a rather standard way (in the statistical science literature) to explicitly specify the model uncertainty in a given class of models and is used in Clyde et al (2011), Frommlet et al (2012), Hubin & Storvik (2018), and Hubin et al (2018a,b). A Bayesian approach is obtained by specification of model priors $p(\gamma)$ and parameter priors $p(\beta \mid \gamma)$ for each model.…”
Section: The Model
confidence: 99%
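
Spelled out, the Bayesian setup sketched in this excerpt is the standard decomposition (generic notation, not specific to any single one of the cited papers):

```latex
\begin{equation*}
p(\gamma \mid y) \;=\; \frac{p(y \mid \gamma)\, p(\gamma)}
                            {\sum_{\gamma' \in \Gamma} p(y \mid \gamma')\, p(\gamma')},
\qquad
p(y \mid \gamma) \;=\; \int p(y \mid \beta, \gamma)\, p(\beta \mid \gamma)\, \mathrm{d}\beta ,
\end{equation*}
```

so model selection reduces to finding the configurations $\gamma$ with the highest marginal posterior $p(\gamma \mid y)$.
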
“…At the same time, pruning is done in an implicit manner by deleting the weights via ad hoc thresholding. In Bayesian model selection problems there have been numerous works showing the efficiency and accuracy of model selection by means of introducing latent variables corresponding to different discrete model configurations and then conditioning on their marginal posterior, both to select the best sparse configuration and to address the joint model-and-parameter uncertainty explicitly (George & McCulloch 1993, Clyde et al 2011, Frommlet et al 2012, Hubin & Storvik 2018, Hubin et al 2018b). For instance, Hubin et al (2018a) address inference in the class of deep Bayesian regression models (DBRM), which generalizes the class of Bayesian neural networks.…”
Section: Introduction
confidence: 99%
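
To make the latent-indicator idea concrete, the sketch below enumerates all configurations $\gamma$ on a toy regression problem, approximates each marginal likelihood by exp(-BIC/2), and reports the posterior over models together with marginal inclusion probabilities. The BIC approximation and the uniform model prior are simplifying assumptions for illustration, not the priors used in the works cited above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

n, p = 100, 5
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 2] + rng.standard_normal(n)    # true model: {0, 2}

def log_marglik(gamma):
    """Laplace/BIC approximation: log p(y | gamma) ~ -BIC/2 up to a constant."""
    idx = np.flatnonzero(gamma)
    Xs = np.column_stack([np.ones(n), X[:, idx]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = max(float(np.sum((y - Xs @ beta) ** 2)), 1e-12)
    bic = n * np.log(rss / n) + (idx.size + 1) * np.log(n)
    return -0.5 * bic

models = [np.array(g) for g in itertools.product([0, 1], repeat=p)]
logp = np.array([log_marglik(g) for g in models])      # uniform prior p(gamma)
post = np.exp(logp - logp.max())
post /= post.sum()                                     # p(gamma | y) over all 2^p models

best = models[int(post.argmax())]                      # MAP configuration
incl = sum(w * g for w, g in zip(post, models))        # marginal inclusion probs
print("MAP configuration:", best, " P(gamma|y) =", round(float(post.max()), 3))
print("P(gamma_j = 1 | y):", np.round(incl, 3))
```
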