2019 | Preprint | DOI: 10.1101/614479
Benchmarking algorithms for genomic prediction of complex traits

Abstract: The usefulness of Genomic Prediction (GP) in crop and livestock breeding programs has led to efforts to develop new and improved GP approaches, including non-linear algorithms such as artificial neural networks (ANNs, i.e. deep learning) and gradient tree boosting. However, the performance of these algorithms has not been compared in a systematic manner using a wide range of GP datasets and models. Using data of 18 traits across six plant species with different marker densities and training population sizes, we…
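The kind of benchmark the abstract describes — a linear GP baseline against a non-linear learner on the same marker data — can be sketched as follows. This is a minimal illustration on simulated genotypes, not the authors' pipeline: the marker coding, trait simulation, and scikit-learn model choices (ridge regression as an rrBLUP-like baseline, gradient tree boosting as the non-linear competitor) are all assumptions, and predictive ability is scored as the Pearson correlation between observed and cross-validated predicted values.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_lines, n_markers = 300, 500

# Hypothetical biallelic marker matrix coded -1/0/1, standing in
# for real genotype calls.
X = rng.integers(-1, 2, size=(n_lines, n_markers)).astype(float)

# Simulate an additive trait: 20 causal markers plus noise.
effects = np.zeros(n_markers)
causal = rng.choice(n_markers, 20, replace=False)
effects[causal] = rng.normal(0.0, 1.0, 20)
y = X @ effects + rng.normal(0.0, 2.0, n_lines)

models = {
    "ridge (rrBLUP-like)": Ridge(alpha=100.0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}
results = {}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)
    # Predictive ability: Pearson r between observed and predicted.
    results[name] = np.corrcoef(y, pred)[0, 1]
    print(f"{name}: r = {results[name]:.3f}")
```

On this purely additive simulated trait the linear baseline should do well; the point of the benchmark design is that the ranking can flip on real traits with non-additive architecture, which is what the study evaluates across 18 traits and six species.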

Cited by 16 publications (21 citation statements) | References 60 publications (38 reference statements)
“…In this study, several GP models that included statistical and Machine Learning algorithms from parametric, semi-parametric, and nonparametric approaches were used to predict FAW and MW resistance traits. These GP algorithms, as expected, performed differently on the different traits although the predictive variations were generally minimal, especially when large TS were involved, similar to earlier model benchmarking reports (101,102). Bayesian models (parametric: BLR and BRR, and semiparametric: RKHS) performed better on MW traits, GWL, AP, and AK, while nonparametric Machine Learning algorithms (missForest, here) and, to a lesser extent, linear mixed model (especially in the PBTS approach), achieved the highest PAs on FAW datasets.…”
Section: GP Algorithms Performed Differently on FAW and MW Maize Resistance (supporting)
confidence: 82%
“…pattern recognition (Drayer and Brox, 2014; Liang and Hu, 2015; Işın et al., 2016; Badrinarayanan et al., 2017) and natural language processing (NLP) (Deng and Liu, 2018). DL implementations for regression tasks are less abundant, and the benefit of using these methods remains uncertain (Bellot et al., 2018; Montesinos-López et al., 2018a; Azodi et al., 2019). Most GP problems are regression tasks due to the complex nature of quantitative traits (MacKay, 2009).…”
Section: Discussion (mentioning)
confidence: 99%