2022
DOI: 10.3390/genes13122247
Systematic Evaluation of Genomic Prediction Algorithms for Genomic Prediction and Breeding of Aquatic Animals

Abstract: The extensive use of genomic selection (GS) in livestock and crops has led to a series of genomic-prediction (GP) algorithms despite the lack of a single algorithm that can suit all the species and traits. A systematic evaluation of available GP algorithms is thus necessary to identify the optimal GP algorithm for selective breeding in aquaculture species. In this study, a systematic comparison of ten GP algorithms, including both traditional and machine-learning algorithms, was conducted using publicly availa…

Cited by 8 publications (3 citation statements)
References 67 publications
“…On the other hand, surprisingly, both locus filtering based on MAF and LD pruning appeared to have no significant effect on predictive accuracy. Within our study population, MAF was used as a preliminary criterion for judging whether loci were affected by artificial selection, although this was not precise [51, 52]. It is generally considered that loci under strong artificial selection would show reduced polymorphism, reflected in lower MAF values [53, 54].…”
Section: Discussion
Confidence: 99%
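The MAF-based locus filtering described in this excerpt can be sketched as follows. This is a minimal illustration, not the cited study's pipeline: the 0/1/2 genotype encoding, the toy matrix, and the filtering threshold are all assumptions made for the example.

```python
import numpy as np

def minor_allele_freq(genotypes: np.ndarray) -> np.ndarray:
    """Per-locus minor-allele frequency for a (samples x loci) 0/1/2 genotype matrix."""
    # Frequency of the counted allele at each locus (each sample carries 2 alleles).
    p = genotypes.mean(axis=0) / 2.0
    # MAF is the frequency of whichever allele is rarer.
    return np.minimum(p, 1.0 - p)

def filter_by_maf(genotypes: np.ndarray, threshold: float = 0.05):
    """Drop loci whose MAF falls below the threshold; return (filtered matrix, kept indices)."""
    maf = minor_allele_freq(genotypes)
    keep = maf >= threshold
    return genotypes[:, keep], np.where(keep)[0]

# Toy example: 4 samples x 3 loci; the third locus is nearly monomorphic (low MAF).
G = np.array([[0, 1, 0],
              [1, 2, 0],
              [2, 1, 0],
              [1, 0, 1]])
G_filt, kept = filter_by_maf(G, threshold=0.2)
```

A nearly monomorphic locus, as expected for one under strong artificial selection, is removed by the filter, while the polymorphic loci are retained.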
“…Finally, overfitting is a prevalent problem in AI, characterized by models performing well on training data but poorly on testing data. Techniques such as enhancing data samples, data augmentation, cross-validation, and selecting appropriate algorithms can mitigate overfitting [83]. However, not all studies in this review have addressed overfitting concerns.…”
Section: Review
Confidence: 99%
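Cross-validation, one of the mitigation techniques listed above, can be sketched with plain NumPy. The ridge model, fold count, and toy data below are illustrative assumptions, not details from the cited studies.

```python
import numpy as np

def kfold_indices(n: int, k: int, seed: int = 0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

def cv_mse(X, y, k=5, alpha=1.0, seed=0):
    """Mean held-out MSE of closed-form ridge regression under k-fold CV."""
    errors = []
    for train, test in kfold_indices(len(y), k, seed):
        A = X[train].T @ X[train] + alpha * np.eye(X.shape[1])
        w = np.linalg.solve(A, X[train].T @ y[train])
        errors.append(np.mean((X[test] @ w - y[test]) ** 2))
    return float(np.mean(errors))

# Toy data: linear signal plus small noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=60)
score = cv_mse(X, y, k=5)
```

Because every error is measured on data held out from fitting, the averaged score estimates generalization performance rather than training fit, which is what makes cross-validation useful for detecting overfitting.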
“…This means the model performs excessively well on the training dataset but shows unsatisfactory performance on the testing dataset [187]. Factors such as data insufficiency, low data heterogeneity and excessive variables can all lead to overfitting [188]. Methods such as improving data samples, data augmentation, regularization, cross-validation and specific algorithms have all been reported to prevent overfitting [171, 189–191].…”
Section: Limitations and Future Perspectives
Confidence: 99%
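Regularization, one of the prevention methods listed above, directly targets the "excessive variables" case: when markers far outnumber samples, an unpenalized model can fit the noise. The sketch below shows ridge shrinkage in such a setting; the closed-form ridge estimator, marker counts, and penalty values are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: argmin ||Xw - y||^2 + alpha * ||w||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

# p >> n setting reminiscent of genomic data: 30 samples, 100 markers,
# of which only 3 carry true signal.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 100))
true_w = np.zeros(100)
true_w[:3] = [2.0, -1.0, 1.5]
y = X @ true_w + rng.normal(scale=0.2, size=30)

w_weak = ridge_fit(X, y, alpha=1e-6)    # nearly unregularized: free to fit noise
w_strong = ridge_fit(X, y, alpha=10.0)  # shrinks all coefficients toward zero
```

Increasing the penalty monotonically shrinks the coefficient norm, trading a little bias for much lower variance, which is why regularization curbs overfitting in high-dimensional prediction problems.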