2020
DOI: 10.1111/jbg.12468

Improving genomic prediction accuracy for meat tenderness in Nellore cattle using artificial neural networks

Abstract: Brazil has the second largest commercial beef cattle herd in the world, occupying a prominent position in the global beef market. Zebu cattle (Bos indicus) comprise more than 80% of the beef cattle in Brazil, and the vast majority of these animals are of the Nellore breed, given their tolerance to tropical climates and high resistance to ectoparasites (Baldi et al., 2012). Despite their advantages for production in tropical environments, Zebu cattle tend to produce tougher meat than Bos taurus breeds (Reverter et…

Cited by 20 publications (15 citation statements) · References 41 publications
“…In this sense, Liu and Wang (2017) found that DL outperformed traditional statistical models (RR-BLUP, BL, and Bayes A) in the genomic prediction of grain yield in soybean and stem height in loblolly pine. In a GP study for meat tenderness, Lopes et al. (2020) found results similar to the current study, in which the DL model had higher PA than all models of the Bayesian alphabet (Bayes A, Bayes B, Bayes Cπ, BRR, and BL). Notably, in this study, ReLU was the best activation function for training the DL model, because it learns faster than the sigmoid and hyperbolic tangent functions and performed better during the random grid search.…”
Section: Discussion (supporting)
confidence: 86%
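The activation functions compared in the excerpt above have standard definitions; a minimal NumPy sketch (not the cited study's actual network) illustrates why ReLU tends to learn faster: its gradient is exactly 1 for positive inputs, whereas the sigmoid gradient never exceeds 0.25, shrinking error signals layer by layer.

```python
import numpy as np

# Standard activation functions used in deep-learning genomic prediction.
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

# ReLU's gradient is 1 wherever the unit is active, so gradients do not
# vanish as they propagate back through many layers.
def relu_grad(x):
    return (x > 0).astype(float)

# The sigmoid gradient peaks at 0.25 (at x = 0), which compounds into
# vanishing gradients in deep networks.
def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)
```

This contrast is the usual explanation for ReLU's faster training, consistent with its selection by the random grid search reported above.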
“…In this sense, Liu and Wang (2017) found that DL outperformed traditional statistical models (RR-BLUP, BL, and Bayes A) in the genomic prediction of grain yield, in soybean, and stem height, in loblolly pine. In a GP study for meat tenderness, Lopes et al (2020) found similar results to the current study, in which the DL model had higher PA than all models of the Bayesian alphabet (Bayes A, Bayes B, Bayes Cπ, BRR, and BL). Notably, in this study, ReLU was the best activation function used for training DL, because it is faster to learn than sigmoid and hyperbolic tangent functions, and it has better performance during the random grid search.…”
Section: Discussionsupporting
confidence: 86%
“…For the deep learning models, two regularization strategies were employed during the training phase, i.e., weight decay and dropout. Moreover, previous studies point out that ML methods can be useful even when the number of explanatory variables (p) vastly exceeds the number of available phenotypes (n), as demonstrated in the genome-enabled prediction of complex traits (Lopes et al., 2020; Bargelloni et al., 2021). Hence, the higher number of explanatory variables alone does not fully explain the lower classification performance of the models fed with the MFCC data.…”
Section: Discussion (mentioning)
confidence: 99%
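The two regularization strategies named in the excerpt above can be sketched in a few lines; this is a generic NumPy illustration of inverted dropout and L2 weight decay, not the cited study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, rate, training):
    """Inverted dropout: randomly zero activations during training,
    rescaling the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return a  # at inference time activations pass through untouched
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)

def weight_decay_grad(w, grad, lam):
    """L2 weight decay: add lam * w to the loss gradient, pulling
    weights toward zero and discouraging overfitting."""
    return grad + lam * w
```

Both techniques combat overfitting, which is the central risk when p vastly exceeds n, as the excerpt notes.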
“…In recent years, the use of ANN statistical models in studies of genomic selection (prediction) has shown an increasing trend [11] [12] [13] [14] [15] [16]. In plant and animal breeding studies, the number of parameters (SNP marker effects) to be estimated far exceeds the available sample size, and the computational cost of training ANN applications is high.…”
Section: Introduction (mentioning)
confidence: 99%
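The p >> n setting described above (far more SNP marker effects than phenotyped animals) can be made concrete with a small simulated example. The sketch below uses a ridge (RR-BLUP-like) fit rather than an ANN, solved in its dual form so the linear system is n x n instead of p x p; all data here are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 50, 5000  # far fewer phenotypes (n) than SNP markers (p)

# Simulated SNP genotypes coded 0/1/2 and a sparse set of causal effects.
X = rng.integers(0, 3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:20] = rng.normal(0.0, 0.5, 20)  # 20 causal SNPs
y = X @ beta + rng.normal(0.0, 1.0, n)

# Ridge regression in the dual form: (XX' + lam*I) is n x n, so the
# solve costs O(n^3) rather than O(p^3), which is what keeps whole-genome
# regression feasible when p >> n.
lam = 10.0
K = X @ X.T
alpha = np.linalg.solve(K + lam * np.eye(n), y)
beta_hat = X.T @ alpha  # estimated marker effects, length p
y_hat = X @ beta_hat    # fitted genomic values
```

The same dimensionality problem motivates the regularization and architecture choices when ANNs replace the linear model, as the excerpt notes regarding training cost.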