2022
DOI: 10.3390/pr10071387
A Comparative Analysis of Machine Learning Models in Prediction of Mortar Compressive Strength

Abstract: Predicting the mechanical properties of cement-based mortars is essential in understanding the life and functioning of structures. Machine learning (ML) algorithms in this regard can be especially useful in prediction scenarios. In this paper, a comprehensive comparison of nine ML algorithms, i.e., linear regression (LR), random forest regression (RFR), support vector regression (SVR), AdaBoost regression (ABR), multi-layer perceptron (MLP), gradient boosting regression (GBR), decision tree regression (DT), hi…

Cited by 28 publications (9 citation statements) · References 45 publications
“…To predict the SCP values for MOF/ethanol working pairs, six commonly used ML algorithms, namely, the random forest (RF), light gradient boosting machine (LGBM), extreme gradient boosting (XGB), gradient boosting regression (GBR), hist gradient boosting regression (HGBR), and gradient boosted regression tree (GBRT), taken from the scikit-learn package, were trained in this work with all of the descriptors mentioned. We randomly partitioned the data from HTCS into training and testing sets, allocating 80% of the data for model training and reserving 20% for testing purposes.…”
Section: Methods (mentioning)
confidence: 99%
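The 80/20 training workflow described in this statement can be sketched as follows. This is a minimal illustration, not the citing paper's code: synthetic data from `make_regression` stands in for the HTCS descriptors and SCP targets, which are not available here, and only two of the six named estimators are shown.

```python
# Sketch of the 80/20 train/test split workflow described above.
# Synthetic data replaces the (unavailable) HTCS descriptors.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)

# Randomly allocate 80% of the data for training, reserve 20% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

models = {
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "GBR": GradientBoostingRegressor(n_estimators=100, random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)  # R^2 on the held-out 20%

print(scores)
```

Each estimator is trained only on the 80% partition, so the R² scores reflect performance on data the models have never seen.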
“…The hist gradient boosting method bins the input variables into histograms to speed up tree construction. Every tree added to the ensemble attempts to correct the prediction errors of the models already present in the ensemble [48].…”
Section: Methods (mentioning)
confidence: 99%
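The error-correcting behavior this statement describes can be sketched by fitting each new tree to the residuals left by the ensemble built so far. This is a hand-rolled, simplified illustration of boosting with squared loss on synthetic data, not the paper's implementation or scikit-learn's internal algorithm:

```python
# Boosting sketch: each shallow tree is fit to the residuals (prediction
# errors) of the ensemble built so far, so every new tree "rectifies"
# the flaws of the existing models.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, size=300)

learning_rate = 0.1
pred = np.zeros_like(y)          # the ensemble starts from a zero prediction
trees = []
for _ in range(100):
    residual = y - pred          # errors of the current ensemble
    tree = DecisionTreeRegressor(max_depth=2, random_state=0)
    tree.fit(X, residual)        # the new tree targets those errors
    pred += learning_rate * tree.predict(X)
    trees.append(tree)

mse = float(np.mean((y - pred) ** 2))
print(mse)
```

After 100 rounds the ensemble's mean squared error is far below that of the zero-prediction starting point, because each tree removed part of the remaining error.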
“…In this method, several trees are trained simultaneously using the bootstrap, or bagging, technique. Multiple random samples are drawn from the training data to build the decision trees, which do not interact with one another during the construction phase and are generated in parallel (Gayathri, Rani, Cepová, Rajesh, & Kalita, 2022).…”
Section: Introduction (unclassified)
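The bagging scheme described in this statement can be sketched with scikit-learn's `RandomForestRegressor`: each tree is fit on its own bootstrap sample, and because the trees are independent they can be grown in parallel. Synthetic data is used here as a stand-in; this illustrates the technique, not the cited paper's experiment:

```python
# Random forest sketch: bootstrap=True draws an independent random sample
# (with replacement) for each tree; n_jobs=-1 grows the independent trees
# in parallel, since they never interact during construction.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=400, n_features=6, noise=0.2, random_state=1)

forest = RandomForestRegressor(
    n_estimators=200, bootstrap=True, n_jobs=-1, random_state=1
)
forest.fit(X, y)

# The ensemble prediction is the average over the independently built trees.
print(forest.score(X, y))
```

Because each tree sees a slightly different bootstrap sample, averaging their predictions reduces the variance of any single decision tree.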