2016
DOI: 10.1155/2016/7648467

Prediction of Compressive Strength of Concrete Using Artificial Neural Network and Genetic Programming

Abstract: An effort has been made to develop concrete compressive strength prediction models with the help of two emerging data mining techniques, namely, Artificial Neural Networks (ANNs) and Genetic Programming (GP). The data for analysis and model development was collected at 28-, 56-, and 91-day curing periods through experiments conducted in the laboratory under standard controlled conditions. The developed models have also been tested on in situ concrete data taken from literature. A comparison of the prediction r…

Cited by 143 publications (72 citation statements) | References: 15 publications
“…For ANN architectures and their training, the significant internal parameters include the learning rate, initial weights, learning cycle, number of training epochs, number of hidden layers, number of neurons in each hidden layer, and the transfer functions for the hidden and output layers [11]. In this study, a three-layered feed-forward network was trained with the back-propagation (BP) training algorithm.…”
Section: Development of the Model Using ANN
confidence: 99%
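To make the quoted setup concrete, here is a minimal sketch (not the authors' code, which is not shown on this page) of a three-layered feed-forward regressor trained with back-propagation, exposing the internal parameters the statement lists. The library choice (scikit-learn's MLPRegressor), the synthetic inputs, and all hyperparameter values are illustrative assumptions.

```python
# Sketch only: three-layer feed-forward network trained with back-propagation.
# Feature names and hyperparameter values are illustrative, not the study's.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical mix-design inputs (e.g. cement content, w/c ratio, curing age)
# and a synthetic compressive-strength target.
X = rng.uniform(size=(100, 3))
y = 20 + 30 * X[:, 0] - 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 1, 100)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(
        hidden_layer_sizes=(13,),  # one hidden layer; neuron count tuned by trial and error
        activation="tanh",         # tansig-like hidden transfer function
        solver="sgd",              # plain gradient-descent back-propagation
        learning_rate_init=0.01,   # learning rate (illustrative value)
        max_iter=1000,             # number of training epochs
        random_state=1,            # fixes the initial weights
    ),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```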
“…The number of neurons in the hidden layer was adjusted to 13 after many trials and errors. A non-linear hyperbolic tangent sigmoid function and a linear function were used as the transfer functions in the hidden and output layers, respectively, owing to their ability to learn the complex non-linear relation between the input and output parameters [11]. The network training parameters adopted to construct the ANN model are summarized in Tab.…”
Section: Development of the Model Using ANN
confidence: 99%
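A small sketch of the forward pass implied by this statement: one hidden layer of 13 neurons with a hyperbolic tangent sigmoid (tansig) transfer function feeding a linear output layer. The weights, the number of inputs, and the variable names are illustrative assumptions; the study's fitted values are not reproduced here.

```python
# Forward pass of a 13-neuron tansig hidden layer plus linear output layer.
# Random weights stand in for the trained ones, which this page does not give.
import numpy as np

def tansig(x):
    # MATLAB's tansig(n) = 2 / (1 + exp(-2n)) - 1, numerically equal to tanh(n)
    return np.tanh(x)

def forward(x, W1, b1, W2, b2):
    """x: (n_inputs,), W1: (13, n_inputs), W2: (1, 13)."""
    hidden = tansig(W1 @ x + b1)   # non-linear hidden layer (13 neurons)
    return W2 @ hidden + b2        # linear (purelin-style) output layer

rng = np.random.default_rng(0)
n_inputs = 3                       # e.g. cement content, w/c ratio, curing age
W1, b1 = rng.normal(size=(13, n_inputs)), rng.normal(size=13)
W2, b2 = rng.normal(size=(1, 13)), rng.normal(size=1)
print(forward(rng.normal(size=n_inputs), W1, b1, W2, b2))
```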
“…The architecture of an ANN consists of artificial neurons, analogous to the natural neurons of the human brain, that are grouped into a series of input, hidden and output layers. According to Chopra et al. [18], there are three essentials to consider before structuring the architecture of an ANN model, namely: 1) Topology - organization and interconnection of a neural network into layers; 2) Learning - information storage in the network; and 3) Recall - retrieval of information from the network. These essentials are reflected in the following internal parameters: 1) performance function, 2) learning function, 3) weights and biases, 4) hidden layers and neurons, and 5) transfer function.…”
Section: B. Model Development
confidence: 99%
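One way to picture how the three essentials map onto the listed internal parameters is as a design configuration. The grouping below is an illustrative assumption, not something taken from Chopra et al. [18] or the citing paper; the parameter choices simply echo the other citation statements on this page.

```python
# Illustrative grouping of the three ANN essentials and the internal
# parameters the quote associates with them. Values are assumptions.
ann_design = {
    "topology": {                  # organization and interconnection into layers
        "hidden_layers": 1,
        "neurons_per_hidden_layer": 13,
        "transfer_functions": {"hidden": "tansig", "output": "linear"},
    },
    "learning": {                  # how information is stored in the network
        "performance_function": "mean squared error",
        "learning_function": "Levenberg-Marquardt",
        "initial_weights_and_biases": "random",
    },
    "recall": {                    # retrieving information from the trained network
        "operation": "forward pass on unseen mix-design inputs",
    },
}
print(ann_design["topology"])
```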
“…Specifically, the tansig function operates by returning outputs compressed between -1 and 1, which gives it the ability to learn complex non-linear relations between the input and output parameters. On the other hand, the Levenberg-Marquardt learning function is the most suitable algorithm for concrete-related data according to the available literature [18,19]. It is regarded as a significantly high-speed training method, especially for moderately sized feed-forward neural networks and non-linear problems.…”
Section: Regional Conference in Civil Engineering (RCCE)
confidence: 99%
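The statement pairs the tansig transfer function with Levenberg-Marquardt training. The sketch below illustrates that pairing by fitting a tiny tansig/linear network with SciPy's MINPACK-backed 'lm' solver on synthetic data; MATLAB's trainlm (the tool usually meant in this context) is assumed but not shown, and the data, layer sizes, and variable names are made up for the demonstration.

```python
# Levenberg-Marquardt fit of a small tansig/linear network by minimizing
# squared residuals. Synthetic data; not the cited papers' implementation.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 2))           # two scaled inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2       # synthetic target

n_in, n_hid = 2, 5                              # small hidden layer for the demo

def unpack(p):
    i = 0
    W1 = p[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid]; i += n_hid
    b2 = p[i]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    hidden = np.tanh(X @ W1.T + b1)             # tansig squashes outputs to (-1, 1)
    pred = hidden @ W2 + b2                     # linear output layer
    return pred - y

p0 = rng.normal(scale=0.5, size=n_hid * n_in + 2 * n_hid + 1)
fit = least_squares(residuals, p0, method="lm") # Levenberg-Marquardt solver
print("final sum of squared errors:", np.sum(fit.fun ** 2))
```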