2006
DOI: 10.1016/j.engappai.2006.05.003
ANN inverse analysis based on stochastic small-sample training set simulation

Cited by 88 publications (52 citation statements)
References 19 publications
“…Since in many engineering problems measurements are represented by one or several curves, usually discretized into discrete points, the number of observed quantities is often very large. Therefore, a global sensitivity analysis is applied to select only the quantities important for a particular model parameter to be identified [24]. The number of neurons in the hidden layer is determined by consecutively increasing it, taking over-training and under-training issues into account.…”
Section: Neural Network Architecture
confidence: 99%
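The excerpt above describes screening the discretized measurement points so that only those sensitive to a given model parameter feed the ANN. As a minimal sketch of that idea (not the authors' actual procedure), one can rank the observed quantities by absolute correlation with each parameter over the stochastic training set; the function name and the toy data below are hypothetical:

```python
import numpy as np

def select_informative_outputs(params, outputs, top_k=5):
    """For each model parameter, rank the observed quantities by absolute
    Pearson correlation over the training-set samples and keep the top_k
    most sensitive ones. A crude stand-in for a global sensitivity analysis."""
    selected = {}
    for i in range(params.shape[1]):
        corr = np.array([
            abs(np.corrcoef(params[:, i], outputs[:, j])[0, 1])
            for j in range(outputs.shape[1])
        ])
        selected[i] = np.argsort(corr)[::-1][:top_k]
    return selected

# Toy example: 60 samples, 3 parameters, 20 discretized curve points,
# where output j responds mostly to parameter j % 3 (plus small noise).
rng = np.random.default_rng(0)
params = rng.random((60, 3))
outputs = np.stack(
    [params[:, j % 3] + 0.05 * rng.standard_normal(60) for j in range(20)],
    axis=1,
)
sel = select_informative_outputs(params, outputs, top_k=5)
```

On this toy data, the quantities selected for each parameter are exactly those driven by it, which is the intended screening effect.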
“…Log-sigmoid functions are used as activation functions in all neurons. In all cases studied in this work, 60 training samples are generated by the Latin Hypercube Sampling method and optimized by Simulated Annealing in order to minimize the correlation among the samples [24]. Both methods are implemented in the FREET software [25], which was used here.…”
Section: Experimental Validation
confidence: 99%
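The excerpt above mentions generating a small training set by Latin Hypercube Sampling and then reducing spurious correlation among the samples by Simulated Annealing. A minimal sketch of both steps follows; it is an illustration under simplified assumptions (unit-cube parameters, a plain column-swap annealing move), not the FREET implementation:

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """LHS design on [0, 1]^n_params: each parameter range is split into
    n_samples equal strata, with exactly one sample per stratum."""
    rng = np.random.default_rng(rng)
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):          # shuffle strata independently per column
        rng.shuffle(u[:, j])
    return u

def correlation_cost(design):
    """Sum of squared off-diagonal correlations: the quantity the
    annealing step tries to drive toward zero."""
    c = np.corrcoef(design, rowvar=False)
    return float(np.sum(np.triu(c, 1) ** 2))

def anneal_decorrelate(design, iters=2000, temp=1.0, cooling=0.995, rng=None):
    """Crude simulated annealing: swap two entries inside one column and
    keep the swap if it lowers the correlation cost (or, with Boltzmann
    probability, even if it raises it). Swaps preserve the LHS strata."""
    rng = np.random.default_rng(rng)
    d, cost = design.copy(), correlation_cost(design)
    for _ in range(iters):
        j = rng.integers(d.shape[1])
        a, b = rng.choice(d.shape[0], size=2, replace=False)
        d[[a, b], j] = d[[b, a], j]
        new_cost = correlation_cost(d)
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            d[[a, b], j] = d[[b, a], j]  # revert the swap
        temp *= cooling
    return d, cost

# 60 samples of, say, 4 model parameters, mirroring the cited setup.
design = latin_hypercube(60, 4, rng=0)
design, final_cost = anneal_decorrelate(design, rng=1)
```

Because the annealing move only permutes values within a column, the one-sample-per-stratum property of the LHS design is preserved while the pairwise correlations shrink.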
“…Note that our identification methodology does not assume the availability of an expert capable of giving an initial estimate of the material parameter values, as in e.g. [10,23,20]. Therefore, the bounds on the model parameters were kept rather wide.…”
Section: A Brief Description of the Identified Model
confidence: 99%
“…The first one also utilizes an ANN, but in a different way: computational time is reduced by using a small-sample simulation technique called Latin hypercube sampling (LHS) in the ANN-based inverse analysis first proposed by Novák and Lehký in [10] and [11].…”
Section: Introduction
confidence: 99%