Probability Theory and Mathematical Statistics 1996
DOI: 10.1142/9789814532181

Cited by 16 publications (18 citation statements); references 0 publications.
“…The comparison indicates that the GA-optimized BP neural network trains more effectively on the samples. According to statistical definitions [15], a smaller MSE between predicted values and actual outcomes means a smaller disparity between the estimate and the sample data, which in turn indicates a more accurate trained model. The genetically optimized BP neural network exhibits the smaller MSE, indicating more precise model training than the standard BP neural network.…”
Section: Results
confidence: 99%
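The MSE criterion invoked in the excerpt can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the paper's data; the function name `mse` and both prediction lists are assumptions for the example.

```python
# Minimal sketch (hypothetical data): mean squared error as the
# model-accuracy criterion described above -- a smaller MSE means the
# predictions sit closer to the sample data.

def mse(predicted, actual):
    """Mean squared error between predictions and observed outcomes."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical outputs of a standard BP network and a GA-optimized one
# evaluated on the same test samples.
actual     = [1.0, 2.0, 3.0, 4.0]
bp_pred    = [1.2, 1.7, 3.4, 3.6]   # standard BP network
ga_bp_pred = [1.05, 1.95, 3.1, 3.9] # GA-optimized BP network

print(mse(bp_pred, actual))     # larger error
print(mse(ga_bp_pred, actual))  # smaller error -> more precise model
```

The comparison in the cited passage reduces to exactly this ordering: whichever model yields the smaller MSE on the same samples is judged the more precisely trained.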
“…It is noticeable that both the BP neural network's sample data and the model-predicted data converge to the optimal training value around the 12th cycle, with the mean squared error dropping below 10^-3. The genetically optimized BP neural network's sample data and model prediction achieve their best training value in the 15th cycle, with the mean squared error decreasing below 10^-4. According to statistical definitions [15], a smaller MSE between predicted values and outcomes indicates a smaller disparity between the estimated value and the sample data, which in turn suggests higher accuracy of the trained model. The genetically optimized BP neural network exhibits a smaller MSE, indicating more precise model training compared to the standard BP neural network. Figure 11 is a comparison of the predicted and actual values of the test set.…”
confidence: 99%
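The convergence criterion described above (train until the MSE drops below a threshold such as 10^-3) can be sketched with a toy model. This is not the paper's BP network: the one-weight linear model, the data, and the learning rate are all assumptions chosen so the loop converges.

```python
# Sketch (hypothetical model and data): gradient-descent training that stops
# once the MSE falls below 1e-3, mirroring the stopping criterion quoted above.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.0, 4.0, 6.0]   # underlying relation y = 2x

w = 0.0            # single trainable weight (stand-in for a network)
lr = 0.05          # learning rate
threshold = 1e-3   # MSE level at which training is considered converged

for epoch in range(1, 1000):
    preds = [w * x for x in xs]
    err = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
    if err < threshold:
        break                      # converged: MSE below 10^-3
    # gradient of the MSE with respect to w
    grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    w -= lr * grad

print(epoch, round(w, 3))
```

The cycle at which `err` first crosses the threshold plays the same role as the "12th cycle" and "15th cycle" reported in the excerpt.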
“…(3) (4) where PS(x) and PV(x) are the probabilities of the presence of defect-active impulses of elastic deformation waves in a sufficiently thin analyzed surface layer of the material and, in the general case, of fluctuations of the SSS that can lead to critical growth of surface and subsurface defects, respectively; CS and CV are the average "concentrations" of these defect-critical ultrajet pulses impacting the surface and subsurface layers of the OA material. From the informative-physical logic of the above relations, it is easy to see that (1) and (2) are the necessary, and (2) and (3) the sufficient, conditions for material particle separation due to ultrajet erosion of the surface layer of the diagnosed OA caused by critical growth of a surface and/or subsurface defect. Therefore, according to the addition theorem of probability [13], we have: (5) where the left-hand side is the probability of separation of an erosion particle with a characteristic geometrical size x.…”
Section: Probabilistic Modeling
confidence: 99%
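The addition theorem of probability invoked in equation (5) can be checked by direct enumeration on a small sample space. The die-roll events below are illustrative assumptions, not the paper's erosion model.

```python
# Sketch of the addition theorem of probability cited above, verified by
# enumeration on a toy sample space (hypothetical events, not the erosion model).

from fractions import Fraction

omega = range(1, 7)                     # one roll of a fair die
A = {w for w in omega if w % 2 == 0}    # even outcome: {2, 4, 6}
B = {w for w in omega if w >= 4}        # at least 4:   {4, 5, 6}

def prob(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

# General form: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
assert prob(A | B) == prob(A) + prob(B) - prob(A & B)

# For mutually exclusive events the intersection term vanishes, so the
# probabilities simply add -- the form relied on in equation (5).
C = {1}                                 # disjoint from A
assert prob(A | C) == prob(A) + prob(C)

print(prob(A | B))  # 2/3
```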
“…Assume that Ω is the sample space of a random experiment. If for every sample point ω ∈ Ω there is a unique real number X = X(ω) corresponding to ω, and X takes its different values (or values in different ranges) with definite probabilities, then X is called a random variable (Chen, 2009).…”
Section: General Uncertainty Data and General Uncertainty Variable
confidence: 99%
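The definition above can be made concrete with a small example. The two-coin-toss experiment and the mapping "number of heads" are illustrative assumptions used only to show a random variable as a real-valued function on Ω.

```python
# Minimal sketch of the quoted definition: a random variable is a real-valued
# function X(omega) on the sample space, and it takes each of its values with
# a definite probability (hypothetical experiment: two fair coin tosses).

from fractions import Fraction
from collections import Counter

omega = ["HH", "HT", "TH", "TT"]        # sample space of two fair tosses
X = {w: w.count("H") for w in omega}    # X(omega) = number of heads

# Each sample point maps to a unique real number; the induced distribution
# of X assigns a definite probability to every value X can take.
pmf = {x: Fraction(n, len(omega)) for x, n in Counter(X.values()).items()}
print(pmf)  # e.g. P(X = 1) = 1/2
```

Mapping each ω to a unique number and then reading off the probabilities of the values is exactly the construction the cited definition describes.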