2023
DOI: 10.1038/s41598-023-32112-7

Stochastic gradient descent for optimization of nuclear systems

Abstract: The use of gradient descent methods for optimizing k-eigenvalue nuclear systems has been shown to be useful in the past, but the use of k-eigenvalue gradients has proved computationally challenging due to their stochastic nature. ADAM is a gradient descent method that accounts for gradients with a stochastic nature. This analysis uses challenge problems constructed to verify whether ADAM is a suitable tool to optimize k-eigenvalue nuclear systems. ADAM is able to successfully optimize nuclear systems using the gra…
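To illustrate the abstract's central claim, the sketch below runs a minimal Adam loop against a deliberately noisy gradient. The quadratic objective and Gaussian noise are hypothetical stand-ins for a k-eigenvalue calculation whose gradients are stochastic (e.g., estimated by Monte Carlo transport); nothing here reproduces the systems studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    """Gradient of a toy quadratic objective, corrupted with Gaussian noise
    to mimic a stochastic (e.g., Monte Carlo) k-eigenvalue gradient."""
    return 2.0 * (x - 3.0) + rng.normal(scale=0.5, size=x.shape)

def adam(grad_fn, x0, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Minimal Adam loop (Kingma & Ba); hyperparameters are the usual defaults."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean) estimate
    v = np.zeros_like(x)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)  # bias correction for zero initialization
        v_hat = v / (1 - beta2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

print(adam(noisy_grad, x0=[0.0]))  # settles near the optimum x = 3.0 despite the noise
```

The exponential moving averages m and v damp the sample-to-sample gradient noise, which is why Adam tolerates stochastic gradients where plain gradient descent would jitter around the optimum.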


Cited by 2 publications (3 citation statements)
References 14 publications
“…Like NNs, the training process for PINNs corresponds to the minimization problem $\min_P J(X; P)$. Training of the network parameters P is carried out using a gradient descent approach such as Adam [75] or L-BFGS-B (the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm) [76]. However, the required number of iterations depends highly on the problem (e.g., smoothness of the solution); see [57].…”
Section: The Combined Adam and L-BFGS-B Optimization (mentioning)
confidence: 99%
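As a sketch of the combined strategy this snippet's section title refers to (Adam for robust early progress, then L-BFGS-B for fast local refinement), the toy below substitutes a two-parameter least-squares loss for the PINN loss J(X; P); the network itself and all problem details are omitted, and the setup is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the PINN loss J(X; P): fit a line's parameters p to data.
x_data = np.linspace(0.0, 1.0, 50)
y_data = 2.0 * x_data + 1.0

def loss(p):
    resid = p[0] * x_data + p[1] - y_data
    return float(np.mean(resid**2))

def grad(p):
    resid = p[0] * x_data + p[1] - y_data
    return np.array([2 * np.mean(resid * x_data), 2 * np.mean(resid)])

# Stage 1: a few hundred Adam steps for robust early progress.
p = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
b1, b2, eps, lr = 0.9, 0.999, 1e-8, 0.05
for t in range(1, 301):
    g = grad(p)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    p -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)

# Stage 2: hand the Adam iterate to L-BFGS-B for local refinement.
res = minimize(loss, p, jac=grad, method="L-BFGS-B")
print(res.x)  # converges to roughly [2.0, 1.0]
```

A common rationale for chaining the two, consistent with the section title above, is that Adam copes well with a rough early loss landscape, while L-BFGS-B converges quickly once handed an iterate in the basin of the minimum.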
“…We perform fitting and calculation using the Adam gradient method. Unlike the conventional gradient descent method, the Adam gradient can automatically adjust the step size at each iteration as follows [21][22][23][24][25]:…”
Section: Mean Elements of Space Debris (mentioning)
confidence: 99%
“…We perform fitting and calculation using the Adam gradient method. Unlike the conventional gradient descent method, the Adam gradient can automatically adjust the step size at each iteration as follows [21,22,23,24,25]:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2,$$

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad x_t = x_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon},$$

where $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 1 \times 10^{-8}$, and $\hat{m}_t$ and $\hat{v}_t$ are the bias-corrected moment estimates; $x$ is an orbit element, $g_t$ is the partial derivative of the RMSE with respect to element $x$, and $t$ is the number of iterations.…”
Section: Calculation of Ballistic Coefficients from Optical Angle Mea... (mentioning)
confidence: 99%
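For concreteness, here is one step of the update quoted above at $t = 1$, using the stated constants together with an illustrative gradient $g_1 = 0.5$ and step size $\alpha = 10^{-3}$ (both of these numbers are assumptions for the worked example, not values from the cited work):

```latex
% One Adam step at t = 1 with \beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8},
% assuming an illustrative gradient g_1 = 0.5 and step size \alpha = 10^{-3}.
m_1 = 0.9 \cdot 0 + 0.1 \cdot 0.5 = 0.05,
\qquad
v_1 = 0.999 \cdot 0 + 0.001 \cdot 0.5^2 = 2.5 \times 10^{-4},
\\
\hat{m}_1 = \frac{0.05}{1 - 0.9^1} = 0.5,
\qquad
\hat{v}_1 = \frac{2.5 \times 10^{-4}}{1 - 0.999^1} = 0.25,
\\
x_1 = x_0 - 10^{-3} \cdot \frac{0.5}{\sqrt{0.25} + 10^{-8}} \approx x_0 - 10^{-3}.
```

At $t = 1$ the bias corrections exactly undo the zero initialization of the moments, so the first effective step equals the base step size $\alpha$ regardless of the raw gradient's magnitude; this is the automatic step-size adjustment the snippet refers to.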