2021
DOI: 10.1007/jhep08(2021)161

Quark Mass Models and Reinforcement Learning

Abstract: In this paper, we apply reinforcement learning to the problem of constructing models in particle physics. As an example environment, we use the space of Froggatt-Nielsen type models for quark masses. Using a basic policy-based algorithm we show that neural networks can be successfully trained to construct Froggatt-Nielsen models which are consistent with the observed quark masses and mixing. The trained policy networks lead from random to phenomenologically acceptable models for over 90% of episodes and after …
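The abstract describes training a policy network with a basic policy-gradient algorithm to walk through a space of discrete model choices toward phenomenologically acceptable ones. The following is a minimal REINFORCE-style sketch of that idea, not the paper's actual setup: the charge vector, the toy reward (distance to a made-up target assignment), and the linear softmax policy standing in for the neural network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a Froggatt-Nielsen-like environment: a state is a vector
# of integer charges; an action increments or decrements one of them.
N_CHARGES = 6
N_ACTIONS = 2 * N_CHARGES
TARGET = np.array([3, 2, 0, 2, 1, 0], dtype=float)  # hypothetical target


def step(state, action):
    """Apply one charge change; reward is (minus) distance to the target.

    The real reward would score agreement with quark masses and mixing;
    this placeholder only mimics the shape of the problem."""
    new = state.copy()
    idx, sign = divmod(action, 2)
    new[idx] += 1.0 if sign == 0 else -1.0
    reward = -np.abs(new - TARGET).sum()
    return new, reward, reward == 0  # done when the target is reached


# Linear softmax policy, standing in for the paper's neural network.
theta = np.zeros((N_ACTIONS, N_CHARGES))


def policy(state):
    logits = theta @ state
    p = np.exp(logits - logits.max())  # stabilized softmax
    return p / p.sum()


def run_episode(alpha=0.01, n_step=8):
    """Sample one episode, then do a REINFORCE update on theta."""
    state = rng.integers(-3, 4, size=N_CHARGES).astype(float)
    trajectory = []
    for _ in range(n_step):
        probs = policy(state)
        action = rng.choice(N_ACTIONS, p=probs)
        next_state, reward, done = step(state, action)
        trajectory.append((state, action, reward))
        state = next_state
        if done:
            break
    episode_return = sum(r for _, _, r in trajectory)
    # grad of log softmax for a linear policy: (delta_ab - pi(b)) * state
    for s, a, _ in trajectory:
        probs = policy(s)
        grad_log_pi = -np.outer(probs, s)
        grad_log_pi[a] += s
        theta[:] += alpha * episode_return * grad_log_pi
    return episode_return
```

Repeating `run_episode` many times ascends the expected return; the paper's statement that over 90% of episodes end in acceptable models corresponds to the trained policy reliably reaching terminal (zero-penalty) states.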

Cited by 11 publications (10 citation statements). References 27 publications.
“…The hyperparameters in the ε-greedy method are described in the previous section. For the step number N_step = 32, the same value was used in the previous research that focuses on only the quark sector [12]. In ref.…”
Section: Neural Network
confidence: 99%
“…In ref. [12], it was shown that terminal states can be reached after a sufficient amount of learning. Therefore, it is expected that N_step = 32 is enough to achieve terminal states in the current situation, where the quark sector and the lepton sector are searched separately.…”
Section: Neural Network
confidence: 99%
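The statements above concern two hyperparameters: the ε-greedy exploration rate and the episode length N_step = 32 carried over from the quark-sector study. A minimal sketch of how those two choices interact in an episode loop, with a hypothetical ε value and toy environment callbacks (the real value functions and state transitions are assumptions):

```python
import random

N_STEP = 32     # episode length used in the cited works
EPSILON = 0.1   # hypothetical exploration rate; the papers tune their own


def epsilon_greedy(q_values, epsilon=EPSILON):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])


def run_episode(q_of_state, env_step, init_state, is_terminal):
    """Roll out at most N_STEP actions, stopping early at a terminal state.

    q_of_state, env_step, and is_terminal are placeholders for the trained
    value estimates, the model-space transition, and the phenomenological
    acceptability check, respectively."""
    state = init_state
    for _ in range(N_STEP):
        action = epsilon_greedy(q_of_state(state))
        state = env_step(state, action)
        if is_terminal(state):
            break
    return state
```

The point made in the quote is that once the agent is sufficiently trained, 32 steps suffice to reach a terminal state, so the cap rarely binds in practice.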
“…However, unlike [11,18], instead of using direct numerical methods, we take advantage of recent developments in the field of Artificial Intelligence and Machine Learning, where methods dealing with large parameter spaces have proven very effective. Our preferred algorithms are Reinforcement-Learning (RL) algorithms [20]; RL implementations have also recently appeared in the context of String Theory [21][22][23][24][25]. By default, these do not require externally provided data for training: they learn on their own through exploration.…”
Section: Introduction
confidence: 99%
“…This technique has made it possible to learn many quantities of Calabi-Yau manifolds, from their toric building blocks like the polytope structure [21,22] and triangulations [23,24], to the calculation of Hodge numbers [25][26][27][28], numerical metrics [29][30][31][32] and line bundle cohomologies [33,34]. In addition, machine learning has been applied to study and find certain structures on Calabi-Yau manifolds for model building [35][36][37][38][39][40][41][42].…”
Section: Introduction
confidence: 99%