2023
DOI: 10.1016/j.ress.2022.108908
Deep reinforcement learning for predictive aircraft maintenance using probabilistic Remaining-Useful-Life prognostics

Cited by 75 publications (28 citation statements)
References 44 publications
“…On the other hand, reinforcement learning methods have driven impressive advances in artificial intelligence in recent years, surpassing human performance in many domains [22][23][24][25][26][27]. More recently, some researchers have begun to use reinforcement learning to model collective motion in a learning way [28][29][30][31].…”
Section: Related Work (mentioning)
confidence: 99%
“…Generally, the existing approaches for RUL estimation can be categorized into three main groups: model-based approaches [1]- [6], data-driven approaches [7]- [27] and hybrid approaches [28], [29]. Model-based approaches can precisely estimate the RUL if the degradation process of physical systems is accurately modeled.…”
Section: Introduction (mentioning)
confidence: 99%
“…In addition to DL, deep reinforcement learning (DRL) has also been introduced to the field of prognostics. Lee and Mitici [27] proposed a framework integrating RUL prognostics into predictive maintenance planning. In this framework, the RUL distribution is first estimated by a CNN combined with Monte Carlo dropout.…”
Section: Introduction (mentioning)
confidence: 99%
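The Monte Carlo dropout idea mentioned in the statement above can be sketched briefly: keep dropout active at inference and run many stochastic forward passes, so the spread of the predictions approximates a predictive RUL distribution. A minimal NumPy sketch follows; the network shape, weights, and dropout rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" one-hidden-layer regressor (weights are made up for illustration).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def predict_with_dropout(x, p=0.5):
    """One stochastic forward pass: dropout stays active at inference time."""
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden layer
    mask = rng.random(h.shape) > p   # fresh Bernoulli dropout mask per pass
    h = h * mask / (1.0 - p)         # inverted-dropout scaling
    return (h @ W2).item()

x = rng.normal(size=(1, 8))          # one input (e.g. a sensor feature vector)
samples = np.array([predict_with_dropout(x) for _ in range(200)])

# Mean and spread of the samples approximate the predictive RUL distribution.
rul_mean, rul_std = samples.mean(), samples.std()
print(rul_mean, rul_std)
```

In a real pipeline the repeated passes would run through the trained CNN; the key point is only that the dropout mask is resampled on every pass rather than disabled at inference.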
“…RL has attracted wide attention since it beat the then-reigning world champion in a game of Go. In fact, RL has been successfully applied to many use cases [23]- [25] due to its remarkable ability to interact with various real-world environments. For the issue of dropout-layer inconsistencies, we adopt a method called Regularized Dropout (R-Drop) [26] to maintain consistency among dropout layers.…”
Section: Introduction (mentioning)
confidence: 99%
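The R-Drop regularizer referenced in the statement above can be illustrated in a few lines: run the same input through the model twice (each pass samples its own dropout mask) and add a symmetric KL term that penalizes disagreement between the two predictive distributions. This is a minimal NumPy sketch under assumed shapes and hyperparameters (a single linear layer, `p=0.3`, `alpha=1.0`), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W, p=0.3):
    """Forward pass that samples a fresh dropout mask on every call."""
    mask = rng.random(x.shape) > p
    h = x * mask / (1.0 - p)
    return softmax(h @ W)

def r_drop_loss(x, y_onehot, W, alpha=1.0):
    """Cross-entropy averaged over two passes + symmetric KL between them."""
    p1, p2 = forward(x, W), forward(x, W)   # two passes, two dropout masks
    ce = -0.5 * (np.sum(y_onehot * np.log(p1)) + np.sum(y_onehot * np.log(p2)))
    kl = 0.5 * (np.sum(p1 * np.log(p1 / p2)) + np.sum(p2 * np.log(p2 / p1)))
    return ce + alpha * kl

x = rng.normal(size=(1, 4))
W = rng.normal(size=(4, 3))
y = np.array([[1.0, 0.0, 0.0]])
loss = r_drop_loss(x, y, W)
```

The KL term is what distinguishes R-Drop from plain dropout training: it pushes the two stochastic sub-networks toward the same output, reducing the train/inference inconsistency that dropout otherwise introduces.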