2021
DOI: 10.48550/arXiv.2102.01434
Preprint

An Abstraction-based Method to Check Multi-Agent Deep Reinforcement-Learning Behaviors

Abstract: Multi-agent reinforcement learning (RL) often struggles to ensure the safe behaviour of the learning agents, and it is therefore generally not suited to safety-critical applications. To address this issue, we present a methodology that combines formal verification with (deep) RL algorithms to guarantee the satisfaction of formally specified safety constraints both during training and testing. The approach we propose expresses the constraints to verify in Probabilistic Computation Tree Logic (PCTL) and builds an …
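The abstract is truncated at this point, so the paper's concrete constraints are not visible here. As a purely illustrative example of the kind of property PCTL can express (the threshold 0.99, the bound k, and the atomic proposition "unsafe" are placeholders, not values from the paper), a bounded safety constraint could read:

P_{\geq 0.99} \left[ \mathbf{G}^{\leq k}\, \neg \mathit{unsafe} \right]

i.e., with probability at least 0.99, the agents stay out of every unsafe state for the next k steps. Properties of this shape are what probabilistic model checkers verify against a finite-state model of the system.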

Cited by 1 publication (1 citation statement)
References 15 publications (40 reference statements)
“…Three features make verifying DRL systems a challenging problem. First, the state space of such control systems is usually infinite and continuous, but most model-checking-based approaches can only handle finite-state models [20]. Second, the system dynamics are generally nonlinear, which increases the complexity of formal verification [4].…”
Section: Introduction
confidence: 99%
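The cited passage motivates the abstraction step: probabilistic model checking needs a finite-state model, while a DRL control system lives in a continuous state space. Below is a minimal sketch of that general idea, not the paper's actual construction: continuous states are mapped to grid cells, and an abstract transition model is estimated from sampled trajectories. The grid resolution, state bounds, and sampled data are all hypothetical.

import numpy as np
from collections import defaultdict

# Hypothetical setup: a 2D continuous state space normalised to [0, 1) x [0, 1).
N_CELLS = 10  # illustrative grid resolution per dimension

def abstract_state(s):
    # Map a continuous state to the index of its grid cell.
    i = min(int(s[0] * N_CELLS), N_CELLS - 1)
    j = min(int(s[1] * N_CELLS), N_CELLS - 1)
    return i * N_CELLS + j

def build_abstract_mdp(trajectories):
    # Count (cell, action) -> next-cell transitions, then normalise
    # the counts into estimated transition probabilities.
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for s, a, s_next in traj:
            counts[(abstract_state(s), a)][abstract_state(s_next)] += 1
    return {
        key: {cell: n / sum(succ.values()) for cell, n in succ.items()}
        for key, succ in counts.items()
    }

# Usage with made-up data: one trajectory of (state, action, next_state) triples.
rng = np.random.default_rng(0)
trajectory = [(rng.random(2), 0, rng.random(2)) for _ in range(100)]
abstract_mdp = build_abstract_mdp([trajectory])

The finite MDP produced this way can then be handed to a probabilistic model checker such as PRISM or Storm to check PCTL properties like the bounded-safety example above; a finer grid tightens the abstraction at the cost of a larger model.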