2022
DOI: 10.1007/978-3-031-06773-0_10
Verified Probabilistic Policies for Deep Reinforcement Learning

Cited by 2 publications (3 citation statements)
References 32 publications
“…All experiments were executed on… ¹ MLflow is a platform to streamline ML development, including tracking experiments, packaging code into reproducible experiments, and sharing and deploying models [41]. ² We refer the interested reader to the tool's repository, https://github.com/LAVA-LAB/COOL-MC, for more experiments with these and other environments.…”
Section: Numerical Experiments (confidence: 99%)
“…Thus, the input of COOL-MC consists of two models of the environment: (1) an OpenAI-gym compatible environment, to train an RL policy; (2) a Markov decision process (MDP), specified in the PRISM language [31], to verify the policy against a formal specification, e.g., a probabilistic computation tree logic (PCTL) formula. Only the MDP model of the environment is required: if no OpenAI-gym environment is given, COOL-MC provides a wrapper to cast the MDP as an OpenAI-gym environment.…”
Section: Introduction (confidence: 99%)
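The second citation statement describes COOL-MC casting a PRISM-specified MDP as an OpenAI-gym environment when no gym model is supplied. The sketch below illustrates the general idea only; it is not COOL-MC's actual implementation, and the tabular transition encoding, class name, and two-state example are assumptions made here for illustration.

```python
# Illustrative sketch (assumed, not COOL-MC's code): exposing a tabular MDP
# through a Gym-style reset/step interface so an RL agent can train on it.
import random

class MDPEnv:
    """Minimal Gym-like wrapper over an explicit MDP.

    transitions[state][action] is a list of (probability, next_state, reward)
    triples; a state with no outgoing actions is treated as terminal.
    """
    def __init__(self, transitions, initial_state):
        self.transitions = transitions
        self.initial_state = initial_state
        self.state = initial_state

    def reset(self):
        self.state = self.initial_state
        return self.state

    def step(self, action):
        outcomes = self.transitions[self.state][action]
        # Sample a successor according to the transition distribution.
        draw, cumulative = random.random(), 0.0
        for prob, next_state, reward in outcomes:
            cumulative += prob
            if draw <= cumulative:
                break
        self.state = next_state
        done = not self.transitions.get(next_state)  # terminal if no actions
        return next_state, reward, done, {}

# Two-state example: from s0, action 0 reaches terminal goal s1 with
# probability 0.9 (reward 1.0) and stays in s0 with probability 0.1.
env = MDPEnv({0: {0: [(0.9, 1, 1.0), (0.1, 0, 0.0)]}, 1: {}}, initial_state=0)
obs = env.reset()
next_obs, reward, done, info = env.step(0)
```

Under this encoding, a PCTL reachability property such as P≥0.9 [ F "goal" ] could then be checked on the underlying MDP by a probabilistic model checker, while the wrapper serves only the training side.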