2017
DOI: 10.48550/arxiv.1711.09602
Preprint

Deep Reinforcement Learning for Sepsis Treatment

Cited by 31 publications (63 citation statements)
References 0 publications
Citation types: 0 supporting, 60 mentioning, 0 contrasting

“…Consistent with previous work [13], [14], [15], we focus on vasopressor and IV fluid treatments, which, whilst regarded as important, show significant variation in patient response [3], [4], [16]. Further, there is little agreement among medical researchers on best practices that guide fluid or vasopressor administration beyond initial resuscitation.…”
Section: Introduction (mentioning)
confidence: 74%

“…This approach was extended by the authors to alternative reward schemes more in line with current medical decision-making, and to continuous state spaces. Raghu et al. [14] used DRL (with a Dueling DDQN [30] algorithm) and a continuous state representation. Peng et al. [15] consider partial observability.…”
Section: Related Work (mentioning)
confidence: 99%

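For context on the Dueling DDQN mentioned in the statement above, the following is a minimal sketch of the dueling architecture, assuming PyTorch; the class name, layer sizes, and single-layer trunk are illustrative assumptions rather than details from [14] or [30].

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Q-network with separate value and advantage streams (hypothetical sizes)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # estimates V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # estimates A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)      # shape (batch, 1)
        a = self.advantage(h)  # shape (batch, n_actions)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a');
        # subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

In a Dueling DDQN, the Double DQN target (action selected by the online network, evaluated by the target network) is computed on top of this architecture.
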
“…So, its recommendations can be reviewed or changed by the decision-maker before they are applied. Similar methodologies were applied to sepsis treatment [19], [20]. Offline reinforcement learning can also be applied to many other sequential decision-making problems, such as in healthcare, spoken dialogue systems, self-driving cars, and robotics [15].…”
Section: A. Related Work (mentioning)
confidence: 99%

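The review-before-apply pattern described in this statement can be sketched as below; the function name and the confirm callback are hypothetical illustrations, not an interface from [19] or [20].

```python
def recommend_and_confirm(q_values, action_labels, confirm):
    """Rank actions by estimated value, then defer the final choice to a human.

    q_values: estimated Q(s, a) for each candidate action in the current state.
    confirm: callable(recommendation, ranked) -> the action actually applied,
             letting the decision-maker accept or override the suggestion.
    """
    ranked = sorted(zip(action_labels, q_values), key=lambda pair: -pair[1])
    recommendation = ranked[0][0]           # highest-value action is suggested
    return confirm(recommendation, ranked)  # the human has the final say
```
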
“…Off-policy policy evaluation tackles the performance prediction problem by producing an estimate that minimizes some notion of error [16]. Alternatives to OPE include, for example, crowd-sourced human labeling of agent actions [37], expert qualitative analysis [20], and policy ranking [27].…”
Section: Off-Policy Policy Evaluation (OPE) (mentioning)
confidence: 99%

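As a concrete, generic instance of such an estimator, the sketch below shows ordinary importance sampling; it is not the specific estimator of [16], and the trajectory format and policy interfaces are assumptions for illustration.

```python
import numpy as np

def is_estimate(trajectories, pi_eval, pi_behavior, gamma=0.99):
    """Importance-sampling estimate of pi_eval's value from off-policy data.

    Each trajectory is a list of (state, action, reward) tuples; pi_eval and
    pi_behavior map (state, action) to that action's probability under the
    evaluation and behavior policies, respectively.
    """
    returns = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_eval(s, a) / pi_behavior(s, a)  # cumulative likelihood ratio
            ret += (gamma ** t) * r                      # discounted return
        returns.append(weight * ret)
    # Unbiased when pi_behavior covers pi_eval's actions, but can be high-variance.
    return float(np.mean(returns))
```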