2022
DOI: 10.1109/jbhi.2022.3183854
Supervised Optimal Chemotherapy Regimen Based on Offline Reinforcement Learning

Cited by 14 publications (4 citation statements)
References 29 publications
“…Emerson et al. [17] proposed using ORL to learn a safer blood-glucose control strategy for people with Type 1 diabetes. Shiranthika et al. [18] developed a supervised optimal chemotherapy regimen that provides cancer patients with an optimal chemotherapy-dosing schedule, thus assisting oncologists in clinical decision-making. Wang et al. [19] used ORL to learn the optimal treatment strategy for sepsis patients in the ICU.…”
Section: Offline Reinforcement Learning
confidence: 99%
“…It is noteworthy that some research endeavors have harnessed offline RL techniques in sensitive domains like personalized patient treatment [30, 12]. Intriguingly, as of our knowledge cutoff, there have been no prior attempts to apply offline RL to conditional de novo drug design.…”
Section: Related Work
confidence: 99%
“…The identified algorithms primarily aided in the dose individualization of anticoagulants, immunosuppressants and antibiotics [1–5]. In oncology, most of the studies we identified used reinforcement learning, including classical Q-Learning [6–9], deep Q-Learning [10, 11], deep double Q-Learning [12], fuzzy reinforcement learning [13, 14], conservative Q-Learning [15] and other approaches [16, 17]. Recently, several models have been proposed using neural networks for the prediction of drug concentrations [4, 18–20].…”
Section: Introduction
confidence: 99%