Background: Timely documentation of care preferences is an endorsed quality indicator for seriously ill patients admitted to intensive care units. Clinicians document their conversations about these preferences as unstructured free text in clinical notes within electronic health records. Aim: To apply deep learning algorithms for automated identification of serious illness conversations documented in physician notes during intensive care unit admissions. Design: Using a retrospective dataset of physician notes, clinicians annotated all text documenting patient care preferences (goals of care or code status limitations), communication with family, and full code status. The clinician-coded text was used to train and validate algorithms that identify this documentation. The validated algorithms were then deployed to assess the percentage of intensive care unit admissions of patients aged ⩾75 that had care preferences documented within the first 48 h. Setting/participants: Patients admitted to one of five intensive care units. Results: Algorithm performance was calculated by comparing machine-identified documentation with clinician-coded documentation. For detecting care preference documentation at the note level, the algorithm had an F1-score of 0.92 (95% confidence interval, 0.89 to 0.95), sensitivity of 93.5% (95% confidence interval, 90.0% to 98.0%), and specificity of 91.0% (95% confidence interval, 86.4% to 95.3%). Applying the algorithms to 1350 admissions of patients aged ⩾75, we found that 64.7% of intensive care unit admissions had care preferences documented within the first 48 h. Conclusion: Deep learning algorithms identified patient care preference documentation with sensitivity and specificity approaching those of clinicians, in a tiny fraction of the time. Future research should determine the generalizability of these methods across multiple healthcare systems.
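The note-level metrics reported above follow the standard confusion-matrix definitions. As a minimal illustration (the counts below are hypothetical, chosen only to land near the reported values, and are not the study's actual confusion matrix):

```python
def note_level_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics for note-level detection."""
    sensitivity = tp / (tp + fn)   # coded notes the algorithm correctly flagged
    specificity = tn / (tn + fp)   # uncoded notes the algorithm correctly ignored
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Hypothetical counts (not from the study), illustrating the computation
sens, spec, f1 = note_level_metrics(tp=187, fp=18, fn=13, tn=182)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} F1={f1:.2f}")
```

With these illustrative counts the function yields sensitivity 0.935, specificity 0.910, and an F1-score of about 0.92, matching the scale of the reported results.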
Understanding decision-making in clinical environments is of paramount importance if we are to bring the strengths of machine learning to bear on ultimately improving patient outcomes. Several factors, including the availability of public data, the intrinsically offline nature of the problem, and the complexity of human decision making, have meant that mainstream algorithm development is often geared towards optimal performance on tasks that do not necessarily translate well to the medical setting, frequently overlooking the more niche issues commonly associated with the area. We therefore present a new benchmarking suite designed specifically for medical sequential decision making: the Medkit-Learn(ing) Environment, a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data. While providing a standardised way to compare algorithms in a realistic medical setting, we employ a generating process that disentangles the policy and environment dynamics to allow for a range of customisations, thus enabling systematic evaluation of algorithms' robustness against specific challenges prevalent in healthcare.
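The core idea of a generating process that disentangles policy from environment dynamics can be sketched as two independently swappable components driving one rollout loop. This is a toy illustration of the concept only, not Medkit-Learn's actual interface; all functions and parameters here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def env_step(state, action):
    """Toy linear-Gaussian patient dynamics (stand-in for learned dynamics)."""
    drift = np.array([[0.9, 0.05], [0.0, 0.95]]) @ state
    effect = np.array([0.0, -0.3]) * action   # treatment lowers the second vital
    return drift + effect + rng.normal(scale=0.05, size=2)

def policy(state, temperature=1.0):
    """Toy clinician policy: treat more often as the second vital rises."""
    logit = state[1] / temperature
    return int(rng.random() < 1.0 / (1.0 + np.exp(-logit)))

def rollout(horizon=10):
    """Generate one synthetic trajectory of (state, action) pairs."""
    state, traj = np.array([1.0, 0.5]), []
    for _ in range(horizon):
        action = policy(state)
        traj.append((state.copy(), action))
        state = env_step(state, action)
    return traj

traj = rollout()
```

Because `env_step` and `policy` are separate, either can be customised independently, e.g. making the policy more biased or the dynamics noisier, which is the kind of controlled variation that enables systematic robustness evaluation.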
Human decision making is well known to be imperfect, and the ability to analyse such processes individually is crucial when attempting to aid or improve a decision-maker's ability to perform a task, e.g. to alert them to potential biases or oversights on their part. To do so, it is necessary to develop interpretable representations of how agents make decisions and how this process changes over time as the agent learns online in reaction to the accrued experience. To then understand the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of this online learning problem. By interpreting actions within a potential outcomes framework, we introduce a meaningful mapping based on agents choosing the action they believe to have the greatest treatment effect. We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them, using a novel architecture built upon an expressive family of deep state-space models. Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
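The action-selection rule described above — an agent choosing the action it believes to have the greatest treatment effect, with beliefs revised online — can be sketched with a toy stand-in. The class below is purely illustrative: it uses a simple exponential moving average where the paper uses a deep state-space model, and all names are hypothetical:

```python
import numpy as np

class PerceivedEffectAgent:
    """Toy stand-in for the perceived-treatment-effect decision model."""

    def __init__(self, n_actions, lr=0.1):
        self.effects = np.zeros(n_actions)   # perceived treatment effect per action
        self.lr = lr

    def act(self):
        # Choose the action currently believed to have the greatest effect
        return int(np.argmax(self.effects))

    def update(self, action, observed_outcome):
        # Online belief revision after observing an outcome; the paper's
        # architecture learns this update, here it is a fixed moving average
        self.effects[action] += self.lr * (observed_outcome - self.effects[action])
```

Inverting this process — recovering the trajectory of `self.effects` and the update rule from observed actions alone — is the retrospective estimation problem the paper's algorithm addresses.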