Motivated by the patient triage problem in emergency response, we consider a single-server clearing system in which jobs may abandon the system if they are not taken into service within their "lifetime." In this system, jobs are characterized by their lifetime and service time distributions. Our objective is to dynamically determine the optimal or near-optimal order of service for jobs so as to minimize the total number of abandonments. We first show that if the jobs can be ordered in such a way that the job with the shortest lifetime (in the sense of hazard rate ordering) also has the shortest service time (in the sense of likelihood ratio ordering), then the optimal policy gives the highest priority to this "time-critical" job independently of the system state. For the case where jobs with shorter lifetimes have longer service times, we observe that the optimal policy generally has a complex structure that may depend on the type and number of jobs available. For this case, we provide partial characterizations of the optimal policy and obtain sufficient conditions under which a state-independent policy is optimal. Furthermore, we develop two state-dependent heuristic policies and, by means of a numerical study, show that these heuristics perform well, especially when jobs abandon the system at a relatively fast rate compared to service rates. Based on our analytical and numerical results, we develop several insights on patient triage in the immediate aftermath of a mass-casualty event. For example, we conclude that in a worst-case scenario, where medical resources are overwhelmed by a large number of casualties who need immediate attention, it is crucial to implement state-dependent policies such as the heuristic policies proposed in this paper.
In the aftermath of mass-casualty events, key resources (such as ambulances and operating rooms) can be overwhelmed by the sudden jump in patient demand. To ration these resources, patients are assigned different priority levels, a process called triage. According to the triage protocols currently in place, each patient's priority level is determined based on that patient's injuries only. However, recent work from the emergency medicine literature suggests that when determining priorities, resource limitations and the scale of the event should also be taken into account in order to do the greatest good for the greatest number. This article investigates how this can be done and what the potential benefits would be. We formulate the problem as a priority assignment problem in a clearing system with multiple classes of impatient jobs. Jobs are classified based on their lifetime (i.e., their tolerance for wait), service time, and reward distributions. Our objective is to maximize the expected total reward, e.g., the expected total number of survivors. Using sample-path methods and stochastic dynamic programming, we identify conditions under which the state information is not needed for prioritization decisions. In the absence of these conditions, we partially characterize the optimal policy, which is possibly state dependent, and we propose a number of heuristic policies. By means of a numerical study, we demonstrate that simple state-dependent policies that prioritize less urgent jobs when the total number of jobs is large perform well, especially when jobs are time-critical.
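The kind of state-dependent rule described above (prioritize less urgent jobs when the system is congested) can be illustrated with a small Monte Carlo sketch. The model below is an illustrative two-class clearing system with exponential lifetimes and service times; the class parameters, the threshold `K`, and all function names are assumptions chosen for illustration, not the actual heuristics developed in the paper.

```python
import random

def simulate(policy, counts0, mu, theta, seed=None):
    """Single-server clearing system with two job classes.

    counts0[c]: initial number of class-c jobs; mu[c]: exponential
    service rate; theta[c]: exponential abandonment (lifetime) rate.
    Class 0 is the 'urgent' class (largest theta). Returns how many
    jobs were taken into service before abandoning.
    """
    rng = random.Random(seed)
    counts = list(counts0)
    served = 0
    while sum(counts) > 0:
        c = policy(counts)                  # class to serve next
        counts[c] -= 1
        t = rng.expovariate(mu[c])          # service duration
        # every waiting job abandons if its residual lifetime ends
        # before this service completes (lifetimes are memoryless)
        for k in range(2):
            counts[k] = sum(1 for _ in range(counts[k])
                            if rng.expovariate(theta[k]) > t)
        served += 1
    return served

def urgent_first(counts):
    # fixed priority: always serve the urgent class if any remain
    return 0 if counts[0] > 0 else 1

def congestion_aware(counts, K=6):
    # hypothetical threshold rule: when the backlog exceeds K,
    # serve the less urgent (more savable) class first
    if sum(counts) > K and counts[1] > 0:
        return 1
    return 0 if counts[0] > 0 else 1

def mean_served(policy, reps=2000):
    # Monte Carlo estimate of the expected number of jobs served,
    # with (assumed) time-critical parameters: urgent jobs abandon
    # much faster than they can be served
    return sum(simulate(policy, (10, 10), mu=(1.0, 1.0),
                        theta=(3.0, 0.2), seed=r)
               for r in range(reps)) / reps
```

Comparing `mean_served(urgent_first)` with `mean_served(congestion_aware)` under such parameters gives a rough feel for the abstract's conclusion that congestion-aware rules can help most when jobs are time-critical relative to service rates.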
The most widely used standard for mass-casualty triage, START, relies on a fixed-priority ordering among different classes of patients and does not explicitly consider resource limitations or the changes in survival probabilities over time. We construct a fluid model of patient triage in a mass-casualty incident that incorporates these factors and characterize its optimal policy. We use this characterization to obtain useful insights into the types of simple policies that are likely to perform well in practice, and we demonstrate how one could develop such a policy. Using a realistic simulation model and data from the emergency medicine literature, we show that the policy we developed based on our fluid formulation outperforms START in all scenarios considered, sometimes substantially.
According to the American College of Emergency Physicians, emergency department (ED) crowding occurs when the identified need for emergency services exceeds the available resources for patient care in the ED, the hospital, or both. ED crowding is a widely reported problem, and several crowding scores have been proposed to quantify crowding, using hospital and patient data as inputs, in order to assist healthcare professionals in anticipating imminent crowding problems. Using data from a large academic hospital in North Carolina, we evaluate three crowding scores, namely EDWIN, NEDOCS, and READI, by assessing the strengths and weaknesses of each score, particularly their predictive power. We perform these evaluations by first building a discrete-event simulation model of the ED, validating the results of the simulation model against observations at the ED under consideration, and then utilizing the model results to investigate each of the three crowding scores under normal operating conditions and under two simulated outbreak scenarios in the ED. We conclude that, for this hospital, both EDWIN and NEDOCS prove to be helpful measures of current ED crowdedness, and both scores demonstrate the ability to anticipate impending crowdedness. Utilizing the EDWIN and NEDOCS scores in combination with the threshold values proposed in this work could provide a real-time alert for clinicians to anticipate impending crowding, which could lead to better preparation and, eventually, better patient care outcomes.
To estimate the variance parameter (i.e., the sum of covariances at all lags) for a steady-state simulation output process, we formulate certain statistics that are computed from overlapping batches separately and then averaged over all such batches. We form overlapping versions of the area and Cramér–von Mises estimators using the method of standardized time series. For these estimators, we establish (i) their limiting distributions as the sample size increases while the ratio of the sample size to the batch size remains fixed; and (ii) their mean-square convergence to the variance parameter as both the batch size and the ratio of the sample size to the batch size increase. Compared with their counterparts computed from nonoverlapping batches, the estimators computed from overlapping batches asymptotically achieve reduced variance while maintaining the same bias as the sample size increases; moreover, the new variance estimators usually achieve similar improvements compared with the conventional variance estimators based on nonoverlapping or overlapping batch means. In follow-up work, we present several analytical and Monte Carlo examples, and we formulate efficient procedures for computing the overlapping estimators with only order-of-sample-size effort.
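The conventional overlapping-batch-means (OBM) estimator that the abstract uses as a baseline can be sketched briefly. This is the classical OBM form for the variance parameter, not the standardized-time-series area or Cramér–von Mises estimators the paper develops; the function name is illustrative.

```python
def obm_variance(x, m):
    """Classical overlapping-batch-means estimate of the variance
    parameter sigma^2 (the sum of autocovariances at all lags)
    from an output sequence x of length n, using batch size m."""
    n = len(x)
    if not 1 < m < n:
        raise ValueError("need 1 < m < n")
    xbar = sum(x) / n
    # a running window sum yields all n - m + 1 overlapping batch
    # means in O(n) total work
    window = sum(x[:m])
    ss = 0.0
    for i in range(n - m + 1):
        ss += (window / m - xbar) ** 2
        if i + m < n:
            window += x[i + m] - x[i]
    # scaling constant of the standard OBM estimator
    return n * m * ss / ((n - m + 1) * (n - m))
```

For i.i.d. data the variance parameter reduces to the marginal variance, which provides a quick sanity check on the implementation.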