In this paper, we consider two fundamental inventory models, the single-period newsvendor problem and its multi-period extension, under the assumption that the explicit demand distributions are not known and that the only information available is a set of independent samples drawn from the true distributions. When the demand distributions are given explicitly, these models are well-studied and relatively straightforward to solve. However, in most real-life scenarios, the true demand distributions are not available or are too complex to work with. Thus, a sampling-driven algorithmic framework is very attractive, both in practice and in theory. We describe how to compute sampling-based policies, that is, policies that are computed based only on observed samples of the demands, without any access to, or assumptions on, the true demand distributions. Moreover, we establish bounds on the number of samples required to guarantee that, with high probability, the expected cost of the sampling-based policies is arbitrarily close (i.e., within arbitrarily small relative error) to the expected cost of the optimal policies, which have full access to the demand distributions. The bounds that we develop are general, easy to compute, and do not depend at all on the specific demand distributions.

Key words: inventory; approximation; sampling; algorithms; nonparametric

MSC2000 Subject Classification: Primary: 90B05; Secondary: 62G99

OR/MS subject classification: Primary: inventory/production, approximation/heuristics; Secondary: production/scheduling, approximation/heuristics, learning

1. Introduction In this paper, we address two fundamental models in stochastic inventory theory, the single-period newsvendor model and its multiperiod extension, under the assumption that the explicit demand distributions are not known and that the only information available is a set of independent samples drawn from the true distributions.
Under the assumption that the demand distributions are specified explicitly, these models are well-studied and usually straightforward to solve. However, in most real-life scenarios, the true demand distributions are not available or they are too complex to work with. Usually, the information that is available comes from historical data, from a simulation model, and from forecasting and market analysis of future trends in the demands. Thus, we believe that a sampling-driven algorithmic framework is very attractive, both in practice and in theory. In this paper, we shall describe how to compute sampling-based policies, that is, policies that are computed based only on observed samples of the demands, without any access to, or assumptions on, the true demand distributions. This is usually called a nonparametric approach. Moreover, we shall prove that the quality (expected cost) of these policies is very close to that of the optimal policies that are defined with respect to the true underlying demand distributions.
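In the single-period newsvendor case, such a sampling-based policy reduces to ordering an empirical quantile of the observed demand samples. A minimal sketch (the function name, cost parameters, and the Gaussian demand stream below are illustrative assumptions, not taken from the paper):

```python
import random

def saa_newsvendor(samples, holding_cost, backlog_cost):
    """Sample-based newsvendor policy: order the b/(b+h) empirical
    quantile of the observed demand samples, since the optimal quantity
    under a known distribution is the b/(b+h) quantile of demand."""
    ratio = backlog_cost / (backlog_cost + holding_cost)
    ordered = sorted(samples)
    # smallest order statistic whose empirical CDF reaches the ratio
    k = min(len(ordered) - 1, int(ratio * len(ordered)))
    return ordered[k]

# Hypothetical data: samples from a distribution the policy never sees.
random.seed(0)
demands = [random.gauss(100, 20) for _ in range(1000)]
q = saa_newsvendor(demands, holding_cost=1.0, backlog_cost=3.0)
# with b/(b+h) = 0.75, q approximates the 75th percentile of demand
```

With enough samples, q concentrates around the true critical quantile, which is the intuition behind the sample-size bounds described above.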
Consider the newsvendor model under the assumption that the underlying demand distribution is not known as part of the input. Instead, the only information available is a random, independent sample drawn from the demand distribution. This paper analyzes the sample average approximation (SAA) approach for the data-driven newsvendor problem. We obtain a new analytical bound on the probability that the relative regret of the SAA solution exceeds a threshold. This bound is significantly tighter than existing bounds, and it matches the empirical accuracy of the SAA solution observed in extensive computational experiments. The bound also reveals that the demand distribution's weighted mean spread (WMS) affects the accuracy of the SAA heuristic.
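The relative regret of the SAA solution is easy to probe empirically. The following Monte Carlo sketch (exponential demand, the cost parameters, and the sample sizes are illustrative assumptions) compares the expected cost of the SAA quantity against that of the true optimal quantity on a common evaluation stream:

```python
import math
import random
import statistics

def newsvendor_cost(q, demand, h, b):
    # overage cost h per leftover unit, underage cost b per unit short
    return h * max(q - demand, 0) + b * max(demand - q, 0)

def expected_cost(q, demand_sampler, h, b, n_eval=20000, seed=1):
    # Monte Carlo estimate of expected cost on a fixed evaluation stream
    rng = random.Random(seed)
    return statistics.fmean(
        newsvendor_cost(q, demand_sampler(rng), h, b) for _ in range(n_eval))

h, b = 1.0, 3.0
sampler = lambda rng: rng.expovariate(1 / 100)  # exponential demand, mean 100

# true optimum: the b/(b+h) quantile of Exp(mean 100)
q_opt = -100 * math.log(1 - b / (b + h))

# SAA quantity from 200 training samples: an empirical b/(b+h) quantile
rng = random.Random(0)
train = sorted(sampler(rng) for _ in range(200))
q_saa = train[min(len(train) - 1, int(b / (b + h) * len(train)))]

relative_regret = (expected_cost(q_saa, sampler, h, b)
                   / expected_cost(q_opt, sampler, h, b)) - 1
```

With only 200 samples, the relative regret of the SAA quantity in this toy setting is typically on the order of a few percent, consistent with the bound's message that moderate sample sizes already give good accuracy.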
We consider two classical stochastic inventory control models, the periodic-review stochastic inventory control problem and the stochastic lot-sizing problem. The goal is to coordinate a sequence of orders of a single commodity, aiming to supply stochastic demands over a discrete, finite horizon with minimum expected overall ordering, holding and backlogging costs. In this paper, we address the important problem of finding computationally efficient and provably good inventory control policies for these models in the presence of correlated and non-stationary (time-dependent) stochastic demands. This problem arises in many domains and has many practical applications in supply chain management. Our approach is based on a new marginal cost accounting scheme for stochastic inventory control models combined with novel cost-balancing techniques. Specifically, in each period, we balance the expected cost of overordering (i.e., costs incurred by excess inventory) against the expected cost of underordering (i.e., costs incurred by not satisfying demand on time). This leads to what we believe to be the first computationally efficient policies with constant worst-case performance guarantees for a general class of important stochastic inventory models. That is, there exists a constant C such that, for any instance of the problem, the expected cost of the policy is at most C times the expected cost of an optimal policy. In particular, we provide a worst-case guarantee of 2 for the periodic-review stochastic inventory control problem and a worst-case guarantee of 3 for the stochastic lot-sizing problem. Our results are valid for all of the currently known approaches in the literature to model correlation and non-stationarity of demands over time.
Using the well-known product-limit form of the Kaplan-Meier estimator from statistics, we propose a new class of nonparametric adaptive data-driven policies for stochastic inventory control problems. We focus on the distribution-free newsvendor model with censored demands. The assumption is that the demand distribution is not known and there are only sales data available. We study the theoretical performance of the new policies and show that for discrete demand distributions they converge almost surely to the set of optimal solutions. Computational experiments suggest that the new policies converge for general demand distributions, not necessarily discrete, and demonstrate that they are significantly more robust than previously known policies. As a by-product of the theoretical analysis, we obtain new results on the asymptotic consistency of the Kaplan-Meier estimator for discrete random variables that extend existing work in statistics. To the best of our knowledge, this is the first application of the Kaplan-Meier estimator within an adaptive optimization algorithm, in particular, the first application to stochastic inventory control models. We believe that this work will lead to additional applications in other domains.
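The product-limit form of the Kaplan-Meier estimator can be computed directly from censored sales records. A self-contained sketch, assuming each record is a (value, censored) pair where a censored record means sales were capped at the stocking level (so true demand is at least the recorded value); the data and function name below are illustrative:

```python
from collections import Counter

def kaplan_meier_survival(observations):
    """Product-limit estimate of the demand survival function
    S(t) = P(demand > t) from censored sales data. Each observation is
    (value, censored); censored=True means demand was at least `value`."""
    # uncensored observations are exact demand realizations ("events")
    events = Counter(v for v, censored in observations if not censored)
    survival, s = {}, 1.0
    for t in sorted({v for v, _ in observations}):
        at_risk = sum(1 for v, _ in observations if v >= t)
        d = events.get(t, 0)
        if at_risk > 0:
            s *= 1 - d / at_risk   # product-limit update at value t
        survival[t] = s
    return survival

# Hypothetical sales data: stocking level 5, so demand above 5 is
# recorded as a censored sale of 5.
data = [(3, False), (5, True), (2, False), (5, True), (4, False), (1, False)]
S = kaplan_meier_survival(data)
# e.g. S[3] = (5/6)*(4/5)*(3/4) = 0.5
```

The resulting survival estimate can then feed a plug-in newsvendor quantile, which is the spirit of using the estimator inside an adaptive ordering policy.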
We consider stochastic control inventory models in which the goal is to coordinate a sequence of orders of a single commodity, aiming to supply stochastic demands over a discrete finite horizon with minimum expected overall ordering, holding and backlogging costs. In this paper, we address the longstanding problem of finding computationally efficient and provably good inventory control policies for these models in the presence of correlated and non-stationary (time-dependent) stochastic demands. This problem arises in many domains and has many practical applications in supply chain management. We consider two classical models, the periodic-review stochastic inventory control problem and the stochastic lot-sizing problem with correlated and non-stationary demands. Here the correlation is inter-temporal, i.e., what we observe in period s changes our forecast for the demand in future periods. We provide what we believe to be the first computationally efficient policies with constant worst-case performance guarantees; that is, there exists a constant C such that, for any instance of the problem, the expected cost of the policy is at most C times the expected cost of an optimal policy. The dominant paradigm in almost all of the existing literature has been to formulate these models using a dynamic programming framework. This approach has turned out to be very successful in characterizing the structure of the optimal policies, which follow simple forms of state-dependent base-stock policies and state-dependent (s, S) policies. However, in case the demands are non-stationary and correlated over time, computing these optimal policies is likely to be intractable. We present a new approach that leads to general approximation algorithms with constant performance guarantees for these classical models.
Our approach is based on several novel ideas: we present a new (marginal) cost accounting for stochastic inventory models; we use cost-balancing techniques; and we consider non-base-stock (order-up-to) policies that are extremely easy to implement on-line. Our results are valid for all of the currently known approaches in the literature to model correlation and nonstationarity of demands over time. More specifically, we provide a general 2-approximation algorithm for the periodic-review stochastic inventory control problem and a 3-approximation algorithm for the stochastic lot-sizing problem. That is, the constant guarantees are 2 and 3, respectively. For the former problem, we show that the classical myopic policy can be arbitrarily more expensive compared to the optimal policy. We also present an extended class of myopic policies that provides both upper and lower bounds on the optimal base-stock levels.
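The cost-balancing idea can be illustrated in a simplified, single-period-lookahead form: order the quantity at which the expected overage cost equals the expected underage cost. The sketch below estimates both expectations from demand samples and finds the balance point by bisection; the function name, cost parameters, and sample-based setup are illustrative assumptions, and the paper's marginal cost accounting is considerably more general than this toy version:

```python
import random
import statistics

def balancing_order(inventory, demand_samples, h, b):
    """Cost-balancing sketch: choose the order quantity q at which the
    estimated overage cost h*E[(x+q-D)^+] equals the estimated underage
    cost b*E[(D-x-q)^+]; the difference is monotone in q, so bisect."""
    def imbalance(q):
        over = statistics.fmean(max(inventory + q - d, 0) for d in demand_samples)
        under = statistics.fmean(max(d - inventory - q, 0) for d in demand_samples)
        return h * over - b * under
    lo, hi = 0.0, max(demand_samples) + 1.0
    if imbalance(lo) >= 0:        # already overstocked: order nothing
        return 0.0
    for _ in range(60):           # bisection to high precision
        mid = (lo + hi) / 2
        if imbalance(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(0)
samples = [random.gauss(100, 20) for _ in range(2000)]
q = balancing_order(inventory=30.0, demand_samples=samples, h=1.0, b=1.0)
# with h = b, the balance point is the sample mean, so q is near 100 - 30
```

Repeating this balancing step period by period, with the expectations taken over the conditional demand distribution, is the flavor of the policies with the constant guarantees above.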
We develop the first algorithmic approach to compute provably good ordering policies for a multiperiod, capacitated, stochastic inventory system facing stochastic nonstationary and correlated demands that evolve over time. Our approach is computationally efficient and guaranteed to produce a policy with total expected cost no more than twice that of an optimal policy. As part of our computational approach, we propose a novel scheme to account for backlogging costs in a capacitated, multiperiod environment. Our cost-accounting scheme, called the forced marginal backlogging cost-accounting scheme, is significantly different from the period-by-period accounting approach to backlogging costs used in dynamic programming; it captures the long-term impact of a decision on system performance in the presence of capacity constraints. In the likely event that the per-unit order costs are large compared to the holding and backlogging costs, a transformation of cost parameters yields a significantly improved guarantee. We also introduce new semimyopic policies based on our new cost-accounting scheme to derive bounds on the optimal base-stock levels. We show that these bounds can be used to effectively improve any policy. Finally, empirical evidence is presented that indicates that the typical performance of this approach is significantly stronger than these worst-case guarantees.