Interval linear programming provides a tool for solving real-world optimization problems under interval-valued uncertainty. Instead of approximating or estimating crisp input data, the coefficients of an interval program may perturb independently within the given lower and upper bounds. However, in contrast to classical linear programming, an interval program cannot always be converted into a desired form without affecting its properties, due to the so-called dependency problem. In this paper, we discuss the transformations commonly used in linear programming, such as imposing non-negativity on free variables or splitting equations into inequalities, and their effects on interval programs. Specifically, we examine changes in the set of all optimal solutions, the optimal values, and the optimal value range. Since some of the considered properties do not hold in the general case, we also study a special class of interval programs, in which uncertainty affects only the objective function and the right-hand-side vector. For this class, we obtain stronger results.
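The dependency problem mentioned above can be illustrated on a toy instance (this example is our own sketch, not taken from the paper): splitting an interval equation into two inequalities duplicates its interval coefficient, and the two copies then perturb independently, which can even introduce infeasible scenarios.

```python
# Toy interval LP (illustrative sketch, not from the paper):
#   min x  s.t.  a*x = 2,  x >= 0,   with a in the interval [1, 2].
# Each scenario a yields the optimum x* = 2/a, so the optimal value
# range over the endpoint scenarios is [1, 2].
def original_optimum(a):
    return 2.0 / a

orig_opts = [original_optimum(a) for a in (1.0, 2.0)]

# Splitting the equation into a1*x <= 2 and a2*x >= 2 makes the two
# copies of the coefficient a perturb independently (the dependency
# problem).  The scenario a1=2, a2=1 demands 2x <= 2 and x >= 2 at
# once, i.e. it is infeasible -- the split changed the problem.
def split_optimum(a1, a2):
    # feasible iff 2/a2 <= 2/a1, i.e. a1 <= a2; then min x = 2/a2
    return 2.0 / a2 if a1 <= a2 else None  # None marks infeasibility

split_opts = [split_optimum(a1, a2) for a1 in (1.0, 2.0) for a2 in (1.0, 2.0)]
print(sorted(orig_opts))  # [1.0, 2.0]
print(split_opts)         # contains None: an infeasible scenario appeared
```

Only endpoint scenarios are enumerated here, which suffices for this one-variable illustration; it is not a general algorithm for interval programs.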
Consider a linear regression model in which some or all observations of the dependent variable have been rounded or interval-censored, so that only the resulting interval is available. Given a linear estimator β of the vector of regression parameters, we consider its possibilistic generalization for the model with rounded/censored data, called the OLS-set in the special case where β is the Ordinary Least Squares estimator. We derive a geometric characterization of this set: we show that it is a zonotope in the parameter space. We show that even for models with a small number of regression parameters and a small number of observations, the combinatorial complexity of the polyhedron can be high. We therefore derive simple bounds on the OLS-set. These bounds allow us to quantify the worst-case impact of rounding/censoring on the estimator β. We illustrate the approach with an example. We also observe that the method can be used to quantify the rounding/censoring effect in advance, before the experiment is carried out, and hence can inform the choice of measurement precision when the experiment is being planned.
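The zonotope structure admits cheap componentwise bounds, which can be sketched as follows (the code and notation are our own illustration under the stated assumptions, not the paper's construction): with interval observations y_i in [y_lo_i, y_hi_i], the set of OLS estimates is the image of the box [y_lo, y_hi] under the linear map H = (XᵀX)⁻¹Xᵀ, hence a zonotope, and each coordinate of the estimate is extremized by pushing each y_i to the endpoint matching the sign of H[j, i].

```python
import numpy as np

# Illustrative sketch: componentwise bounds on the OLS-set
# { H @ y : y_lo <= y <= y_hi },  where  H = (X^T X)^{-1} X^T.
def ols_set_bounds(X, y_lo, y_hi):
    H = np.linalg.solve(X.T @ X, X.T)        # H @ y is the OLS estimate
    center = H @ (y_lo + y_hi) / 2.0         # estimate at interval midpoints
    radius = np.abs(H) @ (y_hi - y_lo) / 2.0 # worst-case coordinatewise shift
    return center - radius, center + radius

# Example: line fit where every observation was rounded to +/- 0.5.
X = np.column_stack([np.ones(4), np.arange(4.0)])
y = np.array([1.0, 2.0, 2.0, 4.0])
lo, hi = ols_set_bounds(X, y - 0.5, y + 0.5)
```

Since the bounds are computed coordinate by coordinate, they describe the smallest axis-aligned box containing the zonotope, which is exactly the kind of simple worst-case quantification discussed above.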
A rank estimator in robust regression is a minimizer of a function which depends (among other factors) on the ordering of residuals but not on their values. Here we focus on the optimization aspects of rank estimators. We distinguish two classes of functions: the class with a continuous and convex objective function (CCC), which covers the rank estimators known from statistics, and a far more general class (GEN). We propose efficient algorithms for both classes. For GEN, we propose an enumerative algorithm that works in polynomial time as long as the number of regressors is O(1). The proposed algorithm exploits the special structure of the hyperplane arrangements arising in our problem and is superior to other known algorithms in this area. For the continuous and convex case, we propose an unconditionally polynomial algorithm that finds the exact minimizer, unlike the heuristic or approximate methods implemented in statistical packages.
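To make the objective concrete, here is a small sketch of a classical rank-estimator criterion, Jaeckel's dispersion with Wilcoxon scores (a standard example from the statistics literature; the grid search below is only an illustration and is unrelated to the paper's efficient algorithms):

```python
import numpy as np

# Jaeckel's dispersion with Wilcoxon scores for a single regressor:
#   D(beta) = sum_i a(R(r_i)) * r_i,   r_i = y_i - x_i * beta,
#   a(k) = k - (n + 1) / 2.
# D depends on the residuals only through their ordering (the ranks),
# which is the defining feature of a rank estimator.
def wilcoxon_dispersion(beta, x, y):
    r = y - x * beta
    ranks = np.argsort(np.argsort(r)) + 1    # ranks of residuals, 1..n
    a = ranks - (len(r) + 1) / 2.0           # centered Wilcoxon scores
    return float(a @ r)

# Crude grid search over the slope, purely to visualize the minimizer;
# real algorithms (including those proposed in the paper) are far more
# efficient and exact.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + np.array([0.1, -0.2, 0.0, 0.3, -0.1])
grid = np.linspace(0.0, 4.0, 4001)
best = min(grid, key=lambda b: wilcoxon_dispersion(b, x, y))
```

For Wilcoxon scores the dispersion is convex and piecewise linear in β, with breakpoints exactly at the pairwise slopes where two residuals tie; these ties are the one-dimensional trace of the hyperplane arrangements mentioned above.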