We introduce three different approaches to decision making under uncertainty when (I) there is only partial (both cardinally and ordinally scaled) information on an agent's preferences and (II) the uncertainty about the states of nature is described by a credal set (or some other imprecise probabilistic model). Specifically, situation (I) is modeled by a pair of binary relations, one specifying the partial rank order of the alternatives and the other modeling partial information on the strength of preference. Our first approach relies on decision criteria that construct complete rankings of the available acts based on generalized expectation intervals. Subsequently, we introduce different concepts of global admissibility that construct partial orders between the available acts by comparing them all simultaneously. Finally, we define criteria induced by suitable binary relations on the set of acts, which can therefore be understood as concepts of local admissibility. For certain criteria, we provide linear-programming-based algorithms for checking optimality/admissibility of acts. Additionally, the paper includes a discussion of a prototypical situation by means of a toy example.
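As a minimal sketch of the kind of computation behind generalized expectation intervals: for a credal set given by probability-interval bounds on the states, the lower and upper expectation of an act can each be obtained from one linear program. The function name and the interval-bound representation of the credal set are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def expectation_interval(utilities, lower, upper):
    """Lower and upper expected utility of an act over the interval
    credal set {p : lower <= p <= upper, sum(p) = 1}, via two LPs.
    (Illustrative sketch; assumes the bounds define a nonempty set.)"""
    u = np.asarray(utilities, dtype=float)
    n = len(u)
    A_eq = np.ones((1, n))          # probabilities sum to one
    bounds = list(zip(lower, upper))
    lo = linprog(c=u,  A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    hi = linprog(c=-u, A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return lo.fun, -hi.fun
```

For example, an act paying 1 in state 1 and 0 in state 2, with both state probabilities bounded in [0.3, 0.7], has the expectation interval [0.3, 0.7].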
This paper is concerned with decision making using imprecise probabilities. In the first part, we introduce a new decision criterion that allows for explicitly modeling how far decisions that are optimal under Walley's maximality may deviate from being optimal in the sense of Levi's E-admissibility. For this criterion, we also provide an efficient and simple algorithm based on linear programming theory. In the second part of the paper, we propose two new measures for quantifying the extent of E-admissibility of an E-admissible act, i.e. the size of the set of measures for which the corresponding act maximizes expected utility. The first measure is the maximal diameter of this set, while the second one relates to the maximal barycentric cube that can be inscribed into it. Here too, we give linear programming algorithms for computing both measures. Finally, we discuss some ideas in the context of ordinal decision theory. The paper concludes with a stylized application example illustrating all introduced concepts.
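To make the underlying E-admissibility check concrete: an act is E-admissible if some probability in the credal set makes it maximize expected utility among all acts, which reduces to a linear feasibility problem. The sketch below assumes a credal set described by probability-interval bounds; the function name and setup are illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def is_e_admissible(U, act, lower, upper):
    """Check E-admissibility of row `act` of the utility matrix U
    (acts x states): is there a probability p with lower <= p <= upper,
    sum(p) = 1, under which `act` maximizes expected utility?
    Solved as a feasibility LP (zero objective).  Illustrative sketch."""
    n_acts, n_states = U.shape
    # E_p[U[j]] - E_p[U[act]] <= 0 for every competing act j
    A_ub = np.array([U[j] - U[act] for j in range(n_acts) if j != act])
    b_ub = np.zeros(n_acts - 1)
    A_eq = np.ones((1, n_states))   # probabilities sum to one
    res = linprog(c=np.zeros(n_states), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=list(zip(lower, upper)), method="highs")
    return res.status == 0          # feasible <=> E-admissible
```

With two states whose probabilities both lie in [0.3, 0.7], the acts with payoffs (1, 0) and (0, 1) are E-admissible, while a constant act paying 0.4 is not: it would need a p that is simultaneously at most 0.4 and at least 0.6.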
One of the most promising applications of the methodology of imprecise probabilities in statistics is the reliable analysis of interval data (or more generally coarsened data). As soon as one refrains from making strong, often unjustified assumptions on the coarsening process, statistical models are naturally only partially identified and set-valued parameter estimators (identification regions) have to be derived. In this paper we consider linear regression analysis under interval data in the dependent variable. While in the traditional case of neglected imprecision different understandings of regression modeling lead to the same parameter estimators, we now have to distinguish between two different types of identification regions, called (Sharp) Marrow Region (SMR) and (Sharp) Collection Region (SCR) here. In addition, we propose the Set-loss Region (SR) as a compromise between SMR and SCR based on a set-domained loss function. We elaborate and discuss some fundamental properties of these regions and then illustrate the methodology in detail by an example, where the influence of different covariates on wine quality, measured by a coarse rating scale, is investigated. We also compare the different identification regions to classical estimates from a naive analysis and from common interval censorship modeling.
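A small illustration of why such identification regions are computationally tractable: the OLS estimator is linear in the response, so when the dependent variable is only known to lie in an interval, each coefficient attains its extreme values at a vertex of the box of admissible responses. The sketch below computes coordinate-wise bounds (a bounding box of the collection of OLS estimates, not the exact region); the function name is illustrative and the construction is not the paper's SMR/SCR machinery.

```python
import numpy as np

def collection_region_bounds(X, y_lo, y_hi):
    """Coordinate-wise bounds on the set of OLS estimates
    {beta_hat(y) : y_lo <= y <= y_hi}.  Since beta_hat = H @ y with
    H = (X'X)^{-1} X' linear in y, each coefficient's min/max follows
    in closed form from the signs of H.  Illustrative sketch; yields a
    bounding box of the collection of estimates, not the exact region."""
    H = np.linalg.solve(X.T @ X, X.T)   # H @ y gives the OLS coefficients
    lo = np.sum(np.where(H > 0, H * y_lo, H * y_hi), axis=1)
    hi = np.sum(np.where(H > 0, H * y_hi, H * y_lo), axis=1)
    return lo, hi
```

For instance, with covariate values 0, 1, 2 and response intervals [0, 1], [1, 2], [2, 3], the slope coefficient can range over [0.5, 1.5] depending on which responses inside the intervals are taken.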
We congratulate Ruobin Gong and Xiao-Li Meng on their thought-provoking paper demonstrating the power of imprecise probabilities in statistics. In particular, Gong and Meng clarify important statistical paradoxes by discussing them in the framework of generalized uncertainty quantification and different conditioning rules used for updating. In this note, we characterize all three conditioning rules as envelopes of certain sets of conditional probabilities. This view also suggests some generalizations that can be seen as compromise rules. Similar to Gong and Meng, our derivations mainly focus on Choquet capacities of order 2, and so we also briefly discuss in general their role as statistical models. We conclude with some general remarks on the potential of imprecise probabilities to cope with the multidimensional nature of uncertainty.
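As a small numerical illustration of the envelope view of conditioning: under the generalized Bayes rule, the lower and upper conditional probabilities are the envelope of the precise conditional probabilities over the credal set, and for a polytope with all extreme points assigning positive probability to the conditioning event, the linear-fractional objective p(A∩B)/p(B) attains its extremes at the extreme points. The function below is an illustrative sketch under that positivity assumption, not a general implementation of the rules discussed in the note.

```python
def generalized_bayes(extreme_points, event, conditioning):
    """Lower/upper conditional probability of `event` given `conditioning`
    under the generalized Bayes rule, as the envelope of p(A & B)/p(B)
    over the extreme points of the credal set.  Events are sets of state
    indices.  Illustrative sketch; assumes every extreme point gives the
    conditioning event positive probability."""
    a_and_b = set(event) & set(conditioning)
    ratios = []
    for p in extreme_points:
        pB = sum(p[s] for s in conditioning)
        pAB = sum(p[s] for s in a_and_b)
        ratios.append(pAB / pB)
    return min(ratios), max(ratios)
```

With three states, extreme points (0.5, 0.3, 0.2) and (0.2, 0.3, 0.5), conditioning on {0, 1} and asking for {0}, the envelope is [0.4, 0.625].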
Since coarse(ned) data naturally induce set-valued estimators, analysts often assume coarsening at random (CAR) to force them to be single-valued. Focusing on a coarse categorical response variable and a precisely observed categorical covariate, we re-illustrate the impossibility of testing CAR and contrast it with another type of coarsening called subgroup independence (SI), using the data of the German Panel Study "Labour Market and Social Security" as an example. It turns out that, depending on the number of subgroups and categories of the response variable, SI can be point-identifying like CAR, yet testable unlike CAR. A main goal of this paper is the construction of a likelihood-ratio test for SI. All issues are similarly investigated for the generalized versions proposed here, gCAR and gSI, allowing a more flexible application of this hypothesis test.