Abstract: This paper illustrates how one can deduce preference from observed choices when attention is not only limited but also random. In contrast to earlier approaches, we introduce a Random Attention Model (RAM) where we abstain from any particular attention formation, and instead consider a large class of nonparametric random attention rules. Our model imposes one intuitive condition, termed Monotonic Attention, which captures the idea that each consideration set competes for the decision-maker's attention. We then…
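For concreteness, one standard way to state the Monotonic Attention condition named in the (truncated) abstract is the display below. The attention-rule notation μ(T | S), the probability that the consideration set is T when the choice problem is S, is an assumption of this sketch rather than something given in the excerpt:

\[
\mu(T \mid S) \;\le\; \mu\big(T \mid S \setminus \{a\}\big)
\qquad \text{for every } T \subseteq S \text{ and every } a \in S \setminus T .
\]

That is, discarding an alternative that was not being considered anyway cannot make the remaining consideration set T less likely: the discarded alternative no longer competes for the decision-maker's attention.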
“…The monotonicity of attention rules in (3.14) can be viewed as regularity of the process that chooses a consideration set from the subsets of the choice set. Cattaneo, Ma, Masatlioglu, and Suleymanov (2017) show that it is implied by various models of limited attention. While the violation required in (3.16) is weak in that it needs only to occur for some G, it sheds a different light on the severity of the identification problem described at the beginning of this section.…”
Section: Unobserved Heterogeneity In Choice Sets And/or Consideration (mentioning)
confidence: 89%
“…Key Insight 3.5: Cattaneo, Ma, Masatlioglu, and Suleymanov (2017) show that learning features of preference orderings in Identification Problem 3.5 requires the existence in the data of choice problems where the choice probabilities satisfy (3.16). The latter is a violation of the principle of "regularity" (Luce and Suppes, 1965) according to which the probability of choosing an alternative from any set is at least as large as the probability of choosing it from any of its supersets.…”
Section: Unobserved Heterogeneity In Choice Sets And/or Consideration (mentioning)
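Taken together, the two excerpts above can be summarized in two displays. The labels (3.14) and (3.16) belong to the surveyed chapter and their exact statements are not reproduced in the excerpts, so the forms below are a reconstruction from the standard statements of regularity and of the RAM revealed-preference result; π(a | S) denotes the probability of choosing alternative a from choice problem S. Regularity (Luce and Suppes, 1965):

\[
\pi(a \mid G) \;\le\; \pi(a \mid S) \qquad \text{whenever } a \in S \subseteq G .
\]

The violation that identifies preference: if, for some choice problem S containing both a and b,

\[
\pi(a \mid S) \;>\; \pi\big(a \mid S \setminus \{b\}\big),
\]

then a is revealed preferred to b. Under Monotonic Attention, if b were preferred to a, removing b could only weakly raise the probability that a is chosen, so the observed drop rules that ranking out.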
Econometrics has traditionally revolved around point identification. Much effort has been devoted to finding the weakest set of assumptions that, together with the available data, deliver point identification of population parameters, be they finite or infinite dimensional. And point identification has been viewed as a necessary prerequisite for meaningful statistical inference. The research program on partial identification began to slowly shift this focus in the early 1990s, gaining momentum over time and developing into a widely researched area of econometrics. Partial identification has forcefully established that much can be learned from the available data and from assumptions imposed because of their credibility rather than their ability to yield point identification. Within this paradigm, one obtains a set of values for the parameters of interest which are observationally equivalent given the available data and maintained assumptions. I refer to this set as the parameters' sharp identification region. Econometrics with partial identification is concerned with: (1) obtaining a tractable characterization of the parameters' sharp identification region; (2) providing methods to estimate it; (3) conducting tests of hypotheses and making confidence statements about the partially identified parameters. Each of these goals poses challenges that differ from those faced in econometrics with point identification. This chapter discusses these challenges and some of their solutions. It reviews advances in partial identification analysis both as applied to learning (functionals of) probability distributions that are well-defined in the absence of models, and as applied to learning parameters that are well-defined only in the context of particular models. The chapter highlights a simple organizing principle: the source of the identification problem can often be traced to a collection of random variables that are consistent with the available data and maintained assumptions. This collection may be part of the observed data or be a model implication. In either case, it can be formalized as a random set. Random set theory is then used as a mathematical framework to unify a number of special results and produce a general methodology to conduct econometrics with partial identification.
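As a minimal numerical illustration of what a sharp identification region looks like, the sketch below computes the classic worst-case bounds on a population mean when some outcomes are missing. It is not an example taken from the abstract above, and all names and numbers are hypothetical.

import numpy as np

def mean_bounds(y_observed, n_missing, y_lo, y_hi):
    """Worst-case (sharp) bounds on E[Y] when some outcomes are missing.

    y_observed: array of observed outcomes
    n_missing:  number of units whose outcome is missing
    y_lo, y_hi: logical bounds the outcome is known to respect
    """
    n_obs = len(y_observed)
    p_obs = n_obs / (n_obs + n_missing)         # share of observed outcomes
    m_obs = float(np.mean(y_observed))          # mean among the observed
    lower = p_obs * m_obs + (1 - p_obs) * y_lo  # missing outcomes pushed to y_lo
    upper = p_obs * m_obs + (1 - p_obs) * y_hi  # missing outcomes pushed to y_hi
    return lower, upper

# Example: 80 observed outcomes in [0, 1], 20 missing.
rng = np.random.default_rng(0)
print(mean_bounds(rng.uniform(0, 1, size=80), n_missing=20, y_lo=0.0, y_hi=1.0))

Every point in the resulting interval is attainable by some distribution of the missing outcomes consistent with the data, which is what makes the interval sharp rather than merely valid.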
“…⁵ Ĩ_P is well defined since P is a partition of a strict rational preference P. ⁶ If the decision-maker could complete all comparisons, her choice would coincide with deterministic rational choice. Her (possible) inability to do so is captured by a function π: …”
Section: Gradual Pairwise Comparison (mentioning)
confidence: 99%
“…|B| denotes the number of elements in the set B.⁶ For any i < I, by definition, |M^P_i(A)| ≥ |M^P_{i+1}(A)|.…”
Guided by evidence from eye-tracking studies of choice, pairwise comparison is assumed to be the building block of the decision-making procedure. A decision-maker with a rational preference may nevertheless consider the constituent pairwise comparisons gradually, easier comparisons preceding difficult ones. Facing a choice problem, she may be unable to complete all relevant comparisons and choose with equal odds from alternatives not found inferior. Stochastic choice data consistent with such behaviour is characterized and used to infer the underlying preference relation and the order of pairwise comparisons. The choice procedure offers a novel rationale for behavioural phenomena such as the similarity effect and violations of stochastic transitivity and regularity.
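A small sketch of the choice procedure described above, under assumptions the abstract does not spell out (how the comparison budget is encoded, and that only comparisons between available alternatives count toward it); the function and variable names are hypothetical, not the paper's formal model:

def choice_probabilities(menu, preference, comparison_order, budget):
    """Choice probabilities under gradual pairwise comparison (illustrative sketch).

    menu:             alternatives available in the choice problem
    preference:       all alternatives ranked best-first (strict rational preference)
    comparison_order: unordered pairs, easiest comparison first
    budget:           number of comparisons the decision-maker manages to complete
    """
    rank = {x: i for i, x in enumerate(preference)}     # lower index = better
    not_inferior = set(menu)
    completed = 0
    for a, b in comparison_order:
        if completed == budget:
            break
        if a in menu and b in menu:                     # only comparisons inside the menu count
            not_inferior.discard(a if rank[a] > rank[b] else b)
            completed += 1
    # choose with equal odds from the alternatives not found inferior
    p = 1.0 / len(not_inferior)
    return {x: (p if x in not_inferior else 0.0) for x in menu}

# Example: x is preferred to y is preferred to z, but only the easiest comparison
# (x vs y) is completed: y is found inferior, so x and z are each chosen with probability 1/2.
print(choice_probabilities(["x", "y", "z"], ["x", "y", "z"],
                           [("x", "y"), ("y", "z"), ("x", "z")], budget=1))

With a budget large enough to complete every comparison, only the preference-maximal alternative survives, and choice collapses to deterministic rational choice, matching footnote 6 quoted above.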
“…For instance, see Masatlioglu, Nakajima, and Ozbay (2012), Manzini and Mariotti (2014), Aguiar, Boccardi, and Dean (2016), Cattaneo, Ma, Masatlioglu, and Suleymanov (2017).⁶ Also see Weibull, Mattsson, and Voorneveld (2007).…”
We design a choice experiment where the objects are valued according to only a single attribute with a continuous measure and we can observe the true preferences of subjects. However, subjects have an imperfect perception of their own preferences. Subjects are given a choice set involving several lines of various lengths and are told to select one of them. They strive to select the longest line because they are paid an amount that is increasing in the length of their selection. Subjects also make their choices while they are required to remember either a 6-digit number (high cognitive load) or a 1-digit number (low cognitive load). We find that subjects in the high load treatment make inferior line selections and perform worse searches. When we restrict attention to the set of viewed lines, we find evidence that subjects in the high load treatment make worse choices than subjects in the low load treatment. Therefore the low quality searches do not fully explain the low quality choices. Our results suggest that cognition affects choice, even in our idealized choice setting. We also find evidence of choice overload even when the choice set is small and the objects are simple. Further, our experimental design permits a multinomial discrete choice analysis on choice among single-attribute objects with an objective value. The results of our analysis suggest that the errors in our data are better described as having a Gumbel distribution rather than a normal distribution. Finally, we observe the effects of limited cognition, consistent with memory decay and attention.
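The Gumbel-versus-normal comparison at the end refers to the error term in a random utility model: i.i.d. Gumbel (type-I extreme value) errors yield choice probabilities in closed multinomial-logit form, whereas normal errors yield a probit model whose probabilities generally require numerical integration or simulation. The sketch below shows only the logit formula; the utility specification (proportional to line length) is a hypothetical illustration, not the paper's estimated model.

import numpy as np

def logit_choice_probs(utilities):
    """Multinomial-logit choice probabilities implied by i.i.d. Gumbel errors."""
    u = np.asarray(utilities, dtype=float)
    expu = np.exp(u - u.max())      # subtract the max for numerical stability
    return expu / expu.sum()

# Example: three lines, with deterministic utility assumed proportional to length.
lengths = np.array([3.0, 5.0, 8.0])
print(logit_choice_probs(0.5 * lengths))   # longer lines are chosen more often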