2019
DOI: 10.2139/ssrn.3419020
Learning Under Uncertainty with Multiple Priors: Experimental Investigation

Cited by 3 publications (2 citation statements)
References 0 publications
“…2 Stochastic choice rules are also an econometric necessity when using likelihood-based techniques: if a subject makes just one decision that does not maximize her objective function, then without a probabilistic choice rule, the likelihood function (before taking logs) is zero everywhere, and hence cannot be maximized. Subjects are also assumed to behave probabilistically (reason 2), also a departure from maximizing U*_i(L), which means that all elements in the choice set, not just the ones that satisfy (1) or (2), are chosen with positive probability. Let ρ_i(L, L) be the probability (mass or density, depending on whether L is discrete or continuous, respectively) that i chooses element L from their choice set.…”
Section: A Classification Of Behavior
confidence: 99%
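The zero-likelihood point in the quotation above can be illustrated with a small numerical sketch. The utilities, choices, and logit sensitivity below are invented for illustration, not taken from the cited paper: under deterministic utility maximization a single non-maximizing choice drives the product likelihood to zero, while a logistic choice rule keeps it strictly positive.

```python
import math

# Hypothetical illustration (utilities and choices invented for this sketch):
# a subject faces lotteries A and B with U(A) > U(B), chooses A nine times,
# and chooses B once.
U = {"A": 1.0, "B": 0.6}
choices = ["A"] * 9 + ["B"]

def prob_deterministic(choice):
    # Deterministic utility maximization: the best option has probability 1,
    # everything else probability 0.
    best = max(U, key=U.get)
    return 1.0 if choice == best else 0.0

def prob_logistic(choice, lam=3.0):
    # Logit choice rule: every option gets positive choice probability.
    denom = sum(math.exp(lam * u) for u in U.values())
    return math.exp(lam * U[choice]) / denom

lik_det = math.prod(prob_deterministic(c) for c in choices)
lik_log = math.prod(prob_logistic(c) for c in choices)

print(lik_det)  # 0.0: the single non-maximizing choice zeroes the likelihood
print(lik_log)  # strictly positive, so the (log-)likelihood can be maximized
```

The same logic explains why likelihood-based estimation of these models always pairs a decision rule with a probabilistic error structure.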
“…It will affect estimates of the variance of decision errors (i.e., they will be inflated by 150^(−2r_i)). 10 The lower bound of γ_i > 0.2791 ensures that the probability weighting function is always increasing in p. 11 See Bland and Rosokha [12] for an exception.…”
Section: Data And Econometric Models
confidence: 99%
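The monotonicity bound quoted above can be checked numerically. The sketch below assumes the one-parameter Tversky–Kahneman (1992) weighting form w(p) = p^γ / (p^γ + (1−p)^γ)^(1/γ), for which a bound near γ ≈ 0.279 is the known monotonicity threshold; the excerpt does not spell out the functional form, so this is an assumption.

```python
# Assumed functional form (not stated in the excerpt): the one-parameter
# Tversky-Kahneman probability weighting function
#   w(p) = p^g / (p^g + (1 - p)^g)^(1 / g)

def w(p, g):
    return p**g / (p**g + (1 - p) ** g) ** (1 / g)

def is_increasing(g, n=2000):
    # Check monotonicity of w on a grid over the open interval (0, 1).
    ps = [i / n for i in range(1, n)]
    vals = [w(p, g) for p in ps]
    return all(a < b for a, b in zip(vals, vals[1:]))

print(is_increasing(0.5))  # True: above the bound, w is strictly increasing
print(is_increasing(0.2))  # False: below ~0.279, w decreases on an interior region
```

For example, at γ = 0.2 the function dips between roughly p = 0.1 and p = 0.3, which is why estimation imposes the lower bound on γ_i.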