A sender persuades a receiver to accept a project by disclosing information about a payoff‐relevant quality. The receiver has private information about the quality, referred to as his type. We show that the sender‐optimal mechanism takes the form of nested intervals: each type accepts on an interval of qualities, and a more optimistic type's interval contains a less optimistic type's interval. This nested‐interval structure offers a simple algorithm for solving for the optimal disclosure and connects our problem to the monopoly screening problem. The mechanism remains optimal even if the sender can condition disclosure on the receiver's reported type.
I study a dynamic relationship in which a principal delegates experimentation to an agent. Experimentation is modeled as a one-armed bandit that yields successes following a Poisson process whose unknown intensity is either high or low. The agent has private information, his type being his prior belief that the intensity is high. The agent values successes more than the principal does, so he prefers more experimentation. The optimal mechanism is a cutoff rule in the belief space: the cutoff gives pessimistic types total freedom but curtails optimistic types' behavior. Pessimistic types overexperiment while the most optimistic ones underexperiment. This delegation rule is time consistent. (JEL D23, D82, D83, O30)
A fully committed sender seeks to sway a collective adoption decision by designing experiments. Voters have correlated payoff states and heterogeneous thresholds of doubt. We characterize the sender-optimal policy under unanimity rule for two persuasion modes. Under general persuasion, the evidence presented to each voter depends on all voters' states. The sender makes the most demanding voters indifferent between decisions, while more lenient voters strictly benefit from persuasion. Under individual persuasion, the evidence presented to each voter depends only on her own state. The sender designates one subgroup of rubber-stampers, another of fully informed voters, and a third of partially informed voters. The most demanding voters are strategically accorded high-quality information.

A tremendous share of decision-making in economic and political realms takes place within collective schemes. We explore a setting in which a sender seeks the unanimous approval of a group for a project he promotes. Group members care about different aspects of the project and may disagree on whether it should be implemented. They may also vary in the loss they incur if the project is of low quality in their respective aspects. The sender designs experiments to persuade the members to approve. When deciding as part of a group, individuals understand the informational and payoff interdependencies among their decisions.

Previous literature has focused mostly on the aggregation and acquisition of (costly) information from exogenous sources in collective decision-making. In contrast, our focus is on optimal persuasion of a heterogeneous group by a biased sender who is able to
We consider a platform that provides probabilistic forecasts to a customer using some algorithm. We introduce a concept of miscalibration, which measures the discrepancy between the forecast and the truth. We characterize the platform's optimal equilibrium when it incurs some cost for miscalibration, and we show how this equilibrium depends on the miscalibration cost. When the miscalibration cost is low, the platform uses more distant forecasts and the customer is less responsive to the platform's forecast; when the miscalibration cost is high, the platform can achieve its commitment payoff in an equilibrium, and the only extensive‐form rationalizable strategy of the platform is its strategy in the commitment solution. Our results show that the miscalibration cost is a proxy for the degree of the platform's commitment power and thus provide a microfoundation for the commitment solution.
Communication facilitates cooperation by ensuring that deviators are collectively punished. We explore how players might misuse communication to threaten one another, and we identify ways that organizations can deter misuse and restore cooperation. In our model, a principal plays trust games with a sequence of short-run agents who communicate with one another. An agent can shirk and then extort pay by threatening to report that the principal deviated. We show that these threats can completely undermine cooperation. Investigations of agents’ efforts, or dyadic relationships between the principal and each agent, can deter extortion and restore some cooperation. Investigations of the principal’s action, on the other hand, typically do not help. Our analysis suggests that collective punishments are vulnerable to misuse unless they are designed with an eye towards discouraging it.