Many two-sided matching markets, from labor markets to school choice programs, use a clearinghouse based on the applicant-proposing deferred acceptance algorithm, which is well known to be strategy-proof for the applicants. Nonetheless, a growing body of empirical evidence reveals that applicants misrepresent their preferences when this mechanism is used. This paper shows that no mechanism that implements a stable matching is obviously strategy-proof for any side of the market, an incentive property stronger than strategy-proofness that was introduced by Li (2017). A stable mechanism that is obviously strategy-proof for applicants is introduced for the case in which agents on the other side have acyclical preferences.
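As context for the impossibility result, the applicant-proposing deferred acceptance algorithm itself can be sketched in a few lines. The data layout and names below are illustrative, not taken from any specific clearinghouse; employers here have capacity one.

```python
# Applicant-proposing deferred acceptance (Gale-Shapley), a minimal sketch.
def deferred_acceptance(applicant_prefs, employer_prefs):
    """applicant_prefs[a] = list of employers, best first.
    employer_prefs[e] = list of acceptable applicants, best first."""
    rank = {e: {a: i for i, a in enumerate(prefs)}
            for e, prefs in employer_prefs.items()}
    match = {}                          # employer -> tentatively held applicant
    next_choice = {a: 0 for a in applicant_prefs}
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                    # a has exhausted their list, stays unmatched
        e = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held = match.get(e)
        if a not in rank[e]:
            free.append(a)              # e finds a unacceptable
        elif held is None or rank[e][a] < rank[e][held]:
            match[e] = a                # e tentatively accepts a
            if held is not None:
                free.append(held)       # previously held applicant is rejected
        else:
            free.append(a)              # a is rejected, proposes again later
    return match

m = deferred_acceptance(
    {"a1": ["e1", "e2"], "a2": ["e1", "e2"]},
    {"e1": ["a2", "a1"], "e2": ["a1", "a2"]})
# a1 proposes to e1, is displaced by a2, and ends up at e2.
```

Truth-telling is a dominant strategy for the proposing side here, which is exactly the strategy-proofness the abstract says fails to strengthen to obvious strategy-proofness.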
The Gibbard-Satterthwaite Impossibility Theorem (Gibbard, 1973; Satterthwaite, 1975) holds that dictatorship is the only Pareto optimal and strategyproof social choice function on the full domain of preferences. Much of the work in mechanism design aims at getting around this impossibility theorem. Three grand success stories stand out: on the domains of single-peaked preferences, of house matching, and of quasilinear preferences, there are appealing Pareto optimal and strategyproof social choice functions. We investigate whether these success stories are robust to strengthening strategyproofness to obvious strategyproofness, recently introduced by Li (2015). A social choice function is obviously strategyproof (OSP) implementable if even cognitively limited agents can recognize their strategies as weakly dominant. For single-peaked preferences, we characterize the class of OSP-implementable and unanimous social choice functions as dictatorships with safeguards against extremism: mechanisms (which turn out to also be Pareto optimal) in which the dictator can choose the outcome, but other agents may prevent the dictator from choosing an outcome that is too extreme. Median voting is consequently not OSP-implementable; indeed, the only OSP-implementable quantile rules choose either the minimal or the maximal ideal point. For house matching, we characterize the class of OSP-implementable and Pareto optimal matching rules as sequential barter with lurkers, a significant generalization over bossy variants of bipolar serially dictatorial rules. While Li (2015) shows that second-price auctions are OSP-implementable when only one good is sold, we show that this positive result does not extend to the case of multiple goods. Even when all agents' preferences over goods are quasilinear and additive, no welfare-maximizing auction in which losers pay nothing is OSP-implementable when more than one good is sold.
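Li's single-good positive result is usually illustrated with an ascending clock auction: while the price is below a bidder's value, staying in is obviously no worse than quitting, which a cognitively limited agent can see without contingent reasoning. A minimal sketch, with illustrative values, price grid, and tie-breaking of our own choosing:

```python
# Ascending clock auction for one good, the format that is obviously
# strategy-proof in Li's sense. The increment and tie-breaking are
# illustrative assumptions, not part of any formal result.
def ascending_clock(values, increment=1):
    """values: dict bidder -> private value. Bidders drop out once the
    clock price exceeds their value; the last bidder standing wins and
    pays the price at which the auction ends."""
    price, active = 0, set(values)
    while len(active) > 1:
        price += increment
        drop = {b for b in active if values[b] < price}
        if drop == active:              # simultaneous exit: pick any remaining
            return drop.pop(), price - increment
        active -= drop
    winner = active.pop()
    return winner, price

winner, price = ascending_clock({"alice": 7, "bob": 4, "carol": 9})
# carol wins at the price that drives alice out.
```

The dominant strategy (stay while price is below value) is recognizable at each single decision point, which is the behavioral content of OSP.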
Our analysis makes use of a gradual revelation principle, an analog of the (direct) revelation principle for OSP mechanisms that we present and prove.
We consider a monopolist selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions F_1, F_2, ..., F_n that possibly have unbounded support. It is well known that, unlike in the single-item case, the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions with a bounded number of menu entries can extract a constant fraction of the optimal revenue. Nonetheless, the question of whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size remained open. In this paper, we give an affirmative answer to this open question, showing that for every n and every ε > 0, there exists a complexity bound C = C(n, ε) such that auctions of menu size at most C suffice for obtaining a (1 − ε) fraction of the optimal revenue from any F_1, ..., F_n. We prove upper and lower bounds on the revenue-approximation complexity C(n, ε), as well as on the deterministic communication complexity required to run an auction that achieves such an approximation.
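The object whose size C(n, ε) bounds is a menu: a finite list of (allocation probabilities, price) entries from which an additive buyer picks a utility-maximizing entry. A minimal sketch of that interaction, with a purely illustrative two-item menu:

```python
# A finite menu for two items: each entry is (per-item allocation
# probabilities, price). The buyer selects the entry maximizing expected
# utility, with an outside option of zero. Menu and values are illustrative.
def buy(menu, values):
    def utility(entry):
        probs, price = entry
        return sum(p * v for p, v in zip(probs, values)) - price
    best = max(menu, key=utility)
    return best if utility(best) >= 0 else None   # None = walk away

menu = [((1.0, 0.0), 4.0),      # item 1 alone
        ((0.0, 1.0), 4.0),      # item 2 alone
        ((1.0, 1.0), 6.0)]      # the bundle, at a discount
choice = buy(menu, (5.0, 3.0))
# This buyer takes the bundle: utility 8 - 6 = 2 beats 1 and -1.
```

The optimal mechanism may need a continuum of such entries; the paper's result is that a finite menu of size C(n, ε) already recovers a (1 − ε) fraction of its revenue.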
The unbeatability of a consensus protocol, introduced by Halpern, Moses and Waarts in [14], is a stronger notion of optimality than the accepted notion of early stopping protocols. Using a novel knowledge-based analysis, this paper derives the first practical unbeatable consensus protocols in the literature, for the standard synchronous message-passing model with crash failures. These protocols strictly dominate the best known protocols for uniform and for nonuniform consensus, in some cases beating them by a large margin. The analysis provides a new understanding of the logical structure of consensus, and of the distinction between uniform and nonuniform consensus. Finally, the first (early stopping and) unbeatable protocol that treats decision values "fairly" is presented. All of these protocols have very concise descriptions, and are shown to be efficiently implementable.
4. Early stopping protocols for consensus are traditionally one-sided, deciding on a predetermined value (say, 0) if possible. An unbeatable (and early stopping) majority consensus protocol, Opt Maj, is presented that prefers the majority value.
5. We identify the notion of a hidden path as crucial to decision in the consensus task. If a process identifies that no hidden path exists, then it can decide. In the fastest early-stopping protocols, a process decides after the first round in which it does not detect a new failure.
By deciding based on the nonexistence of a hidden path, our unbeatable protocols can stop up to t − 3 rounds faster than the best early stopping protocols in the literature. We now sketch the intuition behind our unbeatable consensus protocols. In the standard version of consensus, every process i starts with an initial value v_i ∈ {0, 1}, and the following properties must hold in every run r of (nonuniform) consensus:
Decision: Every correct process must decide on some value.
Validity: If all initial values are v, then the correct processes decide v.
Agreement: All correct processes decide on the same value.
The connection between knowledge and distributed computing was proposed in [13] and has been used in the analysis of a variety of problems, including consensus (see [9] for more details and references). In this paper, we employ simpler techniques to perform a more direct knowledge-based analysis. Our approach is based on a simple principle recently formulated by Moses in [17], called the knowledge of preconditions principle (KoP), which captures an essential connection between knowledge and action in distributed and multi-agent systems. Roughly speaking, the KoP principle says that if C is a necessary condition for an action α to be performed by process i, then K_i(C), i.e., i knowing C, is a necessary condition for i performing α. For example, it is not enough for a client to have positive credit in order to receive cash from an ATM; the ATM must know that the client has positive credit. Problem specifications typically state or imply a variety of necessary conditions...
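The classical early-stopping rule that the hidden-path analysis improves upon can be illustrated with a toy round-by-round simulation. Everything below is a simplification of our own (crash semantics, message format, deciding on the minimum value seen), not the paper's protocols:

```python
# Toy synchronous crash-failure consensus with the classical early-stopping
# rule: a process decides after the first round in which it detects no new
# failure. All modeling choices here are illustrative simplifications.
def early_stopping_consensus(values, crashed_after):
    """values[i]: initial value of process i.
    crashed_after[i]: last round in which i sends (None = never crashes)."""
    n = len(values)
    known = [{v} for v in values]            # values each process has seen
    heard = [set(range(n)) for _ in range(n)]
    decided = {}
    for rnd in range(1, n + 1):
        # snapshot of what each non-crashed process broadcasts this round
        msgs = {i: set(known[i]) for i in range(n)
                if crashed_after[i] is None or rnd <= crashed_after[i]}
        for i in range(n):
            if i in decided or i not in msgs:
                continue
            now_heard = set(msgs)
            for j in msgs:
                known[i] |= msgs[j]
            if heard[i] == now_heard:        # no new failure detected
                decided[i] = min(known[i])
            heard[i] = now_heard
        if all(i in decided or i not in msgs for i in range(n)):
            break
    return decided

# Process 0 crashes before round 1; the survivors need a second clean
# round to stop, and both decide the same value.
decided = early_stopping_consensus([1, 0, 1], [0, None, None])
```

The paper's point is that waiting for a failure-free round is often unnecessarily conservative: once a process knows no hidden path exists, it can decide earlier.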
We present a polynomial-time algorithm that, given samples from the unknown valuation distribution of each bidder, learns an auction that approximately maximizes the auctioneer's revenue in a variety of single-parameter auction environments, including matroid environments, position environments, and the public project environment. The valuation distributions may be arbitrary bounded distributions (in particular, they may be irregular and may differ across bidders), thus resolving a problem left open by previous papers. The analysis uses basic tools, is performed entirely in value-space, and simplifies the analysis of previously known results for special cases. Furthermore, the analysis extends to certain single-parameter auction environments where precise revenue maximization is known to be intractable, such as knapsack environments.
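The flavor of learning an auction from samples in value-space can be shown in its simplest instance: choosing a posted price for one bidder by empirical revenue maximization. The evenly spaced values below are a deterministic stand-in for i.i.d. samples; this is an illustration of the general idea, not the paper's algorithm:

```python
# Empirical revenue maximization for a single posted price: among candidate
# prices (which can be restricted to the sampled values themselves), pick
# the one maximizing price * (empirical probability of sale).
def empirical_best_price(samples):
    samples = sorted(samples)
    n = len(samples)
    # price = samples[k] sells to the n - k samples at or above it
    return max((samples[k] * (n - k) / n, samples[k]) for k in range(n))[1]

# Deterministic stand-in for 1000 samples from U(0, 1); the true monopoly
# price for U(0, 1) is 0.5, since revenue p * (1 - p) peaks there.
samples = [(i + 1) / 1000 for i in range(1000)]
price = empirical_best_price(samples)
```

Restricting attention to sampled values loses nothing, because empirical revenue between two consecutive samples is maximized at the higher one.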
We consider the problem of welfare (and gains-from-trade) maximization in two-sided markets using simple mechanisms that are prior-independent. The seminal impossibility result of Myerson and Satterthwaite [1983] shows that even for bilateral trade, there is no feasible (individually rational, truthful, and budget balanced) mechanism with welfare as high as that of the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and even worse, it is known to be extremely complex, eluding a precise description.
In this paper we present Bulow-Klemperer-style results to circumvent these hurdles in double-auction market settings. We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, which is feasible and simple (in particular, it is deterministic, truthful, prior-independent, and anonymous). First, in the setting in which the values of the buyers and of the sellers are sampled independently and identically from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal-yet-infeasible VCG mechanism in the original market.
We then move to a more general setting in which the values of the buyers are sampled from one distribution and those of the sellers from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers' distribution. We present both upper and lower bounds on the number of buyers that, when added, guarantees that BTR in the augmented market has welfare at least as high as the optimal in the original market. Our lower bounds extend to a large class of mechanisms, and all of our positive and negative results extend to adding sellers instead of buyers.
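The trade-reduction idea underlying McAfee's mechanism and its BTR variant can be sketched in a few lines: drop the least efficient feasible trade and use its bid and ask as prices, so no trading agent sets their own price. This is a sketch of the basic trade-reduction template; the paper's exact BTR pricing rule is more refined:

```python
# Basic trade-reduction double auction: sort buyers by descending bid and
# sellers by ascending ask, find the number k of efficient trades, and let
# only the first k - 1 pairs trade at prices taken from the k-th pair.
# Truthful and individually rational; the mechanism keeps a small surplus.
def trade_reduction(buyer_bids, seller_asks):
    buyers = sorted(buyer_bids, reverse=True)   # highest value first
    sellers = sorted(seller_asks)               # lowest cost first
    k = 0
    while k < min(len(buyers), len(sellers)) and buyers[k] >= sellers[k]:
        k += 1                                  # k efficient trades exist
    if k == 0:
        return []                               # no trade is beneficial
    # The first k - 1 pairs trade; buyers pay the k-th bid and sellers
    # receive the k-th ask, so buyers pay at least what sellers receive.
    return [(buyers[i], sellers[i], buyers[k - 1], sellers[k - 1])
            for i in range(k - 1)]

trades = trade_reduction([9, 7, 3], [2, 5, 8])
# Pairs (9, 2) and (7, 5) are efficient; the (7, 5) trade is reduced, so
# the value-9 buyer pays 7 and the cost-2 seller receives 5.
```

The reduced trade is exactly the welfare loss that the Bulow-Klemperer-style results offset by adding buyers sampled from the prior.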
In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization (more precisely, for gains-from-trade approximation) in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.
* Microsoft Research
[9] Moreover, the result fails even for some pairs of regular distributions, a condition that is used in the proof of the BK result.
[10] In the absence of any prior, it is natural to treat all agents the same, and thus anonymity is a natural assumption; one may even claim that it is in a sense a prerequisite for simplicity.
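Pricing at a sample, in its simplest bilateral-trade form, posts one fresh sample from a distribution as a take-it-or-leave-it price and trades whenever both sides weakly gain. The uniform distributions and the Monte Carlo check below are illustrative assumptions, not the paper's setting:

```python
# Pricing at a sample for bilateral trade: trade happens iff the buyer's
# value and the seller's cost straddle a price drawn as one extra sample.
# Distributions (all U(0, 1)) are illustrative.
import random

def trade_at_sample_price(buyer_value, seller_cost, sampled_price):
    """Trade iff both sides weakly gain at the posted price."""
    return buyer_value >= sampled_price >= seller_cost

random.seed(1)
gains = 0.0
for _ in range(10000):
    b, s, p = random.random(), random.random(), random.random()
    if trade_at_sample_price(b, s, p):
        gains += b - s
# The realized gains-from-trade are a constant fraction of the first-best
# (trading whenever b >= s); in this uniform example the fraction is about
# one half in expectation.
```

The mechanism is trivially truthful and budget balanced, since neither trader influences the posted price.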