We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [Li and Yao 2013; Babaioff et al. 2014]. In the second part of the paper, we develop a connection between approximately optimal simple mechanisms and approximate revenue monotonicity with respect to buyers' valuations. Revenue non-monotonicity is the phenomenon that sometimes strictly increasing buyers' values for every set can strictly decrease the revenue of the optimal mechanism [Hart and Reny 2012]. Using our main result, we derive a bound on how bad this degradation can be (and dub such a bound a proof of approximate revenue monotonicity); we further show that better bounds on approximate monotonicity imply a better analysis of our simple mechanisms.
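To make the two simple mechanisms concrete, the following minimal Monte Carlo sketch (not from the paper) compares the revenue of pricing each item separately at its best posted price against pricing only the grand bundle. For simplicity it assumes an additive buyer with independent uniform item values; all function names and parameters are illustrative assumptions.

```python
import random

def estimate_revenue(price, value_samples):
    """Average revenue from posting a single price against sampled buyer values."""
    return sum(price for v in value_samples if v >= price) / len(value_samples)

def best_posted_price_revenue(value_samples, price_grid):
    """Best revenue achievable by a single posted price over a finite price grid."""
    return max(estimate_revenue(p, value_samples) for p in price_grid)

# Illustrative setup (assumption): an additive buyer, each item value Uniform[0, 1], independent.
random.seed(0)
n_items, n_samples = 5, 10000
item_samples = [[random.random() for _ in range(n_samples)] for _ in range(n_items)]
price_grid = [i / 100 for i in range(1, 101)]

# SREV: sell each item separately at its own revenue-maximizing posted price.
srev = sum(best_posted_price_revenue(samples, price_grid) for samples in item_samples)

# BREV: sell only the grand bundle; the buyer's bundle value is the sum of item values.
bundle_samples = [sum(item_samples[j][t] for j in range(n_items)) for t in range(n_samples)]
bundle_grid = [i * n_items / 100 for i in range(1, 101)]
brev = best_posted_price_revenue(bundle_samples, bundle_grid)

print(f"SREV ~ {srev:.3f}, BREV ~ {brev:.3f}, better of the two ~ {max(srev, brev):.3f}")
```

The paper's guarantee says the larger of these two revenues is within a constant factor of the optimal mechanism's revenue for the much broader class of subadditive valuations with independent items.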
We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment x ∈ {0, 1}^n to a CNF formula ϕ is shared between two parties, where Alice knows x_1, …, x_{n/2}, Bob knows x_{n/2+1}, …, x_n, and both parties know ϕ. The goal is to have Alice and Bob jointly write a PCP that x satisfies ϕ, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of x. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over {0, 1}-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly-tight. In particular, for the first two problems we obtain nearly-polynomial factors of 2^{(log n)^{1−o(1)}}; only (1 + o(1))-factor lower bounds (under SETH) were known before. [Footnote 1: SETH is a pessimistic version of P ≠ NP, stating that for every ε > 0 there is a k such that k-SAT cannot be solved in O((2 − ε)^n) time. Footnote 2: See the end of this section for a discussion of "bichromatic" vs. "monochromatic" closest pair problems. Footnote 3: In SODA'17, two entire sessions were dedicated to algorithms for similarity search.]
We prove conditional near-quadratic running time lower bounds for approximate Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance. Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false, for every δ > 0 there exists a constant ε > 0 such that computing a (1 + ε)-approximation to the Bichromatic Closest Pair requires Ω(n^{2−δ}) time. In particular, this implies a near-linear query-time lower bound for Approximate Nearest Neighbor search with polynomial preprocessing time. Our reduction uses the Distributed PCP framework of [ARW17], but obtains improved efficiency using Algebraic Geometry (AG) codes. Efficient PCPs from AG codes have been constructed in other settings before [BKK+16, BCG+17], but our construction is the first to yield new hardness results.
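For concreteness, here is a brute-force sketch (not from the paper) of the exact problem whose approximation hardness is shown, instantiated with Hamming distance over {0, 1}-vectors; the result says that, under SETH, even a (1 + ε)-approximation cannot beat this quadratic behavior by any polynomial factor.

```python
def hamming(u, v):
    """Hamming distance between two equal-length 0/1 vectors."""
    return sum(a != b for a, b in zip(u, v))

def bichromatic_closest_pair(A, B):
    """Exact closest red-blue pair by brute force: Theta(|A| * |B|) distance computations."""
    best = None
    for a in A:
        for b in B:
            d = hamming(a, b)
            if best is None or d < best[0]:
                best = (d, a, b)
    return best

# Tiny illustrative instance.
A = [(0, 0, 1, 1), (1, 1, 1, 0)]
B = [(0, 1, 1, 1), (1, 0, 0, 0)]
print(bichromatic_closest_pair(A, B))  # (1, (0, 0, 1, 1), (0, 1, 1, 1))
```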
We prove that there exists a constant ε > 0 such that, assuming the Exponential Time Hypothesis for PPAD, computing an ε-approximate Nash equilibrium in a two-player n × n game requires time n^{log^{1−o(1)} n}. Our proof relies on a variety of techniques from the study of probabilistically checkable proofs (PCP); this is the first time that such ideas are used for a reduction between problems inside PPAD. En route, we also prove new hardness results for computing Nash equilibria in games with many players. In particular, we show that computing an ε-approximate Nash equilibrium in a game with n players requires 2^{Ω(n)} oracle queries to the payoff tensors. This resolves an open problem posed by Hart and Nisan [43], Babichenko [13], and Chen et al. [28]. In fact, our results for n-player games are stronger: they hold with respect to the (ε, δ)-WeakNash relaxation recently introduced by Babichenko et al. [15].
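As a hedged illustration of the object whose hardness is established (a verification routine, not the paper's reduction), the sketch below checks whether a pair of mixed strategies is an ε-approximate Nash equilibrium of a bimatrix game, i.e., whether each player's best pure response improves on their current expected payoff by at most ε.

```python
def expected_payoff(M, x, y):
    """Expected payoff x^T M y for mixed strategies x (row player) and y (column player)."""
    return sum(x[i] * M[i][j] * y[j] for i in range(len(x)) for j in range(len(y)))

def is_eps_nash(R, C, x, y, eps):
    """True iff (x, y) is an eps-approximate Nash equilibrium of the bimatrix game (R, C)."""
    row_payoff = expected_payoff(R, x, y)
    col_payoff = expected_payoff(C, x, y)
    # It suffices to check pure deviations: mixed deviations are convex combinations of them.
    best_row = max(sum(R[i][j] * y[j] for j in range(len(y))) for i in range(len(R)))
    best_col = max(sum(C[i][j] * x[i] for i in range(len(x))) for j in range(len(C[0])))
    return best_row - row_payoff <= eps and best_col - col_payoff <= eps

# Matching pennies: the uniform profile is an exact (hence eps-approximate) equilibrium.
R = [[1, -1], [-1, 1]]
C = [[-1, 1], [1, -1]]
print(is_eps_nash(R, C, [0.5, 0.5], [0.5, 0.5], eps=0.1))  # True
```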
For a constant ε, we prove a poly(N) lower bound on the (randomized) communication complexity of ε-Nash equilibrium in two-player N × N games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of (ε, ε)-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least a (1 − ε)-fraction of the players are ε-best replying.
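The weak-equilibrium notion can be made concrete with a small, exponential-time verification sketch (illustrative only, not the paper's construction): given an n-player binary-action game and a product profile of mixed actions, it computes the fraction of players who are ε-best replying; an (ε, δ)-weak approximate Nash equilibrium requires this fraction to be at least 1 − δ.

```python
from itertools import product

def eps_best_reply_fraction(utilities, p, eps):
    """Fraction of players who are eps-best replying under the product profile p.

    utilities[i](a) is player i's payoff at the pure profile a in {0,1}^n, and
    p[i] is the probability that player i plays action 1.
    Exponential-time check, intended only for tiny illustrative games.
    """
    n = len(p)

    def expected(i, fixed_action=None):
        total = 0.0
        for a in product((0, 1), repeat=n):
            if fixed_action is not None and a[i] != fixed_action:
                continue
            prob = 1.0
            for j in range(n):
                if j == i and fixed_action is not None:
                    continue  # player i deviates deterministically
                prob *= p[j] if a[j] == 1 else 1 - p[j]
            total += prob * utilities[i](a)
        return total

    happy = sum(
        1
        for i in range(n)
        if max(expected(i, 0), expected(i, 1)) - expected(i) <= eps
    )
    return happy / n

# Illustrative game: each player's payoff equals their own action, so all-ones is an equilibrium.
n = 3
utilities = [(lambda a, i=i: a[i]) for i in range(n)]
print(eps_best_reply_fraction(utilities, [1.0, 1.0, 1.0], eps=0.05))  # 1.0
```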
We prove that finding an ε-approximate Nash equilibrium is PPAD-complete for constant ε and a particularly simple class of games: polymatrix, degree-3 graphical games, in which each player has only two actions. As corollaries, we also prove similar inapproximability results for Bayesian Nash equilibrium in a two-player incomplete information game with a constant number of actions, for relative ε-Well-Supported Nash Equilibrium in a two-player game, for market equilibrium in a non-monotone market, for the generalized circuit problem defined by Chen et al. [CDT09], and for approximate competitive equilibrium from equal incomes with indivisible goods.
We study generalizations of the "Prophet Inequality" and "Secretary Problem", where the algorithm is restricted to an arbitrary downward-closed set system. For {0, 1} values, we give O(log n)-competitive algorithms for both problems. This is close to the Ω(log n / log log n) lower bound due to Babaioff, Immorlica, and Kleinberg [3]. For general values, our results translate to O(log n · log r)-competitive algorithms, where r is the cardinality of the largest feasible set. This resolves (up to the O(log r · log log n) factors) an open question posed to us by Bobby Kleinberg [13].
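For background, here is a hedged sketch of the classical single-item prophet inequality (the unconstrained special case, not the paper's algorithm for downward-closed systems): accepting the first value that clears a threshold of half the expected maximum guarantees, in expectation, at least half of the prophet's value. The distributions below are purely illustrative.

```python
import random

def threshold_rule(values, threshold):
    """Accept the first value that meets the threshold; receive 0 if none does."""
    for v in values:
        if v >= threshold:
            return v
    return 0.0

# Illustrative distributions (assumption): n independent values, V_i ~ Uniform[0, i + 1].
random.seed(1)
n, trials = 5, 50000
def sample():
    return [random.uniform(0, i + 1) for i in range(n)]

# Estimate the prophet's value E[max V_i] by sampling, then set the threshold to half of it.
expected_max = sum(max(sample()) for _ in range(trials)) / trials
threshold = expected_max / 2

alg_value = sum(threshold_rule(sample(), threshold) for _ in range(trials)) / trials
print(f"ALG ~ {alg_value:.3f}, prophet ~ {expected_max:.3f}, ratio ~ {alg_value / expected_max:.2f}")
```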
In this paper we study the adaptivity of submodular maximization. Adaptivity quantifies the number of sequential rounds that an algorithm makes when function evaluations can be executed in parallel. Adaptivity is a fundamental concept that is heavily studied across a variety of areas in computer science, largely due to the need for parallelizing computation. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, it is well known that a simple greedy algorithm achieves a 1 − 1/e approximation [NWF78] and that this approximation is optimal for polynomial-time algorithms [NW78]. Somewhat surprisingly, despite extensive efforts on submodular optimization for large-scale datasets, until very recently there was no known algorithm that achieves a constant factor approximation for this problem whose adaptivity is sublinear in the size of the ground set n. Recent work by [BS18] describes an algorithm that obtains an approximation arbitrarily close to 1/3 in O(log n) adaptive rounds and shows that no algorithm can obtain a constant factor approximation in õ(log n) adaptive rounds. This approach achieves an exponential speedup in adaptivity (and parallel running time) at the expense of approximation quality. In this paper we describe a novel approach that yields an algorithm whose approximation is arbitrarily close to the optimal 1 − 1/e guarantee in O(log n) adaptive rounds. This algorithm therefore achieves an exponential speedup in parallel running time for submodular maximization at the expense of an arbitrarily small loss in approximation quality. This guarantee is optimal in both approximation and adaptivity, up to lower order terms.
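For reference, the classical greedy algorithm mentioned above can be sketched as follows, with set coverage as an illustrative monotone submodular objective; note that each of its k iterations is a separate adaptive round, which is exactly the sequential bottleneck the paper's O(log n)-round algorithm avoids.

```python
def greedy_max_coverage(sets, k):
    """Greedy for monotone submodular maximization under a cardinality constraint.

    Here f(S) = |union of the chosen sets| (coverage), a monotone submodular
    objective. Each of the k iterations is one adaptive round: the next marginal
    query depends on all previous answers, which is what limits parallelism.
    """
    chosen, covered = [], set()
    for _ in range(k):
        gains = [len(s - covered) for s in sets]
        best = max(range(len(sets)), key=lambda i: gains[i])
        if gains[best] == 0:
            break  # no remaining set adds value
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, k=2))  # ([0, 2], {1, 2, 3, 4, 5, 6})
```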