We study fair allocation of indivisible goods to agents with unequal entitlements. Fair allocation has been the subject of many studies in both divisible and indivisible settings. Our emphasis is on the case where the goods are indivisible and agents have unequal entitlements. This problem is a generalization of the work by Procaccia and Wang [20], wherein the agents are assumed to be symmetric with respect to their entitlements. Although Procaccia and Wang show that an almost fair (constant-approximation) allocation exists in their setting, our main result is in sharp contrast to their observation. We show that, in some cases with n agents, no allocation can guarantee better than a 1/n approximation of a fair allocation when the entitlements are not necessarily equal. Furthermore, we devise a simple algorithm that ensures a 1/n approximation guarantee. Our second result is for a restricted version of the problem where the valuation of every agent for each good is bounded by the total value he wishes to receive in a fair allocation. Although this assumption might seem w.l.o.g., we show it enables us to find a 1/2-approximation fair allocation via a greedy algorithm. Finally, we run some experiments on real-world data and show that, in practice, a fair allocation is likely to exist. We also support our experiments by showing positive results for two stochastic variants of the problem, namely stochastic agents and stochastic items.
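The abstract does not spell the algorithms out, so the sketch below is only a toy illustration of the quantities involved: each agent's proportional share scaled by their entitlement, and a naive greedy assignment that chases the largest remaining deficit. The helper names (`weighted_shares`, `greedy_allocate`) are hypothetical, and this is not the authors' 1/n- or 1/2-approximation algorithm.

```python
# Illustrative sketch only: entitlement-weighted fair shares and a naive
# greedy allocation of indivisible goods. This is NOT the algorithm from
# the paper; it merely makes the objects in the abstract concrete.

def weighted_shares(values, entitlements):
    """values[i][j] = value of agent i for item j; entitlements sum to 1."""
    return [e * sum(v) for v, e in zip(values, entitlements)]

def greedy_allocate(values, entitlements):
    n, m = len(values), len(values[0])
    shares = weighted_shares(values, entitlements)
    received = [0.0] * n
    bundles = [[] for _ in range(n)]
    # Give each item to the agent with the largest relative deficit so far.
    for j in range(m):
        i = max(range(n),
                key=lambda a: (shares[a] - received[a]) / max(shares[a], 1e-9))
        bundles[i].append(j)
        received[i] += values[i][j]
    ratios = [received[i] / max(shares[i], 1e-9) for i in range(n)]
    return bundles, ratios

if __name__ == "__main__":
    values = [[4, 1, 2, 3], [2, 2, 2, 2]]   # two agents, four goods
    entitlements = [0.75, 0.25]             # unequal entitlements
    print(greedy_allocate(values, entitlements))
```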
We study the problem of computing Nash equilibria of zero-sum games. Many natural zero-sum games have exponentially many strategies, but highly structured payoffs. For example, in the well-studied Colonel Blotto game (introduced by Borel in 1921), players must divide a pool of troops among a set of battlefields with the goal of winning (i.e., having more troops in) a majority. The Colonel Blotto game is commonly used for analyzing a wide range of applications, from the U.S. presidential election, to innovative technology competitions, to advertisement, to sports. However, because of the size of the strategy space, standard methods for computing equilibria of zero-sum games fail to be computationally feasible. Indeed, despite its importance, only a few solutions for special variants of the problem are known.

In this paper we show how to compute equilibria of Colonel Blotto games. Moreover, our approach takes the form of a general reduction: to find a Nash equilibrium of a zero-sum game, it suffices to design a separation oracle for the strategy polytope of any bilinear game that is payoff-equivalent. We then apply this technique to obtain the first polytime algorithms for a variety of games. In addition to Colonel Blotto, we also show how to compute equilibria in an infinite-strategy variant called the General Lotto game; this involves showing how to prune the strategy space to a finite subset before applying our reduction. We also consider the class of dueling games, first introduced by Immorlica et al. (2011). We show that our approach provably extends the class of dueling games for which equilibria can be computed: we introduce a new dueling game, the matching duel, on which prior methods fail to be computationally feasible but upon which our reduction can be applied.

Given the difficulty of solving for equilibrium explicitly, it is natural to revisit the equilibrium computation problem for Colonel Blotto games.

An Approach: Bilinear Games. How should one approach equilibrium computation in such a game? The exponential size of the strategy set is not an impassable barrier; in certain cases, games with exponentially many strategies have an underlying structure that can be used to approach the equilibrium computation problem. For example, Koller, Megiddo and von Stengel [29] show how to compute equilibria for zero-sum extensive-form games with perfect recall. Immorlica et al. [27] give an approach for solving algorithmically-motivated "dueling games" with uncertainty. Letchford and Conitzer [35] compute equilibria for a variety of graphical security games. Each of these cases involves games with exponentially many strategies. In each case, a similar approach is employed: reformulating the original game as a payoff-equivalent bilinear game. In a bilinear game, the space of strategies forms a polytope in R^n, and payoffs are specified by a matrix M: if the players play strategies x and y respectively, then the payoff to player 1 is x^T M y. It has been observed that such bilinear games can be solved efficiently when the stra...
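For intuition, the direct way to solve a zero-sum (or bilinear) game whose payoff matrix fits in memory is the textbook linear program; it is exactly this approach that becomes infeasible at Blotto scale. The sketch below solves a tiny matrix game with NumPy and SciPy (assumed available); it is a baseline for comparison only and does not implement the separation-oracle reduction described above.

```python
# Baseline only: compute a maximin mixed strategy for player 1 in a small
# zero-sum matrix game via linear programming. The paper's contribution is
# handling games whose strategy space is far too large for this direct LP.
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(M):
    """M[i, j] = payoff to player 1 when row i is played against column j."""
    n_rows, n_cols = M.shape
    # Variables: x_1..x_n (row probabilities) and v (game value); maximize v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -v
    # For every column j: v - sum_i x_i * M[i, j] <= 0
    A_ub = np.hstack([-M.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    A_eq = np.append(np.ones(n_rows), 0.0).reshape(1, -1)   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]                   # mixed strategy, game value

if __name__ == "__main__":
    # Matching pennies: value 0, uniform mixed strategy.
    M = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(maximin_strategy(M))
```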
The edit distance between two strings is defined as the smallest number of insertions, deletions, and substitutions needed to transform one string into the other. Approximating edit distance in subquadratic time is "one of the biggest unsolved problems in the field of combinatorial pattern matching" [20]. Our main result is a quantum constant-approximation algorithm for computing the edit distance in truly subquadratic time. More precisely, we give an O(n^1.858) quantum algorithm that approximates the edit distance within a factor of 7. We further extend this result to an O(n^1.781) quantum algorithm that approximates the edit distance within a larger constant factor. Our solutions are based on a framework for approximating edit distance in parallel settings. This framework requires, as a black box, an algorithm that computes the distances of several smaller strings all at once. For a quantum algorithm, we reduce the black box to metric estimation and provide efficient algorithms for approximating it. We further show that this framework enables us to approximate edit distance in distributed settings. To this end, we provide a MapReduce algorithm to approximate edit distance within a factor of 3, with sublinearly many machines and sublinear memory. Also, our algorithm runs in a logarithmic number of rounds.
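For reference, the exact quadratic-time dynamic program that these subquadratic approximation algorithms are measured against is the textbook computation below (a standard sketch, not taken from the paper).

```python
# Textbook O(n*m) dynamic program for exact edit distance (insertions,
# deletions, substitutions), using a single rolling row of the DP table.
def edit_distance(a: str, b: str) -> int:
    n, m = len(a), len(b)
    dp = list(range(m + 1))                 # dp[j] = distance between "" and b[:j]
    for i in range(1, n + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete a[i-1]
                        dp[j - 1] + 1,      # insert b[j-1]
                        prev_diag + (a[i - 1] != b[j - 1]))  # substitute (or match)
            prev_diag = cur
    return dp[m]

if __name__ == "__main__":
    print(edit_distance("kitten", "sitting"))   # 3
```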
We study the problem of selling n items to a single buyer with an additive valuation function. We consider the valuations of the items to be correlated, i.e., the desirabilities of the buyer for the items are not drawn independently. Ideally, the goal is to design a mechanism that maximizes the revenue. However, it has been shown that a revenue-optimal mechanism might be very complicated and, as a result, inapplicable to real-world auctions. Therefore, our focus is on designing a simple mechanism that achieves a constant fraction of the optimal revenue. Babaioff et al. [3] propose a simple mechanism that achieves a constant fraction of the optimal revenue in the independent setting with a single additive buyer. However, they leave the following problem as an open question: "Is there a simple, approximately optimal mechanism for a single additive buyer whose value for n items is sampled from a common base-value distribution?" Babaioff et al. show that a constant approximation factor of the optimal revenue can be achieved by either selling the items separately or as a whole bundle in the independent setting. We show a similar result for the correlated setting when the desirabilities of the buyer are drawn from a common base-value distribution. It is worth mentioning that the core decomposition lemma, which is at the heart of the proofs of efficiency for such mechanisms, does not hold in correlated settings. Therefore, we propose a modified version of this lemma that is applicable to correlated settings as well. Although we apply this technique to show the proposed mechanism can guarantee a constant fraction of the optimal revenue under a very weak form of correlation, this method alone cannot directly establish the efficiency of the mechanism under stronger correlations. Therefore, via a combinatorial approach we reduce the problem to an auction with a weak correlation to which the core decomposition technique is applicable. In addition, we introduce a generalized model of correlation for items and show that the proposed mechanism achieves an O(log k) approximation factor of the optimal revenue in that setting.
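The two simple mechanisms under comparison, selling each item separately at its optimal posted price versus selling the grand bundle at a single price, are easy to simulate. The following toy Monte Carlo sketch uses a made-up shared-base-value distribution purely for illustration; the distribution and helper names are assumptions, not the paper's model.

```python
# Toy comparison of "sell separately" vs. "sell the grand bundle" for one
# additive buyer whose item values share a common base value (illustrative
# correlation model, not the paper's exact definition).
import numpy as np

rng = np.random.default_rng(0)

def sample_values(n_items, n_samples):
    base = rng.exponential(1.0, size=(n_samples, 1))            # shared base value
    idiosyncratic = rng.exponential(1.0, size=(n_samples, n_items))
    return base + idiosyncratic                                  # correlated item values

def best_posted_price_revenue(values_1d):
    # Empirically optimal take-it-or-leave-it price: maximize p * Pr[v >= p].
    v = np.sort(values_1d)[::-1]                                 # descending
    sell_prob = np.arange(1, len(v) + 1) / len(v)
    return float(np.max(v * sell_prob))

def compare(n_items=5, n_samples=20000):
    V = sample_values(n_items, n_samples)
    srev = sum(best_posted_price_revenue(V[:, j]) for j in range(n_items))
    brev = best_posted_price_revenue(V.sum(axis=1))
    return srev, brev

if __name__ == "__main__":
    print(compare())   # revenue of selling separately vs. bundling, on samples
```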
The knapsack problem is a fundamental problem in combinatorial optimization. It has been studied extensively from theoretical as well as practical perspectives, as it is one of the most well-known NP-hard problems. The goal is to pack a knapsack of size t with the maximum value from a collection of n items with given sizes and values. Recent evidence suggests that a classic O(nt) dynamic-programming solution for the knapsack problem might be the fastest in the worst case. In fact, solving the knapsack problem was shown to be computationally equivalent to the (min, +) convolution problem, which is thought to be facing a quadratic-time barrier. This hardness is in contrast to the more famous (+, ·) convolution (generally known as polynomial multiplication), which has an O(n log n)-time solution via the Fast Fourier Transform. Our main results are algorithms with near-linear running times (in terms of the size of the knapsack and the number of items) for the knapsack problem, if either the values or the sizes of the items are small integers. More specifically, if item sizes are integers bounded by s_max, the running time of our algorithm is Õ((n + t)·s_max). If the item values are integers bounded by v_max, our algorithm runs in time Õ(n + t·v_max). The best previously known running times were O(nt), O(n^2·s_max), and O(n·s_max·v_max) (Pisinger, J. of Alg., 1999). At the core of our algorithms lies the prediction technique: roughly speaking, this new technique enables us to compute the convolution of two vectors in time O(n·e_max) when an approximation of the solution within an additive error of e_max is available. Our results also improve the best known strongly polynomial time solutions for knapsack. In the limited-size setting, when the items have multiplicities, the fastest strongly polynomial time algorithms for knapsack run in time O(n^2·s_max^2) and O(n^3·s_max^2) for the cases of infinite and given multiplicities, respectively. Our results improve both running times by a factor of Ω(n·max{1, n/s_max}).
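For context, the classic O(nt) dynamic program referenced above is the standard capacity-indexed table shown below; the paper's near-linear-time algorithms (based on structured (min, +)-convolutions and the prediction technique) are not reproduced in this simple baseline sketch.

```python
# Classic O(n*t) dynamic program for 0/1 knapsack: dp[c] holds the best value
# achievable with capacity c. This is the baseline the abstract refers to; the
# paper's faster, convolution-based algorithms are not shown here.
def knapsack(sizes, values, t):
    dp = [0] * (t + 1)
    for s, v in zip(sizes, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(t, s - 1, -1):
            dp[c] = max(dp[c], dp[c - s] + v)
    return dp[t]

if __name__ == "__main__":
    print(knapsack(sizes=[3, 4, 2], values=[30, 50, 15], t=6))  # 65 (items of size 4 and 2)
```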