We study fair allocation of indivisible goods to agents with unequal entitlements. Fair allocation has been the subject of many studies in both divisible and indivisible settings. Our emphasis is on the case where the goods are indivisible and agents have unequal entitlements. This problem is a generalization of the work by Procaccia and Wang [20], wherein the agents are assumed to be symmetric with respect to their entitlements. Although Procaccia and Wang show that an almost fair (constant approximation) allocation exists in their setting, our main result is in sharp contrast to their observation. We show that, in some cases with n agents, no allocation can guarantee better than a 1/n approximation of a fair allocation when the entitlements are not necessarily equal. Furthermore, we devise a simple algorithm that ensures a 1/n approximation guarantee. Our second result is for a restricted version of the problem where the valuation of every agent for each good is bounded by the total value he wishes to receive in a fair allocation. Although this assumption might seem to hold without loss of generality, we show it enables us to find a 1/2 approximation fair allocation via a greedy algorithm. Finally, we run some experiments on real-world data and show that, in practice, a fair allocation is likely to exist. We also support our experiments by showing positive results for two stochastic variants of the problem, namely stochastic agents and stochastic items.
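The abstract above does not spell out either the simple 1/n algorithm or the greedy procedure. As a purely illustrative toy (our own sketch, not the paper's method), the following Python snippet shows one natural greedy heuristic for dividing indivisible goods among agents with unequal entitlements: repeatedly hand the most valuable remaining good to the agent who is furthest below their entitled share of the value distributed so far. The function name, the valuation format, and the tie-breaking rule are all our own assumptions.

def weighted_greedy_allocation(values, entitlements):
    # Illustrative toy, NOT the algorithm from the paper.
    # values[i][g]: agent i's (additive) value for good g.
    # entitlements: positive weights summing to 1.
    n, m = len(values), len(values[0])
    received = [0.0] * n                    # value agent i holds so far
    bundles = [[] for _ in range(n)]
    # Visit goods in decreasing order of average value across agents.
    goods = sorted(range(m), key=lambda g: -sum(v[g] for v in values))
    for g in goods:
        total = sum(received) or 1.0
        # Deficit: entitled fraction minus fraction of value received.
        i = max(range(n), key=lambda i: entitlements[i] - received[i] / total)
        bundles[i].append(g)
        received[i] += values[i][g]
    return bundles

# Two agents entitled to 2/3 and 1/3 of three goods:
print(weighted_greedy_allocation([[6, 3, 1], [6, 3, 1]], [2/3, 1/3]))
# -> [[0, 2], [1]]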
Sorting extremely large datasets is a frequently occurring task in practice. These datasets are usually much larger than the computer's main memory; thus, external memory sorting algorithms, first introduced by Aggarwal and Vitter, are often used. The complexity of comparison-based external memory sorting has been understood for decades by now; however, the situation remains elusive if we assume the keys to be sorted are integers. In internal memory, one can sort a set of n integer keys of Θ(lg n) bits each in O(n) time using the classic Radix Sort algorithm; however, in external memory, no integer sorting algorithms faster than the simple comparison-based ones are known. Whether such algorithms exist has remained a central open problem in external memory algorithms for more than three decades.
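For reference, the internal-memory O(n) bound quoted above is achieved by least-significant-digit radix sort. Below is a minimal Python sketch of that classic algorithm (the function and parameter names are our own choices; the abstract itself contains no code).

def radix_sort(keys, key_bits=32, digit_bits=8):
    # Stable LSD radix sort: key_bits/digit_bits passes of counting sort.
    # With key_bits = Theta(lg n) and digit_bits = Theta(lg n), this runs
    # in O(n) total time, matching the bound quoted above.
    mask = (1 << digit_bits) - 1
    for shift in range(0, key_bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for k in keys:                      # data-dependent bucket access
            buckets[(k >> shift) & mask].append(k)
        keys = [k for bucket in buckets for k in bucket]
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]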
In this paper, we present a tight conditional lower bound on the complexity of external memory sorting of integers. Our lower bound is based on a famous conjecture in network coding by Li and Li, who conjectured that network coding cannot help anything beyond the standard multicommodity flow rate in undirected graphs.

The only previous work connecting the Li and Li conjecture to lower bounds for algorithms is due to Adler et al., who obtain relatively simple lower bounds for oblivious algorithms (algorithms whose memory access pattern is fixed and independent of the input data). Unfortunately, obliviousness is a strong limitation, especially for integer sorting: we show that the Li and Li conjecture implies an Ω(n lg n) lower bound for internal memory oblivious sorting when the keys are Θ(lg n) bits. This is in sharp contrast to the classic (nonoblivious) Radix Sort algorithm. Indeed, going beyond obliviousness is highly nontrivial; we need to introduce several new methods and involved techniques, which are of independent interest, to obtain our tight lower bound for external memory integer sorting.
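To make the obliviousness distinction concrete, here is a small illustration of our own (not from the paper): odd-even transposition sort compares a fixed, data-independent sequence of index pairs, so it is oblivious, whereas radix sort's bucket accesses depend on the key values themselves.

def odd_even_transposition_sort(a):
    # Oblivious: the index pairs compared below are fixed in advance and
    # never depend on the data; only the compare-exchange outcomes do.
    # Contrast with radix_sort above, whose bucket writes depend on keys.
    a = list(a)
    n = len(a)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):  # fixed, data-independent pairs
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a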
We consider the following stochastic matching problem on both weighted and unweighted graphs: A graph G = (V, E) along with a parameter p ∈ (0, 1) is given in the input. Each edge of G is realized independently with probability p. The goal is to select a subgraph H of G, with degree bounded by a function of p alone, such that the expected size/weight of a maximum realized matching of H is close to that of G. This model of stochastic matching has attracted significant attention over recent years due to its various applications in kidney exchange, online labor markets, and other matching markets. The most fundamental open question is the best approximation factor achievable by such algorithms, which are referred to in the literature as non-adaptive algorithms. Prior work has identified breaking the (near) half-approximation as a barrier for both weighted and unweighted graphs. Our main results are as follows:

• We analyze a simple and clean algorithm and show that for unweighted graphs, it finds an (almost) 4√2 − 5 (≈ 0.6568) approximation by querying O(log(1/p)/p) edges per vertex. This improves over the state-of-the-art 0.5001 approximation of Assadi et al. [EC'17].

• We show that the same algorithm achieves a 0.501 approximation for weighted graphs by querying O(log(1/p)/p) edges per vertex. This is the first algorithm to break the 0.5 approximation barrier for weighted graphs. It also improves the per-vertex queries of the state-of-the-art by Yamaguchi and Maehara [SODA'18] and Behnezhad and Reyhani [EC'18].

Interestingly, prior results were all based on similar algorithms and differed only in the analysis. Our algorithms are fundamentally different, yet very simple and natural. For the analysis, we introduce a number of procedures that construct heavy fractional matchings. We consider the new algorithms and our analytical tools to be the main contributions of this paper.
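The abstract does not describe the new algorithm itself. For intuition, the sketch below illustrates the classic non-adaptive template that several of the cited prior works build on: construct H as a union of R = O(log(1/p)/p) edge-disjoint matchings, then query only the edges of H. The use of networkx and all identifiers are our own assumptions, and this is explicitly not the paper's improved algorithm.

import math
import random
import networkx as nx

def build_subgraph(G, p):
    # Prior-work-style non-adaptive template, NOT the paper's algorithm:
    # union of R edge-disjoint maximum matchings of the remaining edges.
    R = max(1, math.ceil(math.log(1 / p) / p))   # per-vertex query budget
    H = nx.Graph()
    H.add_nodes_from(G)
    remaining = G.copy()
    for _ in range(R):
        M = nx.max_weight_matching(remaining, maxcardinality=True)
        if not M:
            break
        H.add_edges_from(M)            # each round raises degrees by <= 1
        remaining.remove_edges_from(M)
    return H

def realized_matching_size(H, p):
    # Each queried edge of H survives independently with probability p.
    realized = nx.Graph(e for e in H.edges if random.random() < p)
    return len(nx.max_weight_matching(realized, maxcardinality=True))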