We show that for k ≥ 3 even the Ω(n) level of the Lasserre hierarchy cannot disprove a random k-CSP instance over any predicate type implied by k-XOR constraints, for example k-SAT or k-XOR. (One constraint is said to imply another if the latter is true whenever the former is. For example, k-XOR constraints imply k-CNF constraints.) As a result, the Ω(n)-level Lasserre relaxation fails to approximate such CSPs better than the trivial, random algorithm. As corollaries, we obtain Ω(n)-level integrality gaps for the Lasserre hierarchy of 7/6 − ε for VertexCover, 2 − ε for k-UniformHypergraphVertexCover, and any constant for k-UniformHypergraphIndependentSet. This is the first construction of a Lasserre integrality gap. Our construction is notable for its simplicity. It simplifies, strengthens, and helps to explain several previous results.
Constrained submodular maximization problems have long been studied, most recently in the context of auctions and computational advertising, with near-optimal results known under a variety of constraints when the submodular function is monotone. The case of non-monotone submodular maximization is less well understood: the first approximation algorithms even for the unconstrained setting were given by Feige et al. (FOCS '07). More recently, Lee et al. (STOC '09, APPROX '09) show how to approximately maximize non-monotone submodular functions when the constraints are given by the intersection of p matroid constraints; their algorithm is based on local-search procedures that consider p-swaps, and hence the running time may be n^{Ω(p)}, implying their algorithm is polynomial-time only for constantly many matroids. In this paper, we give algorithms that work for p-independence systems (which generalize constraints given by the intersection of p matroids), where the running time is poly(n, p). Both our algorithms and analyses are simple: our algorithm essentially reduces the non-monotone maximization problem to multiple runs of the greedy algorithm previously used in the monotone case (see the sketch below). Our idea of using existing algorithms for monotone functions to solve the non-monotone case also works for maximizing a submodular function with respect to a knapsack constraint: we get a simple greedy-based constant-factor approximation for this problem. With these simpler algorithms, we are able to adapt our approach to constrained non-monotone submodular maximization to the (online) secretary setting, where elements arrive one at a time in random order, and the algorithm must make irrevocable decisions about whether or not to select each element as it arrives. We give constant approximations in this secretary setting when the algorithm is constrained subject to a uniform matroid or a partition matroid, and give an O(log k) approximation when it is constrained by a general matroid of rank k.
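The abstract states the algorithmic idea without pseudocode. Below is a minimal Python sketch of the repeated-greedy reduction it describes; the value oracle f, the independence oracle is_independent, and the number of rounds are hypothetical stand-ins, and the paper's actual algorithm additionally post-processes each greedy solution with an unconstrained-maximization step (à la Feige et al.).

```python
def greedy(elements, f, is_independent):
    """Standard greedy for submodular maximization over an
    independence system: repeatedly add the feasible element
    with the largest positive marginal gain."""
    S = set()
    candidates = set(elements)
    while candidates:
        best, best_gain = None, 0.0
        for e in candidates:
            if is_independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:   # no feasible element improves f
            break
        S.add(best)
        candidates.discard(best)
    return S


def repeated_greedy(elements, f, is_independent, rounds):
    """Sketch of the non-monotone reduction: run the monotone
    greedy several times, each time on the elements not yet
    used, and keep the best solution found."""
    remaining, best = set(elements), set()
    for _ in range(rounds):
        S = greedy(remaining, f, is_independent)
        if f(S) > f(best):
            best = S
        remaining -= S
    return best
```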
Roditty and Vassilevska W. [STOC 13] show that in Õ(m√n) time, one can compute for each v ∈ V in an undirected graph an estimate ê(v) for the eccentricity e(v) such that max{R, (2/3)·e(v)} ≤ ê(v) ≤ min{D, (3/2)·e(v)}, where R = min_v e(v) is the radius and D = max_v e(v) is the diameter of the graph. Here we improve the approximation guarantee by showing that a variant of the same algorithm can achieve estimates ẽ(v) with (3/5)·e(v) ≤ ẽ(v) ≤ e(v).
In the setting where information cannot be verified, we propose a simple yet powerful information-theoretic framework, the Mutual Information Paradigm, for information elicitation mechanisms. Our framework pays every agent a measure of mutual information between her signal and a peer's signal. We require that the mutual information measurement have the key property that any "data processing" on the two random variables will decrease the mutual information between them. We identify such information measures that generalize Shannon mutual information. Our Mutual Information Paradigm overcomes the two main challenges in information elicitation without verification: (1) how to incentivize effort and avoid agents colluding to report random or identical responses, and (2) how to motivate agents who believe they are in the minority to report truthfully. Aided by the information measures we found, (1) we use the paradigm to design a family of novel mechanisms where truth-telling is a dominant strategy and any other strategy will decrease every agent's expected payment (in the multi-question, detail-free, minimal setting where the number of questions is large); (2) we show the versatility of our framework by providing a unified theoretical understanding of existing mechanisms (Peer Prediction [Miller 2005], Bayesian Truth Serum [Prelec 2004], and Dasgupta and Ghosh [2013]) by mapping them into our framework such that the theoretical results of those existing mechanisms can be reconstructed easily. We also give an impossibility result which illustrates, in a certain sense, the optimality of our framework.
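To make the payment rule concrete: with Shannon mutual information as the measure (one instance of the measures the paper generalizes), an agent's payment in the multi-question setting can be estimated from the empirical joint distribution of her answers and a peer's answers. The sketch below is illustrative only; the function name and the plug-in estimator are our assumptions, not the paper's exact mechanism.

```python
import math
from collections import Counter

def empirical_mutual_information(xs, ys):
    """Plug-in estimate of Shannon mutual information I(X;Y)
    from paired samples, e.g. one agent's answers and a peer's
    answers across many questions."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    marg_x, marg_y = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats
        mi += (c / n) * math.log(c * n / (marg_x[x] * marg_y[y]))
    return mi

# Hypothetical payment: agent i is paid the estimated mutual
# information between her answer vector and a random peer's.
payment = empirical_mutual_information([1, 0, 1, 1, 0, 1, 0, 0],
                                       [1, 0, 1, 0, 0, 1, 0, 1])
```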
We consider the problem of conducting a survey with the goal of obtaining an unbiased estimator of some population statistic when individuals have unknown costs (drawn from a known prior) for participating in the survey. Individuals must be compensated for their participation and are strategic agents, and so the payment scheme must incentivize truthful behavior. We derive optimal truthful mechanisms for this problem for the two goals of minimizing the variance of the estimator given a fixed budget, and minimizing the expected cost of the survey given a fixed variance goal.
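For intuition on the unbiasedness requirement: when individuals participate with heterogeneous probabilities (for example, probabilities induced by cost-dependent payments), a standard way to keep an estimate of a population mean unbiased is inverse-probability (Horvitz-Thompson style) weighting. The sketch below, with assumed known participation probabilities probs, is a generic illustration rather than the optimal mechanism derived in the paper.

```python
def ht_mean_estimate(bits, probs, participated):
    """Inverse-probability-weighted estimate of the population
    mean: each participant's bit is weighted by 1/p_i, so the
    expectation over random participation equals the true mean."""
    n = len(bits)
    return sum(b / p
               for b, p, in_sample in zip(bits, probs, participated)
               if in_sample) / n
```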
We study linear programming relaxations of Vertex Cover and Max Cut arising from repeated applications of the "lift-and-project" method of Lovász and Schrijver, starting from the standard linear programming relaxation. For Vertex Cover, Arora, Bollobás, Lovász and Tourlakis prove that the integrality gap remains at least 2 − ε after Ω_ε(log n) rounds, where n is the number of vertices, and Tourlakis proves that the integrality gap remains at least 1.5 − ε after Ω((log n)^2) rounds. Fernandez de la Vega and Kenyon prove that the integrality gap of Max Cut is at most 1/2 + ε after any constant number of rounds. (Their result also applies to the more powerful Sherali-Adams method.) We prove that the integrality gap of Vertex Cover remains at least 2 − ε after Ω_ε(n) rounds, and that the integrality gap of Max Cut remains at most 1/2 + ε after Ω_ε(n) rounds.
A community in a social network is usually understood to be a group of nodes more densely connected with each other than with the rest of the network. This is an important concept in most domains where networks arise: social, technological, biological, etc. For many years, algorithms for finding communities implicitly assumed communities are non-overlapping (leading to the use of clustering-based approaches), but there is increasing interest in finding overlapping communities. A barrier to finding communities is that the solution concept is often defined in terms of an NP-complete problem such as Clique or Hierarchical Clustering. This paper seeks to initiate a rigorous approach to the problem of finding overlapping communities, where "rigorous" means that we clearly state the following: (a) the object sought by our algorithm; (b) the assumptions about the underlying network; (c) the (worst-case) running time. Our assumptions about the network lie between worst-case and average-case. An average-case analysis would require a precise probabilistic model of the network, on which there is currently no consensus. However, some plausible assumptions about network parameters can be gleaned from a long body of work in the sociology community spanning five decades focusing on the study of individual communities and ego-centric networks (in graph-theoretic terms, this is the subgraph induced on a node's neighborhood). Thus our assumptions are somewhat "local" in nature. Nevertheless, they suffice to permit a rigorous analysis of the running time of algorithms that recover global structure. Our algorithms use random sampling similar to that in property testing and algorithms for dense graphs. We note, however, that our networks are not necessarily dense graphs, not even in local neighborhoods. Our algorithms explore a local-global relationship between ego-centric and socio-centric networks that we hope will provide a fruitful framework for future work both in computer science and sociology.
We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy, i.e., they not only account for the payments they expect to receive from the mechanism, but also for their privacy costs from any information revealed about them by the mechanism's outcome (the computed statistic as well as the payments) when determining their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information? We design a differentially private peer-prediction mechanism [Miller et al. 2005] that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on the bits b_i, but does not need full knowledge of the marginal distribution on the costs c_i, instead requiring only an approximate upper bound. Our mechanism guarantees ε-differential privacy to each agent i against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the n − 1 other agents j ≠ i. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent [Chen et al. 2013], the cost of running the survey goes to 0 as the number of agents diverges.
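The abstract does not detail how the private estimate is released; a standard building block for an ε-differentially private bit-sum is the Laplace mechanism, sketched below. This is an assumption for illustration, not the paper's exact mechanism, which must additionally privatize the peer-prediction payments.

```python
import numpy as np

def private_fraction_of_ones(bits, epsilon):
    """epsilon-differentially private estimate of the fraction of
    1-bits: changing one agent's bit shifts the count by at most 1
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices."""
    noisy_count = sum(bits) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return noisy_count / len(bits)
```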