We consider a single class open queueing network, also known as a generalized Jackson network (GJN). A classical result in heavy-traffic theory asserts that the sequence of normalized queue length processes of the GJN converges weakly to a reflected Brownian motion (RBM) in the orthant as the traffic intensity approaches unity. However, barring simple instances, it was not known whether the stationary distribution of the RBM provides a valid approximation for the steady state of the original network. In this paper we resolve this open problem by proving that the rescaled stationary distribution of the GJN converges to the stationary distribution of the RBM, thus validating a so-called "interchange of limits" for this class of networks. Our method of proof combines Lyapunov function techniques, strong approximations, and tail probability bounds that yield tightness of the sequence of stationary distributions of the GJN.
We construct a deterministic fully polynomial time approximation scheme (FPTAS) for computing the total number of matchings in a bounded degree graph. Additionally, for an arbitrary graph, we construct a deterministic algorithm for computing approximately the number of matchings within running time exp(O(√n log² n)), where n is the number of vertices. Our approach is based on the correlation decay technique originating in statistical physics. Previously this approach was successfully used for approximately counting the number of independent sets and colorings in some classes of graphs [1], [24], [6]. Thus we add another problem to the small, but growing, class of #P-complete problems for which there is now a deterministic FPTAS.
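A minimal sketch of the correlation-decay idea behind such counting (function names are illustrative, not from the paper): the number of matchings Z(G) telescopes into a product of unmatched-vertex probabilities, each computable by a tree-like recursion. The FPTAS truncates that recursion at logarithmic depth and relies on correlation decay in bounded-degree graphs; run to full depth, as in the toy code below, the recursion is exact.

```python
# Counting matchings via unmatched-vertex probabilities. Uses the identity
#   Z(G) / Z(G - v) = 1 + sum over neighbors u of Z(G - v - u) / Z(G - v),
# where Z counts matchings, so P(v unmatched) = Z(G - v) / Z(G) and
#   Z(G) = prod_i 1 / P(v_i unmatched in G - {v_1, ..., v_{i-1}}).

def p_unmatched(adj, v, removed):
    """P(v is unmatched in a uniform random matching of G - removed).

    Exact when the recursion runs to full depth; the FPTAS in the
    abstract instead truncates it at depth O(log n), which correlation
    decay makes accurate in bounded-degree graphs.
    """
    removed = removed | {v}
    total = 1.0
    for u in adj[v]:
        if u not in removed:
            total += p_unmatched(adj, u, removed)
    return 1.0 / total

def count_matchings(adj):
    """Total number of matchings (including the empty one) of G."""
    z, removed = 1.0, frozenset()
    for v in sorted(adj):
        z /= p_unmatched(adj, v, removed)
        removed = removed | {v}
    return round(z)

# 4-cycle: 1 empty + 4 single edges + 2 disjoint pairs = 7 matchings.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(count_matchings(c4))  # -> 7
```

The telescoping product is the standard self-reduction for counting problems: approximating each marginal to within (1 ± ε/n) yields a (1 ± ε) approximation of Z(G).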
We establish the existence of free energy limits for several sparse random hypergraph models corresponding to certain combinatorial models on the Erdős–Rényi graph G(N, c/N) and the random r-regular graph G(N, r). For a variety of models, including independent sets, MAX-CUT, Coloring and K-SAT, we prove that the free energy both at positive and zero temperature, appropriately rescaled, converges to a limit as the size of the underlying graph diverges to infinity. In the zero temperature case, this is interpreted as the existence of the scaling limit for the corresponding combinatorial optimization problem. For example, as a special case we prove that the size of a largest independent set in these graphs, normalized by the number of nodes, converges to a limit w.h.p. Our approach is based on the interpolation method of Guerra and Toninelli. Among other applications, this method was used to prove the existence of free energy limits for Viana-Bray and K-SAT models on Erdős–Rényi graphs. The case of zero temperature was treated by taking limits of positive temperature models. We provide instead a simpler combinatorial approach and work with the zero temperature case (optimization) directly, both in the case of the Erdős–Rényi graph G(N, c/N) and the random regular graph G(N, r). In addition we establish a large deviations principle for the satisfiability property of constraint satisfaction problems such as Coloring, K-SAT and NAE-K-SAT. For example, let p(c, q, N) and p(r, q, N) denote, respectively, the probability that the random graphs G(N, c/N) and G(N, r) are properly q-colorable. We prove the existence of the limits of N⁻¹ log p(c, q, N) and N⁻¹ log p(r, q, N) as N → ∞.
In this article we propose new methods for computing the asymptotic value of the logarithm of the partition function (free energy) for certain statistical physics models on certain types of finite graphs, as the size of the underlying graph goes to infinity. The two models considered are the hard-core (independent set) model when the activity parameter λ is small, and the Potts (q-coloring) model. We only consider graphs with large girth. In particular, we prove that the rescaled logarithm of the number of independent sets of any r-regular graph with large girth is asymptotically the same constant, provided r ≤ 5. For example, we show that every 4-regular n-node graph with large girth has approximately (1.494…)^n independent sets, for large n. Further, we prove that for every r-regular graph with r ≥ 2, with n nodes and large girth, the number of proper q-colorings with q ≥ r + 1 is approximately (q(1 − 1/q)^{r/2})^n, for large n. We also show that these results hold for random regular graphs with high probability (w.h.p.) as well. As a byproduct of our method we obtain simple algorithms for computing approximately the logarithm of the number of independent sets and proper colorings in low-degree graphs with large girth. These algorithms are deterministic and use certain correlation decay properties of the corresponding Gibbs measures and their implications for uniqueness of the Gibbs measures on infinite trees, as well as a simple cavity trick well known in the physics and Markov chain sampling literature.
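The coloring estimate, taken here to be (q(1 − 1/q)^{r/2})^n proper q-colorings of an r-regular large-girth graph with q ≥ r + 1, can be sanity-checked on the one regular family with a classical closed form: cycles (r = 2), where the n-cycle has exactly (q − 1)^n + (−1)^n (q − 1) proper q-colorings. A minimal check that the normalized logarithms agree:

```python
# Sanity check of the large-girth coloring asymptotics on cycles (r = 2),
# using the classical chromatic polynomial of the n-cycle evaluated at q:
# number of proper q-colorings of C_n = (q - 1)^n + (-1)^n * (q - 1).
import math

def cycle_colorings(n, q):
    """Exact number of proper q-colorings of the n-cycle."""
    return (q - 1) ** n + (-1) ** n * (q - 1)

def large_girth_estimate(n, q, r):
    """The abstract's estimate (q * (1 - 1/q)^(r/2))^n for r-regular graphs."""
    return (q * (1 - 1 / q) ** (r / 2)) ** n

n, q = 20, 3
exact = cycle_colorings(n, q)          # 2^20 + 2
est = large_girth_estimate(n, q, r=2)  # (3 * 2/3)^20 = 2^20
# The per-vertex free energies (normalized logs) coincide as n grows:
print(math.log(exact) / n, math.log(est) / n)
```

For r = 2 the estimate collapses to (q − 1)^n, the leading term of the exact count, so the two normalized logarithms differ by O(q^{1−n}/n).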
Local algorithms on graphs are algorithms that run in parallel on the nodes of a graph to compute some global structural feature of the graph. Such algorithms use only local information available at nodes to determine local aspects of the global structure, while also potentially using some randomness. Recent research has shown that such algorithms show significant promise in computing structures like large independent sets in graphs locally. Indeed, the promise led to a conjecture by Hatami, Lovász and Szegedy [HLS] that local algorithms may be able to compute maximum independent sets in (sparse) random d-regular graphs. In this paper we refute this conjecture and show that, asymptotically as d → ∞, every independent set produced by a local algorithm is smaller than the largest by a multiplicative factor of 1/2 + 1/(2√2) ≈ 0.854. Our result is based on an important clustering phenomenon predicted first in the literature on spin glasses, and recently proved rigorously for a variety of constraint satisfaction problems on random graphs. Such properties suggest that the geometry of the solution space can be quite intricate. The specific clustering property that we prove and apply in this paper shows that typically every two large independent sets in a random graph either have a significant intersection or a nearly empty intersection. As a result, large independent sets are clustered according to their proximity to each other. While the clustering property was postulated earlier as an obstruction to the success of local algorithms, such as, for example, the Belief Propagation algorithm, our result is the first in which the clustering property is used to formally prove limits on local algorithms.
With random inputs, certain decision problems undergo a "phase transition." We prove similar behavior in an optimization context. Given a conjunctive normal form (CNF) formula F on n variables and with m k-variable clauses, denote by max F the maximum number of clauses satisfiable by a single assignment of the variables. (Thus the decision problem k-SAT is to determine if max F is equal to m.) With the formula F chosen at random, the expectation of max F is trivially bounded by (3/4)m ≤ E max F ≤ m. We prove that for random formulas with m = cn clauses: for constants c < 1, E max F is cn − Θ(1/n); for large c, it approaches ((3/4)c + Θ(√c))n; and in the "window" c = 1 + Θ(n^{−1/3}), it is cn − Θ(1). Our full results are more detailed, but this already shows that the optimization problem MAX 2-SAT undergoes a phase transition just as the 2-SAT decision problem does, and at the same critical value c = 1. Most of our results are established without reference to the analogous propositions for decision 2-SAT, and can be used to reproduce them. We consider "online" versions of MAX 2-SAT, and show that for one version the obvious greedy algorithm is optimal; all other natural questions remain open. We can extend only our simplest MAX 2-SAT results to MAX k-SAT, but we conjecture a "MAX k-SAT limiting function conjecture" analogous to the folklore "satisfiability threshold conjecture," but open even for k = 2. Neither conjecture immediately implies the other, but it is natural to further conjecture a connection between them. We also prove analogous results for random MAX CUT.
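The trivial lower bound in (3/4)m ≤ E max F ≤ m comes from averaging: a uniform random assignment satisfies each clause on two distinct variables with probability 3/4, so some assignment satisfies at least (3/4)m clauses. A toy brute-force illustration (instance sizes, seed, and helper names are arbitrary choices, not from the paper):

```python
# Brute-force illustration of the trivial bound (3/4)m <= max F <= m
# for random MAX 2-SAT on a small seeded instance.
import itertools
import random

def random_2sat(n_vars, m, rng):
    """m random 2-clauses over n_vars variables; literal = (var, sign)."""
    clauses = []
    for _ in range(m):
        v1, v2 = rng.sample(range(n_vars), 2)  # two distinct variables
        clauses.append(((v1, rng.choice([True, False])),
                        (v2, rng.choice([True, False]))))
    return clauses

def max_f(n_vars, clauses):
    """max F: most clauses satisfiable by a single assignment."""
    best = 0
    for bits in itertools.product([True, False], repeat=n_vars):
        sat = sum(any(bits[v] == s for v, s in c) for c in clauses)
        best = max(best, sat)
    return best

rng = random.Random(0)
clauses = random_2sat(10, 16, rng)
f = max_f(10, clauses)
# The averaging argument: E over uniform assignments is exactly (3/4)m
# when each clause has two distinct variables, so max F >= (3/4)m.
print(f, 0.75 * len(clauses) <= f <= len(clauses))
```

The abstract's hard content is of course the much finer behavior of E max F − (3/4)m as c varies; the brute force only certifies the trivial envelope.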
As of May 2014 there were more than 100,000 patients on the waiting list for a kidney transplant from a deceased donor. Although the preferred treatment is a kidney transplant, every year there are fewer donors than new patients, so the wait for a transplant continues to grow. To address this shortage, kidney paired donation (KPD) programs allow patients with living but biologically incompatible donors to exchange donors through cycles or chains initiated by altruistic (nondirected) donors, thereby increasing the supply of kidneys in the system. In many KPD programs a centralized algorithm determines which exchanges will take place to maximize the total number of transplants performed. This optimization problem has proven challenging both in theory, because it is NP-hard, and in practice, because the algorithms previously used were unable to optimally search over all long chains. We give two new algorithms that use integer programming to optimally solve this problem, one of which is inspired by the techniques used to solve the traveling salesman problem. These algorithms provide the tools needed to find optimal solutions in practice.
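At its combinatorial core (setting aside chains initiated by altruistic donors), the optimization selects vertex-disjoint short cycles in the patient-donor compatibility digraph so as to maximize the number of pairs covered, i.e., transplants performed. A toy brute-force sketch; the example graph and function names are invented, and the paper's algorithms instead use integer programming to handle realistic sizes and long chains:

```python
# Toy kidney-exchange optimization: pick vertex-disjoint directed cycles
# of length <= 3 in the compatibility digraph (arc u -> v means pair u's
# donor is compatible with pair v's patient) to maximize covered pairs.
from itertools import combinations

def short_cycles(arcs, max_len=3):
    """All directed cycles of length 2 or 3, as vertex frozensets."""
    nodes = {u for a in arcs for u in a}
    cycles = set()
    for a in nodes:
        for b in nodes:
            if (a, b) in arcs and (b, a) in arcs and a < b:
                cycles.add(frozenset({a, b}))
            for c in nodes:
                if len({a, b, c}) == 3 and (a, b) in arcs \
                        and (b, c) in arcs and (c, a) in arcs:
                    cycles.add(frozenset({a, b, c}))
    return list(cycles)

def max_transplants(arcs):
    """Max number of pairs coverable by vertex-disjoint short cycles."""
    cycles = short_cycles(arcs)
    best = 0
    for k in range(len(cycles) + 1):
        for chosen in combinations(cycles, k):
            cover = set().union(*chosen) if chosen else set()
            if sum(len(c) for c in chosen) == len(cover):  # disjoint?
                best = max(best, len(cover))
    return best

# Pairs 0..3: 0 <-> 1 is a 2-cycle; 1 -> 2 -> 3 -> 1 is a 3-cycle that
# shares pair 1, so the two cycles cannot both be used.
arcs = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 1)}
print(max_transplants(arcs))  # -> 3 (the 3-cycle beats the 2-cycle)
```

The exponential enumeration over cycle subsets is exactly what becomes infeasible at national-registry scale, and long chains blow up the number of candidate structures further, which is why the paper's integer-programming formulations are needed in practice.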
We consider queueing systems with n parallel queues under a Join the Shortest Queue (JSQ) policy in the Halfin-Whitt heavy traffic regime. We use the martingale method to prove that a scaled process counting the number of idle servers and queues of length exactly 2 weakly converges to a two-dimensional reflected Ornstein-Uhlenbeck process, while processes counting longer queues converge to a deterministic system decaying to zero in constant time. This limiting system is comparable to that of the traditional Halfin-Whitt model, but there are key differences in the queueing behavior of the JSQ model. In particular, only a vanishing fraction of customers will have to wait, but those who do will incur a constant order waiting time.