48 Publications · 1,975 Citation Statements Received · 966 Citation Statements Given


Publications (ordered by most citations)

This paper presents a combinatorial polynomial-time algorithm for minimizing submodular functions, answering an open question posed in 1981 by Grötschel, Lovász, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest absolute function value. The paper also presents a strongly polynomial version in which the number of steps is bounded by a polynomial in the size of the underlying set, independent of the function values. A preliminary version has appeared in
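The value-oracle model referenced in these abstracts means the algorithm sees f only through queries f(S), and submodularity is the inequality f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B). A minimal sketch using the cut function of a small graph, a standard example of a submodular function (the graph and sets below are illustrative, not from the paper):

```python
from itertools import combinations

# Edges of a small undirected graph on vertices {0, 1, 2, 3}.
EDGES = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def cut(S):
    """Value oracle: number of edges with exactly one endpoint in S.
    Cut functions of graphs are submodular."""
    S = set(S)
    return sum(1 for u, v in EDGES if (u in S) != (v in S))

# Brute-force check of the submodular inequality
# f(A) + f(B) >= f(A | B) + f(A & B) over all pairs of subsets.
subsets = [set(c) for r in range(5) for c in combinations(range(4), r)]
assert all(cut(A) + cut(B) >= cut(A | B) + cut(A & B)
           for A in subsets for B in subsets)
```

A minimization algorithm in this model may only call `cut`; it never inspects `EDGES` directly.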

Submodular functions are a key concept in combinatorial optimization. Algorithms that involve submodular functions usually assume that they are given by a (value) oracle. Many interesting problems involving submodular functions can be solved using only polynomially many queries to the oracle, e.g., exact minimization or approximate maximization. In this paper, we consider the problem of approximating a non-negative, monotone, submodular function f on a ground set of size n everywhere, after only poly(n) oracle queries. Our main result is a deterministic algorithm that makes poly(n) oracle queries and derives a function f̂ such that, for every set S, f̂(S) approximates f(S) within a factor α(n), where α(n) = √(n+1) for rank functions of matroids and α(n) = O(√n log n) for general monotone submodular functions. Our result is based on approximately finding a maximum volume inscribed ellipsoid in a symmetrized polymatroid, and the analysis involves various properties of submodular functions and polymatroids. Our algorithm is tight up to logarithmic factors. Indeed, we show that no algorithm can achieve a factor better than Ω(√(n / log n)), even for rank functions of a matroid.
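The matroid rank functions in the α(n) = √(n+1) case are the prototypical monotone submodular functions. A small brute-force check for the rank function of a uniform matroid, r(S) = min(|S|, k) (the ground-set size and k are arbitrary choices for illustration):

```python
from itertools import combinations

K, N = 2, 4  # uniform matroid U(2, 4); illustrative sizes

def rank(S):
    """Rank function of the uniform matroid: any set of at most K
    elements is independent."""
    return min(len(S), K)

subsets = [set(c) for r in range(N + 1) for c in combinations(range(N), r)]

# Monotone: A <= B implies rank(A) <= rank(B).
assert all(rank(A) <= rank(B) for A in subsets for B in subsets if A <= B)
# Submodular: rank(A) + rank(B) >= rank(A | B) + rank(A & B).
assert all(rank(A) + rank(B) >= rank(A | B) + rank(A & B)
           for A in subsets for B in subsets)
```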

The state-of-the-art algorithms for solving the trust-region subproblem (TRS) are based on an iterative process, involving solutions of many linear systems, eigenvalue problems, subspace optimization, or line search steps. A relatively underappreciated fact, due to Gander, Golub, and von Matt [Linear Algebra Appl., 114 (1989), pp. 815-839], is that TRSs can be solved by one generalized eigenvalue problem, with no outer iterations. In this paper we rediscover this fact and discover its great practicality, which exhibits good performance both in accuracy and efficiency. Moreover, we generalize the approach in various directions, namely by allowing for an ellipsoidal constraint, dealing with the so-called hard case, and obtaining approximate solutions efficiently when high accuracy is unnecessary. We demonstrate that the resulting algorithm is a general-purpose TRS solver, effective both for dense and large-sparse problems, including the so-called hard case. Our algorithm is easy to implement: its essence is a few lines of MATLAB code.
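For context, the TRS is min ½xᵀAx + gᵀx subject to ‖x‖ ≤ Δ. A minimal sketch of the classical iterative baseline the abstract contrasts with (not the one-shot generalized eigenvalue method): for a positive definite diagonal A, bisect on the secular equation ‖(A + λI)⁻¹g‖ = Δ. The matrix, g, and Δ below are made-up illustrative values, and the hard case is not handled:

```python
from math import sqrt

# Diagonal TRS instance: minimize 0.5*x'Ax + g'x subject to ||x|| <= Delta,
# with A = diag(a). Illustrative values; A positive definite.
a = [2.0, 1.0]
g = [-3.0, -4.0]
Delta = 1.0

def x_of(lam):
    """Stationary point of the Lagrangian: x(lam) = -(A + lam*I)^{-1} g."""
    return [-gi / (ai + lam) for ai, gi in zip(a, g)]

def norm(x):
    return sqrt(sum(xi * xi for xi in x))

if norm(x_of(0.0)) <= Delta:
    x = x_of(0.0)                 # interior case: lam = 0 suffices
else:
    # Boundary case: ||x(lam)|| decreases in lam >= 0, so bisect on
    # the secular equation ||x(lam)|| = Delta.
    lo, hi = 0.0, 1.0
    while norm(x_of(hi)) > Delta:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm(x_of(mid)) > Delta else (lo, mid)
    x = x_of(0.5 * (lo + hi))
```

Each bisection step here amounts to a linear solve with A + λI, which is exactly the per-iteration cost the single-eigenproblem approach avoids.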

This paper addresses the problems of minimizing nonnegative submodular functions under covering constraints, which generalize the vertex cover, edge cover, and set cover problems. We give approximation algorithms for these problems exploiting the discrete convexity of submodular functions. We first present a rounding 2-approximation algorithm for the submodular vertex cover problem based on the half-integrality of the continuous relaxation problem, and show that the rounding algorithm can be performed by one application of submodular function minimization on a ring family. We also show that a rounding algorithm and a primal-dual algorithm for the submodular cost set cover problem are both constant factor approximation algorithms if the maximum frequency is fixed. In addition, we give an essentially tight lower bound on the approximability of the submodular edge cover problem.
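In the classical special case where the cost function is modular with unit vertex costs, a 2-approximation for vertex cover is already given by taking both endpoints of a maximal matching. A small sketch under that simplification (the graph is an arbitrary example, not from the paper):

```python
def vertex_cover_2approx(edges):
    """2-approximation for unweighted vertex cover: greedily build a
    maximal matching and take both endpoints of every matched edge.
    Every matched edge forces at least one endpoint into any cover,
    and matched edges are vertex-disjoint, hence the factor 2."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            cover |= {u, v}
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # feasibility
```

Here the cover has 4 vertices against an optimum of 2 ({1, 3}), matching the factor-2 guarantee exactly.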

This paper presents the first combinatorial polynomial-time algorithm for minimizing submodular functions, answering an open question posed in 1981 by Grötschel, Lovász, and Schrijver. The algorithm employs a scaling scheme that uses a flow in the complete directed graph on the underlying set with each arc capacity equal to the scaled parameter. The resulting algorithm runs in time bounded by a polynomial in the size of the underlying set and the length of the largest absolute function value. The paper also presents a strongly polynomial-time version that runs in time bounded by a polynomial in the size of the underlying set, independent of the function value.

Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [36] run in O(n^2 log nM · EO + n^3 log^{O(1)} nM) time and O(n^3 log^2 n · EO + n^4 log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [23], the current shortest non-deterministic proof [10] certifying the optimum value of a function requires Ω(n^2) function evaluations. The main contributions of this paper are subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^3 log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1, 1], we give an algorithm which in Õ(n^{5/3} · EO/ε^2) time returns an ε-additive approximate solution. At the heart of it, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n^2 bound: we show that algorithms which access only subgradients of the Lovász extension, and these include the empirically fast Fujishige-Wolfe heuristic [48, 15] and the theoretically best cutting plane methods [36], must make Ω(n) subgradient calls (even for functions whose range is {−1, 0, 1}).
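The Lovász extension mentioned above can be evaluated, and a subgradient obtained, by Edmonds' greedy ordering: sort the coordinates of x in decreasing order and take prefix differences of f along that order. A minimal sketch with a toy cut function (the graph is an arbitrary example; this shows only the subgradient oracle, not the paper's fast data structures):

```python
EDGES = [(0, 1), (1, 2), (0, 2)]  # toy triangle graph; cut functions are submodular

def f(S):
    """Cut function value oracle."""
    return sum(1 for u, v in EDGES if (u in S) != (v in S))

def lovasz(x):
    """Return (value, subgradient) of the Lovász extension of f at x,
    via the greedy ordering: grad[i] is the marginal gain of adding i
    along the decreasing order of x."""
    order = sorted(range(len(x)), key=lambda i: -x[i])
    value, grad, prefix, prev = 0.0, [0.0] * len(x), set(), f(set())
    for i in order:
        prefix.add(i)
        grad[i] = f(prefix) - prev
        value += grad[i] * x[i]
        prev = f(prefix)
    return value, grad

# On indicator vectors the extension agrees with f itself.
val, _ = lovasz([1.0, 1.0, 0.0])
assert val == f({0, 1})
```

A projected subgradient method would repeatedly call `lovasz`, step against `grad`, and clip the iterate back into [0, 1]^n; the paper's contribution is making each such update sublinear in n.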

scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
