2020
DOI: 10.1609/aaai.v34i02.5504

Optimization of Chance-Constrained Submodular Functions

Abstract: Submodular optimization plays a key role in many real-world problems. In many real-world scenarios, it is also necessary to handle uncertainty, and potentially disruptive events that violate constraints in stochastic settings need to be avoided. In this paper, we investigate submodular optimization problems with chance constraints. We provide a first analysis on the approximation behavior of popular greedy algorithms for submodular problems with chance constraints. Our results show that these algorithms are hi…
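The greedy algorithms analysed in the paper handle a knapsack-type chance constraint by replacing it with a deterministic surrogate. Below is a minimal sketch of that idea, assuming item weights are i.i.d. uniform on [a − δ, a + δ] and using a one-sided Chebyshev (Cantelli) tail bound as the surrogate; the paper also considers Chernoff-based surrogates, and all function names here are illustrative rather than the authors' code.

```python
import math

def chance_feasible(size, a, delta, budget, alpha):
    # Surrogate check for Pr[weight(S) > budget] <= alpha via the one-sided
    # Chebyshev (Cantelli) inequality, assuming 'size' items with weights
    # i.i.d. uniform on [a - delta, a + delta] (mean a, variance delta^2 / 3).
    mean = size * a
    var = size * delta ** 2 / 3.0
    return mean + math.sqrt(var * (1.0 - alpha) / alpha) <= budget

def greedy_chance_constrained(elements, f, a, delta, budget, alpha):
    # Classical greedy rule for a monotone submodular f; only the
    # feasibility test changes, from the true stochastic constraint to
    # the deterministic surrogate above.
    S = []
    candidates = set(elements)
    while candidates:
        best = max(candidates, key=lambda e: f(S + [e]) - f(S))
        candidates.remove(best)
        if chance_feasible(len(S) + 1, a, delta, budget, alpha):
            S.append(best)
        else:
            # Under i.i.d. weights the surrogate depends only on |S|,
            # so once one more item is infeasible, all additions are.
            break
    return S
```

For monotone f this mirrors the classical greedy selection rule; the chance constraint effectively reduces to a cardinality bound under the i.i.d. uniform weight model.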



Citations: cited by 29 publications (26 citation statements).
References: 14 publications (31 reference statements).
“…Furthermore, we study GSEMO experimentally on the influence maximization problem in social networks and the maximum coverage problem. Our results show that GSEMO significantly outperforms the greedy approach [6] for the considered chance constrained submodular optimisation problems. Furthermore, we use the multi-objective problem formulation in a standard setting of NSGA-II.…”
Section: Introduction (mentioning)
confidence: 84%
“…The GSEMO algorithm has already been widely studied in the area of runtime analysis in the field of evolutionary computation [10] and more broadly in the area of artificial intelligence where the focus has been on submodular functions and Pareto optimisation [31,30,29,32]. We analyse this algorithm in the chance constrained submodular optimisation setting investigated in [6] in the context of greedy algorithms. Our analyses show that GSEMO is able to achieve the same approximation guarantee in expected polynomial time for uniform IID weights and the same approximation quality in expected pseudo-polynomial time for independent uniform weights having the same dispersion.…”
Section: Introduction (mentioning)
confidence: 99%
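For context, GSEMO maintains an archive of mutually non-dominated solutions and, in each step, applies standard bit mutation to a uniformly chosen archive member. A minimal, generic sketch follows; the objectives callback and iteration budget are placeholders rather than the exact setup of the cited experiments, and a natural bi-objective formulation would pair the submodular value with the surrogate weight of the chance constraint.

```python
import random

def gsemo(n, objectives, iterations=100_000):
    # Minimal GSEMO sketch: maximise a vector-valued 'objectives' function
    # over bit strings of length n, keeping only non-dominated solutions.
    def weakly_dominates(u, v):
        return all(a >= b for a, b in zip(u, v))

    def strictly_dominates(u, v):
        return weakly_dominates(u, v) and any(a > b for a, b in zip(u, v))

    start = tuple([0] * n)
    archive = {start: objectives(list(start))}
    for _ in range(iterations):
        parent = random.choice(list(archive))  # uniform parent selection
        # standard bit mutation: flip each bit independently with prob. 1/n
        child = [1 - b if random.random() < 1.0 / n else b for b in parent]
        fc = objectives(child)
        # accept the child if no archived point strictly dominates it,
        # then drop every archived point the child weakly dominates
        if not any(strictly_dominates(fv, fc) for fv in archive.values()):
            archive = {s: fv for s, fv in archive.items()
                       if not weakly_dominates(fc, fv)}
            archive[tuple(child)] = fc
    return archive
```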
“…They also carried out bi-objective optimization with respect to the profit and probability of constraint violation when the capacity is static. Doerr et al [9] have investigated adaptations of classical greedy algorithms for the optimization of submodular functions with chance constraints of knapsack type. They have shown that the adapted greedy algorithms maintain asymptotically almost the same approximation quality as in the deterministic setting when considering uniform distributions with the same dispersion for the knapsack weights.…”
Section: Related Work (mentioning)
confidence: 99%
“…Recent studies investigated the classical knapsack problem in static [31,32] and dynamic settings [1] as well as complex stockpile blending problems [33] and the optimization of submodular functions [20]. Theoretical analyses for submodular problems with chance constraints, where each stochastic component is uniformly distributed and has the same amount of uncertainty, have shown that greedy algorithms and evolutionary Pareto optimization approaches only lose a small amount in terms of approximation quality when comparing against the corresponding deterministic problems [7,20] and that evolutionary algorithms significantly outperform the greedy approaches in practice. Other recent theoretical runtime analyses of evolutionary algorithms have produced initial results for restricted classes of instances of the knapsack problem where the weights are chosen randomly [22,34].…”
Section: Introduction (mentioning)
confidence: 99%