2023
DOI: 10.1613/jair.1.14151
Distributed Bayesian: A Continuous Distributed Constraint Optimization Problem Solver

Abstract: In this paper, the novel Distributed Bayesian (D-Bay) algorithm is presented for solving multi-agent problems within the Continuous Distributed Constraint Optimization Problem (C-DCOP) framework. This framework extends the classical DCOP framework towards utility functions with continuous domains. D-Bay solves a C-DCOP by utilizing Bayesian optimization for the adaptive sampling of variables. We theoretically show that D-Bay converges to the global optimum of the C-DCOP for Lipschitz continuous utility functions…
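The abstract's core mechanism, adaptive sampling of continuous variables via Bayesian optimization, can be sketched for a single agent's variable as follows. This is a minimal illustration under assumed settings (the utility function, domain, kernel, and sample budget are hypothetical), not the authors' D-Bay algorithm, which additionally coordinates the sampling across agents.

```python
# Minimal sketch of Bayesian-optimization-driven adaptive sampling for a single
# continuous variable, in the spirit of the approach described in the abstract.
# NOT the authors' D-Bay implementation: the utility function, domain [0, 10],
# kernel, and sampling budget below are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def utility(x):
    # Hypothetical Lipschitz-continuous local utility; in a C-DCOP this would
    # aggregate constraint utilities shared with neighbouring agents.
    return -(x - 3.7) ** 2 + np.sin(5.0 * x)


def expected_improvement(cand, gp, y_best, xi=0.01):
    # Score candidates by how much they are expected to improve on the best
    # utility observed so far (maximization form of expected improvement).
    mu, sigma = gp.predict(cand.reshape(-1, 1), return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)


rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=3)           # initial samples of the variable
y = utility(X)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
for _ in range(20):                           # adaptive sampling loop
    gp.fit(X.reshape(-1, 1), y)
    cand = np.linspace(0.0, 10.0, 500)        # dense grid over the domain
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, utility(x_next))

print("best sampled x:", X[np.argmax(y)], "utility:", y.max())
```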

Cited by 5 publications (4 citation statements); references 53 publications.
“…Since continuous nonlinear optimization methods such as gradient descent require derivative computations, HCMS struggles with non-differentiable problems and cannot guarantee convergence. B-DPOP [12] extends the DPOP algorithm with Bayesian optimization and Gaussian process models to solve dynamic coordination problems in continuous domains. It converges to the optimal solution in fewer sampling iterations, but its high computational complexity requires significant computing resources and time for larger problems.…”
Section: Introduction (mentioning)
confidence: 99%
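For context on the Gaussian-process machinery mentioned above, a standard formulation (generic notation, not quoted from the cited paper) of the surrogate posterior and the expected-improvement acquisition used to pick the next sample is:

$$ \mu(x) = k(x, X)\,\big[K(X, X) + \sigma_n^{2} I\big]^{-1}\mathbf{y}, \qquad \sigma^{2}(x) = k(x, x) - k(x, X)\,\big[K(X, X) + \sigma_n^{2} I\big]^{-1} k(X, x) $$

$$ \mathrm{EI}(x) = \mathbb{E}\!\left[\max\big(0,\; f(x) - f^{+}\big)\right], \quad f^{+} = \text{best utility sampled so far.} $$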
“…Unfortunately, C-CoCoA lacks the anytime property and is not robust on complex problems. Fransman et al. proposed the Distributed Bayesian (D-Bay) algorithm, which solves C-DCOPs by using Bayesian optimization for the adaptive sampling of variables [11].…”
Section: Introduction (mentioning)
confidence: 99%
“…Therefore, Stranders et al. [32] proposed the Continuous DCOP (C-DCOP) framework, which extends DCOPs to continuous variables and constraint utilities in functional form. C-DCOP algorithms include Continuous MS (CMS) [32], Hybrid CMS (HCMS) [33], Bayesian DPOP (B-DPOP) [34], Particle Swarm Based Continuous DCOP (PCD) [35], Exact Continuous DPOP (EC-DPOP) [36] and its extensions (Approximate Continuous DPOP (AC-DPOP), Clustered AC-DPOP (CAC-DPOP), Continuous DSA (C-DSA)), Continuous Cooperative Constraint Approximation (C-CoCoA) [37], and Particle Swarm with Local Decision Based Continuous DCOP (PCD-LD) [38].…”
Section: Introduction (mentioning)
confidence: 99%
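For reference, the objective that these C-DCOP algorithms target can be written compactly as follows (a standard formulation; the notation below is generic rather than quoted from [32]):

$$ \mathbf{x}^{*} \;=\; \underset{x_{1} \in D_{1},\,\ldots,\,x_{n} \in D_{n}}{\arg\max}\; \sum_{f_{j} \in F} f_{j}\big(\mathbf{x}_{f_{j}}\big), \qquad D_{i} = [\ell_{i}, u_{i}] \subset \mathbb{R}, $$

where each $f_{j} \in F$ is a continuous utility function over the variables in its scope $\mathbf{x}_{f_{j}}$.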
“…Therefore, Continuous DCOPs (C-DCOPs) [34] were proposed to model problems with continuous variables and constraint utilities in functional form. Correspondingly, researchers proposed new C-DCOP algorithms to handle this extended formulation, including Continuous Max-Sum (CMS) [34], Hybrid CMS (HCMS) [35], Bayesian Distributed Pseudo-tree Optimization Procedure (B-DPOP) [36], and Particle Swarm Based Continuous DCOP…”
Section: Introduction (mentioning)
confidence: 99%