2021
DOI: 10.48550/arxiv.2103.12840
Preprint
A Survey of Distributed Optimization Methods for Multi-Robot Systems

Abstract: Distributed optimization consists of multiple computation nodes working together to minimize a common objective function through local computation iterations and network-constrained communication steps. In the context of robotics, distributed optimization algorithms can enable multi-robot systems to accomplish tasks in the absence of centralized coordination. We present a general framework for applying distributed optimization as a module in a robotics pipeline. We survey several classes of distributed optimiza…

Cited by 13 publications (26 citation statements)
References 99 publications
“…Additionally, if two agents i, j are mutual neighbors, i.e., j ∈ N_i and i ∈ N_j, then the constraint involving i and j will appear twice in C (i.e., as duplicate row entries). We therefore refer to (22) as the Reduced Duplicate Centralized Problem. There are two reasons we choose this problem for solving the backward pass over the original Reduced Non-Duplicate Centralized Problem (RNDCP): 1) construction of the non-duplicate constraint matrix C requires checking whether mutual neighbors exist for every agent, which does not scale for large N; 2) C can be easily constructed from the matrices A_i used in the forward pass of Merged CADMM-OSQP. To justify the usage of (22), we make the following assumption about the corresponding RNDCP: Assumption 3.…”
Section: Backward Pass
confidence: 99%
“…In our case, we additionally rely on LICQ to ensure that the Lagrange multipliers satisfying the KKT conditions are unique [45, Section 3]. The connection between the duplicate problem (22) and the corresponding RNDCP is shown in the following lemma.…”
Section: Backward Pass
confidence: 99%
“…More specifically, every local entity transmits multiple inverted blocks of the covariance matrix per MLE iteration. Recently, Xu et al reformulated the factorized GP training method using the exact consensus alternating direction method of multipliers (ADMM) [Boyd et al, 2011], which is appealing in centralized multi-agent settings [Halsted et al, 2021]. Consensus ADMM reduces the communication overhead of GP training, but requires high computational resources to solve a nested optimization problem at every ADMM-iteration.…”
Section: Introduction
confidence: 99%
“…A suitable method for deriving such algorithms is the Alternating Direction Method of Multipliers (ADMM) [9]. In particular, several ADMM-based methods, such as [13,22,44], have been proposed, yielding elegant decentralized solutions for multi-agent control. Furthermore, recent works employing ADMM in a stochastic setting [37,26] have been shown to be capable of successfully encompassing both of the desired attributes: safety under uncertainty and scalability.…”
Section: Introduction
confidence: 99%
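Several of the citing statements above refer to consensus ADMM (Boyd et al., 2011), in which each agent keeps a local copy of the shared decision variable and the copies are driven to agreement through averaging and dual updates. A minimal sketch of one common scaled form is shown below for the toy problem of minimizing the sum of quadratics 0.5*(x - a_i)^2; the function name and the quadratic objective are illustrative assumptions, not code from the surveyed papers.

```python
# Consensus ADMM sketch (scaled form) for: minimize sum_i 0.5*(x - a_i)^2.
# Each agent i holds a local copy x_i, a scaled dual u_i; z is the consensus variable.
# The quadratic objective gives the x-update in closed form (an assumption for this demo).

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    z = 0.0                 # global consensus variable
    u = [0.0] * n           # scaled dual variables, one per agent
    x = [0.0] * n           # local copies of the decision variable
    for _ in range(iters):
        # local x-update: argmin_x 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = [(a_i + rho * (z - u_i)) / (1.0 + rho) for a_i, u_i in zip(a, u)]
        # global z-update: average of x_i + u_i across agents
        z = sum(x_i + u_i for x_i, u_i in zip(x, u)) / n
        # dual update: accumulate the consensus violation
        u = [u_i + x_i - z for u_i, x_i in zip(u, x)]
    return z

# The minimizer of sum_i 0.5*(x - a_i)^2 is the mean of a, here 3.0.
print(round(consensus_admm([1.0, 2.0, 6.0]), 4))  # → 3.0
```

In a distributed deployment the z-update would be realized by network communication (e.g., averaging with neighbors), which is where the communication-overhead trade-offs discussed in the citing papers arise.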