2022
DOI: 10.1109/lra.2022.3142402
DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning

Cited by 19 publications (9 citation statements)
References 32 publications
“…This is because, depending on the available computing power, the system may be able to process more samples at once. For training on large datasets such as MNIST, researchers adjust the batch size to avoid overloading the machine and to speed up training [37]. Batch size was chosen as a factor in this study to examine whether it affects training.…”
Section: Figure 3: Impact of Dropout on a Standard Neural Network
confidence: 99%
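The trade-off described above — processing more samples per step when compute allows — comes down to how the data is partitioned into mini-batches. A minimal sketch of that iteration pattern follows; the MNIST-style shapes and the batch size of 128 are illustrative assumptions, and no real dataset or model is used.

```python
import numpy as np

# Illustrative mini-batch loop: 6,000 MNIST-sized samples (784 features),
# shuffled once per epoch and consumed in chunks of `batch_size`.
rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 784))
batch_size = 128

perm = rng.permutation(len(X))  # shuffle sample order each epoch
n_batches = 0
for start in range(0, len(X), batch_size):
    batch = X[perm[start:start + batch_size]]  # one mini-batch
    # ... forward/backward pass on `batch` would go here ...
    n_batches += 1

print(n_batches)  # → 47, i.e. ceil(6000 / 128); the last batch holds 112 samples
```

A larger `batch_size` means fewer (but heavier) steps per epoch, which is exactly the compute-versus-overload trade-off the quoted study varies.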
“…supervised learning. See [25] for an ADMM-based distributed optimization approach to solving this problem.…”
Section: E. Multi-Robot Learning
confidence: 99%
“…The authors decompose the original state into two sub-states and protect data privacy during multi-robot interaction and communication by adding perturbations, which effectively resists external attacks. The authors of [52] proposed a distributed multi-robot training framework based on the augmented Lagrangian function, in which the robots ultimately reach a consensus on their local model weights. The method avoids a single point of failure and is robust to robots dropping out mid-training.…”
Section: Multi-Robot Distributed Learning Privacy Protection
confidence: 99%
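The augmented-Lagrangian consensus scheme described in the statement above can be sketched with a toy consensus-ADMM loop. This is an illustrative assumption, not the paper's implementation: each agent fits a simple quadratic loss instead of a neural network, the ring communication graph and penalty `rho` are arbitrary choices, and all variable names are hypothetical.

```python
import numpy as np

# Toy consensus ADMM (C-ADMM): N agents jointly estimate one parameter
# vector x, each seeing only its own local loss f_i(x) = ||A_i x - b_i||^2.
# Agents keep local copies x[i], exchange them with ring neighbors, and use
# dual variables y[i] (augmented Lagrangian) to enforce consensus x_i = x_j.
rng = np.random.default_rng(0)
N, d = 4, 3
A = [rng.normal(size=(10, d)) for _ in range(N)]
b = [rng.normal(size=10) for _ in range(N)]

rho = 1.0
x = [np.zeros(d) for _ in range(N)]  # local weight copies
y = [np.zeros(d) for _ in range(N)]  # dual variables
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring graph

for _ in range(300):
    x_prev = [xi.copy() for xi in x]
    for i in range(N):
        # Dual ascent on the consensus constraints with each neighbor.
        y[i] = y[i] + rho * sum(x_prev[i] - x_prev[j] for j in neighbors[i])
        # Primal step: closed-form minimizer of the local augmented
        # Lagrangian f_i(x) + y_i^T x + rho * sum_j ||x - (x_i + x_j)/2||^2.
        targets = sum((x_prev[i] + x_prev[j]) / 2 for j in neighbors[i])
        H = 2 * A[i].T @ A[i] + 2 * rho * len(neighbors[i]) * np.eye(d)
        g = 2 * A[i].T @ b[i] - y[i] + 2 * rho * targets
        x[i] = np.linalg.solve(H, g)

# All agents should agree with the centralized least-squares solution.
x_central = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(max(np.linalg.norm(xi - x_central) for xi in x))
```

Because no agent ever shares its raw data `(A_i, b_i)` — only weight iterates — this style of scheme sidesteps a central server (no single point of failure), which is the property the citing paper highlights.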