2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
DOI: 10.1109/ipdpsw.2016.162

Testing Fine-Grained Parallelism for the ADMM on a Factor-Graph

Abstract: There is an ongoing effort to develop tools that apply distributed computational resources to tackle large problems or reduce the time to solve them. In this context, the Alternating Direction Method of Multipliers (ADMM) arises as a method that can exploit distributed resources, like the dual ascent method, while retaining the robustness and improved convergence of the augmented Lagrangian method. Traditional approaches to accelerate the ADMM using multiple cores are problem-specific and often require multi-core…
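The method the abstract describes can be made concrete with the general-form consensus ADMM of Boyd et al. (2011), the standard template for ADMM on a bipartite factor-graph: each function node keeps a local copy of the variables it touches, each variable node keeps the consensus value, and a scaled dual lives on every edge. The sketch below is a minimal illustration under those assumptions, not the paper's implementation; the toy objective, the closed-form prox formulas, and all names (edges, prox, rho) are made up for the example.

    # Minimal sketch of consensus ADMM on a factor graph (illustrative, not the paper's code)
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    rho = 1.0
    # toy problem: minimize (x0-1)^2 + (x0-x1)^2 + (x1+3)^2 over x = (x0, x1)
    # factor graph: function node -> indices of the variables it touches
    edges = {0: [0], 1: [0, 1], 2: [1]}

    def prox(a, v):
        # closed-form prox of function node a at point v (quadratics only, for brevity)
        if a == 0:                                  # f0(x0) = (x0 - 1)^2
            return np.array([(2.0 + rho * v[0]) / (2.0 + rho)])
        if a == 2:                                  # f2(x1) = (x1 + 3)^2
            return np.array([(-6.0 + rho * v[0]) / (2.0 + rho)])
        s = v[0] + v[1]                             # f1(x0, x1) = (x0 - x1)^2
        d = rho * (v[0] - v[1]) / (4.0 + rho)
        return np.array([(s + d) / 2.0, (s - d) / 2.0])

    z = np.zeros(2)                                        # variable-node consensus values
    x = {a: np.zeros(len(ix)) for a, ix in edges.items()}  # local copies (function nodes)
    u = {a: np.zeros(len(ix)) for a, ix in edges.items()}  # scaled duals (one per edge)

    with ThreadPoolExecutor() as pool:
        for _ in range(200):
            # 1) prox steps touch disjoint state, so they run concurrently
            for a, xa in zip(edges, pool.map(lambda a: prox(a, z[edges[a]] - u[a]), edges)):
                x[a] = xa
            # 2) each variable node averages (x + u) over its incident edges
            for i in range(2):
                z[i] = np.mean([x[a][ix.index(i)] + u[a][ix.index(i)]
                                for a, ix in edges.items() if i in ix])
            # 3) per-edge dual ascent step
            for a, ix in edges.items():
                u[a] += x[a] - z[ix]

    print(z)  # approaches the true minimizer, roughly [-0.333, -1.667]

Because step 1 only touches per-node state, the same loop parallelizes across threads, GPU blocks, or cluster workers; this is the kind of fine-grained structure the paper's title refers to.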

Cited by 8 publications (9 citation statements)
References 25 publications
“…Fig. 6 Convergence of the ADMM algorithm (Boyd et al. 2011) computing DSL2 on two copies of the collaboration graph as a function of time, implemented using Apache Spark (Zaharia et al. 2010) on a 40-CPU machine. […] of the communication network in a cluster than, e.g., gradient descent (França and Bento 2017a; 2017b), and it parallelizes well on shared-memory multiprocessor systems, GPUs, and computer clusters (Boyd et al. 2011; Parikh and Boyd 2014; Hao et al. 2016).…”
Section: Results
confidence: 99%
“…The Efficient Lifelong Learning Algorithm (ELLA) framework (Ruvolo & Eaton, 2013) used this same approach of a shared latent dictionary, trained online, to facilitate transfer as tasks arrive consecutively. The ELLA framework was first created for regression and classification (Ruvolo & Eaton, 2013), and later developed for policy gradient reinforcement learning (PG-ELLA) (Bou Ammar, Eaton, & Ruvolo, 2014) and collective multi-agent learning (Rostami, Kolouri, Kim, & Eaton, 2018) using distributed optimization (Hao, Oghbaee, Rostami, Derbinsky, & Bento, 2016). Other approaches that extend MTL to online settings also exist (Cavallanti, Cesa-Bianchi, & Gentile, 2010).…”
Section: Related Work
confidence: 99%
“…There are 3 + k function-nodes and variable-nodes in total. We interpret ADMM as an iterative scheme that operates on iterates that live on the edges/nodes of the factor-graph in Figure 1, similar to the approaches in [11, 12, 20]. The function-nodes have the following labels: quadratic (QP), sparse (SP), bi-linear (BI(1), …”
Section: Solution Procedures Using ADMM
confidence: 99%
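The edge-centric interpretation in this excerpt (iterates living on the edges/nodes of the factor-graph) suggests a simple data layout. The following sketch only illustrates that layout and is not the cited paper's code: the graph, the QP/SP/BI1 labels, and the stand-in prox arithmetic are hypothetical placeholders.

    # Hedged sketch of the edge-based iterate layout described above
    from concurrent.futures import ThreadPoolExecutor

    # hypothetical factor graph: function-node label -> variable-node ids
    graph = {"QP": [0, 1], "SP": [1, 2], "BI1": [0, 2]}

    # ADMM state lives on the edges: one (local copy, scaled dual) pair per edge
    state = {(a, i): {"x": 0.0, "u": 0.0} for a, nbrs in graph.items() for i in nbrs}

    def update_function_node(a):
        # each function node reads/writes only its own edges, so these updates
        # never collide and can be scheduled concurrently (fine-grained parallelism)
        for i in graph[a]:
            e = state[(a, i)]
            e["x"] = 0.5 * (e["x"] - e["u"])  # stand-in for prox_{f_a}; placeholder math

    with ThreadPoolExecutor() as pool:
        list(pool.map(update_function_node, graph))  # one parallel sweep over function nodes

Because the per-edge state is disjoint across function nodes, no locking is needed during a sweep; that property is what makes the edge-based reading of ADMM convenient for parallel implementations.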
“…To maximize parallelism, we implemented a fine-grained version of our algorithm, similar to [20]. The fact that we are using ADMM is important to achieve this.…”
Section: Multi-core Speedup
confidence: 99%