1993
DOI: 10.1007/bf02024486
A parallel interior point algorithm for linear programming on a network of transputers

Cited by 14 publications (15 citation statements)
References 27 publications
“…Consequently, cut net n_{8,7} incurs a cost of λ(n_{8,7}) − 1 = 3 − 1 = 2 to the cutsize. Since v_{8,7} … (Figure 2) … processor P_1 is responsible for accumulating the partial nonzero results obtained from the outer-product computations. P_2 will send the partial result c …”
Section: The Results of Its Local Outer-Product Computations and Send… (citation type: mentioning)
confidence: 99%
“…Hence, accumulation of c_{8,7} by P_1 will incur a communication cost of two words. Therefore, we have the equivalence between λ(n_{8,7}) − 1 and the communication volume regarding the accumulation of c_{8,7} in the summation phase. Similarly, since λ(n_{8,4}) − 1 = 1, λ(n_{11,7}) − 1 = 1, and λ(n_{11,4}) − 1 = 1 for the other cut nets, the total cutsize is five.…”
Section: C577 (citation type: mentioning)
confidence: 99%
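The two excerpts above describe the connectivity-minus-one cutsize metric: each cut net n contributes λ(n) − 1 (the number of parts its vertices span, minus one) to the communication volume. A minimal Python sketch of that metric, with illustrative net, vertex, and processor names that are not taken from the cited works:

# Connectivity-minus-one cutsize: each net contributes
# (number of distinct parts touched by its vertices) - 1,
# modelling the words sent so that one part can accumulate the result.
def cutsize(nets, part_of):
    """nets: dict mapping net id -> iterable of vertex ids.
    part_of: dict mapping vertex id -> part (processor) id."""
    total = 0
    for vertices in nets.values():
        parts = {part_of[v] for v in vertices}  # lambda(n) = len(parts)
        total += len(parts) - 1
    return total

# Illustrative assignment reproducing the connectivities quoted above:
# lambda = 3, 2, 2, 2 gives costs 2 + 1 + 1 + 1 = 5.
part_of = {"vA": "P1", "vB": "P2", "vC": "P3"}
nets = {
    "n_8_7":  ["vA", "vB", "vC"],  # spans P1, P2, P3 -> cost 2
    "n_8_4":  ["vA", "vB"],        # spans P1, P2     -> cost 1
    "n_11_7": ["vA", "vC"],        # spans P1, P3     -> cost 1
    "n_11_4": ["vB", "vC"],        # spans P2, P3     -> cost 1
}
assert cutsize(nets, part_of) == 5  # matches "the total cutsize is five"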
“…The first parallel interior-point LP solver we are aware of was developed by Bisseling et al. [4] at Shell in the early 1990s. The code was specially written for a network of transputers (distributed memory).…”
Section: LDRD Final Report On… (citation type: mentioning)
confidence: 99%
“…We restrict our attention to 1-dimensional data decompositions, that is, matrices are partitioned either by rows or by columns. There are some indications that 2-dimensional decompositions are better [4,40], and this option should be considered for future versions. We did not attempt 2-dimensional decompositions for the current code because this is more difficult to implement and has not been well tested in Epetra, our underlying parallel matrix library.…”
Section: Data Distribution (citation type: mentioning)
confidence: 99%
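For context, a minimal sketch of the 1-dimensional (row-wise) decomposition the excerpt refers to; a 2-dimensional decomposition would instead split both rows and columns over a processor grid. The function and array names are illustrative and not part of the Epetra-based code described above:

import numpy as np

def row_block_partition(A, num_procs):
    """Split matrix A into contiguous row blocks, one per processor (1-D decomposition)."""
    bounds = np.linspace(0, A.shape[0], num_procs + 1, dtype=int)
    return [((int(bounds[p]), int(bounds[p + 1])), A[bounds[p]:bounds[p + 1], :])
            for p in range(num_procs)]

A = np.arange(36, dtype=float).reshape(6, 6)
for (lo, hi), block in row_block_partition(A, 3):
    # each "processor" owns rows [lo, hi); a column-wise split would be analogous
    print(f"rows {lo}:{hi} -> block shape {block.shape}")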