2009
DOI: 10.1145/1486525.1486528
Adaptive Winograd's matrix multiplications

Abstract: Modern architectures have complex memory hierarchies and increasing parallelism (e.g., multicores). These features make achieving and maintaining good performance across rapidly changing architectures increasingly difficult. Performance has become a complex trade-off, not just a simple matter of counting the cost of simple CPU operations. We present a novel, hybrid, and adaptive recursive Strassen-Winograd's matrix multiplication (MM) that uses automatically tuned linear algebra software (ATLAS) or GotoBLAS. Our al…
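The abstract describes a hybrid scheme: recurse with Winograd's variant of Strassen's algorithm (7 block multiplications instead of 8) until the problem is small enough, then hand off to a tuned BLAS kernel. A minimal sketch of that idea is below; the cutoff value, the power-of-two size restriction, and `A @ B` standing in for ATLAS/GotoBLAS are illustrative assumptions, not the paper's actual adaptive implementation.

```python
import numpy as np

CUTOFF = 64  # illustrative recursion point; the paper tunes this adaptively

def winograd_mm(A, B):
    """Strassen-Winograd multiply: 7 recursive products, 15 block additions.
    Simplification: assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B  # base case: hand off to the tuned kernel (BLAS)
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Winograd's 8 operand pre-additions
    S1 = A21 + A22; S2 = S1 - A11; S3 = A11 - A21; S4 = A12 - S2
    S5 = B12 - B11; S6 = B22 - S5; S7 = B22 - B12; S8 = S6 - B21
    # 7 recursive multiplications instead of the classical 8
    M1 = winograd_mm(S2, S6)
    M2 = winograd_mm(A11, B11)
    M3 = winograd_mm(A12, B21)
    M4 = winograd_mm(S3, S7)
    M5 = winograd_mm(S1, S5)
    M6 = winograd_mm(S4, B22)
    M7 = winograd_mm(A22, S8)
    # 7 post-additions assembling the quadrants of C
    T1 = M1 + M2; T2 = T1 + M4
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = M2 + M3
    C[:h, h:] = T1 + M5 + M6
    C[h:, :h] = T2 - M7
    C[h:, h:] = T2 + M5
    return C
```

Each level of recursion trades one multiplication for extra additions, so the scheme only pays off above a machine-dependent matrix size — which is exactly why the recursion point must be tuned per architecture.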

Cited by 20 publications (7 citation statements)
References 43 publications
“…The implementation of these old algorithms has been extensively worked on by many authors and makes up a valuable part of the present-day MM software (cf. [14], [81], [63], [68], [69], [34], [56], [8], [18], [33], the references therein, and in [78, Chapter 1]). This work, intensified lately, is still mostly devoted to the implementation of very old algorithms, ignoring, for example, the advanced implementations of fast MM in [90] and [91] (see Section 17.1) and the significant improvement in [45] and [98]. We hope that our survey will motivate advancing the State of the Art both in the design of fast algorithms for feasible MM and in their efficient implementation.…”
Section: Numerical Implementation of Fast MM
confidence: 99%
“…In Section 1.3 we commented on numerical stability issues for fast feasible MM, and we refer the reader to [12], [73], [52], [61], [62], [28], [45], [71, Chapter 1], [13], [8], [11], and the bibliography therein for their previous and current numerical implementation. In the next section we discuss symbolic application of the latter algorithm (WRB-MM) in some detail.…”
Section: Summary of the Study of the MM Exponents After 1978
confidence: 99%
“…When the matrix's dimension is larger than the recursion point, Strassen/Winograd's MM is more appealing. D'Alberto et al. provide parallel implementations of fast MM algorithms in Strassen's, Winograd's, and 3M form to further improve the speed of matrix multiplication. These general-purpose fast MM schemes can be seen as the basis of our work, in which we reduce the dimensions of matrices and pass them to these MM schemes to accomplish matrix disguising.…”
Section: Related Work
confidence: 99%