2001
DOI: 10.1006/jpdc.2000.1733
Evaluation of Collective I/O Implementations on Parallel Architectures

Cited by 12 publications (9 citation statements)
References 20 publications
“…This approach requires a tight coupling between MPI and the underlying file system [25]. Algorithms termed as "two-phase I/O" [9], [29] enable efficient collective I/O implementations by aggregating requests and by adapting the write pattern to the file layout across multiple data servers [7]. Collective I/O avoids metadata redundancy as opposed to the file-per-process approach.…”
Section: B. Approaches to I/O Management in HPC Simulations
confidence: 99%
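The two-phase scheme described in the statements above can be illustrated with a small, single-process sketch. All names here (`two_phase_write`, the domain partitioning, the request format) are illustrative assumptions, not the ROMIO/MPI-IO API: phase one shuffles each small request to the aggregator owning that byte range, and phase two has each aggregator issue one large contiguous write.

```python
def two_phase_write(per_process_requests, num_aggregators, file_size):
    """Sketch of two-phase collective I/O (illustrative, not ROMIO).

    per_process_requests: one list per process of (offset, data) pairs.
    """
    # Partition the file into contiguous domains, one per aggregator.
    domain = (file_size + num_aggregators - 1) // num_aggregators
    buffers = [bytearray(domain) for _ in range(num_aggregators)]

    # Phase 1 (shuffle): each process sends every request fragment to the
    # aggregator that owns that byte range; modeled here as a local copy.
    for requests in per_process_requests:
        for offset, data in requests:
            for i, b in enumerate(data):
                agg = (offset + i) // domain
                buffers[agg][(offset + i) % domain] = b

    # Phase 2 (I/O): each aggregator performs one large contiguous write.
    out = bytearray(file_size)
    for agg, buf in enumerate(buffers):
        start = agg * domain
        out[start:start + len(buf)] = buf[:file_size - start]
    return bytes(out)

# Four processes with interleaved 1-byte records: many tiny noncontiguous
# writes collapse into num_aggregators contiguous writes.
reqs = [[(p + 4 * k, bytes([p])) for k in range(4)] for p in range(4)]
result = two_phase_write(reqs, num_aggregators=2, file_size=16)
```

The aggregation step is what lets the write pattern be adapted to the file layout across data servers, as the quoted passage notes; a real implementation does the phase-one exchange with MPI communication rather than local copies.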
“…Collective I/O plays a critical role in cooperating processes to generate aggregated I/O requests, instead of performing noncontiguous small I/Os independently [9,37]. As we discussed in the background section, a widely-used implementation of collective I/O is the two-phase I/O protocol [37].…”
Section: Motivation
confidence: 99%
“…5 shows the aggregator's sending time, which is the time this aggregator takes to send the data to the collective buffer. The lower dashed portion in the other bars (2–16) shows the waiting time of the non-aggregators, and the higher portion their receiving time. We can see that the data exchange time between the aggregator and non-aggregators varies, from which we can argue that the shuffle cost is determined by the maximum data exchange time (in this case, 2.3 ms).…”
Section: Analysis and Prediction of Shuffle Cost
confidence: 99%
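The claim above is that the shuffle phase finishes only when the slowest exchange does, so its cost is the maximum per-process exchange time. A minimal sketch, using made-up millisecond values rather than measurements from the cited paper:

```python
def shuffle_cost(exchange_times_ms):
    """Cost of the shuffle phase for one collective buffer (sketch).

    The aggregator cannot begin the I/O phase until the last
    non-aggregator has delivered its data, so the phase cost is
    the maximum of the individual exchange times.
    """
    return max(exchange_times_ms)

# Hypothetical per-process exchange times, in milliseconds.
times = [0.8, 1.1, 2.3, 1.7]
print(shuffle_cost(times))  # → 2.3
```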
“…[1,2,7,9,11,12,13,17]. Most work is about the implementation of the well-known, portable open-source ROMIO implementation of MPI-IO [12].…”
Section: Introduction
confidence: 99%
“…For collective I/O, data sieving is combined with techniques like two-phase I/O [11,13]. Orthogonal work has dealt with hiding file access time through active buffering and I/O threads [2,7], or optimizations for specific, low-level file systems [1,9]. To the best of our knowledge, no attention per se has been paid to the efficient handling of non-contiguous, typed MPI data buffers as are always involved in non-contiguous file access.…”
Section: Introduction
confidence: 99%
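The data sieving technique mentioned above replaces many small hole-separated accesses with one large contiguous access followed by in-memory extraction. A hedged sketch, with an in-memory byte string standing in for the file and all names chosen for illustration:

```python
def sieved_read(file_bytes, requests):
    """Data sieving sketch: one large read instead of many small ones.

    requests: list of (offset, length) pairs, sorted by offset.
    """
    lo = requests[0][0]
    hi = max(off + ln for off, ln in requests)
    # Single large contiguous access spanning all requested pieces
    # (including the unwanted "holes" between them).
    block = file_bytes[lo:hi]
    # Extract only the requested byte ranges in memory.
    return [bytes(block[off - lo: off - lo + ln]) for off, ln in requests]

data = bytes(range(20))
pieces = sieved_read(data, [(2, 2), (8, 3), (15, 1)])
```

The trade-off, as in the collective case, is extra data movement (the holes are read too) in exchange for far fewer, larger I/O operations.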