2008
DOI: 10.1088/1742-6596/125/1/012068
Enabling petascale science: data management, troubleshooting, and scalable science services

Cited by 3 publications (2 citation statements)
References 5 publications
“…Thus, approximately 1 in every 300,000,000 (65 K × 5 K) packets are accepted with corruption. It has been reported that an average of 40 errors per 1,000,000 transfers is detected on data transferred by the D0 experiment [16]. Projects such as DES require verification of checksums as part of their regular data movement process in order to detect file corruption due to software bugs or human error.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
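To make the checksum practice described in this excerpt concrete, here is a minimal sketch of post-transfer checksum verification. It is illustrative only: the SHA-256 choice, function names, and file paths are assumptions, not details taken from the cited D0 or DES workflows.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(source_checksum, destination_path):
    """Compare the checksum recorded at the source with the received copy.

    A mismatch flags corruption introduced in transit, by software bugs,
    or by human error, which per-packet TCP checksums can miss.
    """
    return sha256_of(destination_path) == source_checksum
```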
“…However, with an exception of integration of GridFTP with OPeNDAP [7] (which only provides limited flexibility and efficiency), the unit of data transfer for GridFTP is a single file. While enhancing and optimizing data transfer frameworks [2,14,16,17,18,13,4] has continued to be an active area of research, the ability to reduce data volume that needs to be transferred over the wide-area, by providing support for user-defined data subsetting at the server-side, is clearly needed.…”
Section: Introduction · Citation type: mentioning · Confidence: 99%
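The need this excerpt identifies, reducing wide-area traffic by letting clients request only the subset of data they need rather than whole files, can be sketched as follows. The endpoint URL and the byte-range mechanism are hypothetical stand-ins for a GridFTP/OPeNDAP-style service, not an API from the cited work.

```python
from urllib.request import Request, urlopen

# Hypothetical dataset endpoint standing in for a GridFTP/OPeNDAP-style server.
URL = "https://data.example.org/simulation/output.nc"

def fetch_whole_file(url):
    """Baseline behaviour described in the excerpt: the unit of transfer is the whole file."""
    with urlopen(url) as resp:
        return resp.read()

def fetch_subset(url, start, end):
    """Server-side subsetting idea: request only the bytes of interest
    (a richer protocol would subset by variable or region), so far less
    data crosses the wide-area network."""
    req = Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:
        return resp.read()
```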