2018
DOI: 10.1088/1742-6596/1085/3/032025

A federated Xrootd cache

Cited by 14 publications (20 citation statements)
References 2 publications
“…• SoCal working set: only considers access at the UCSD and Caltech sites. It is calculated using the XRootD monitoring data of the SoCal cache [5].…”
Section: Data Tier
“…The cache fetches files from the federation, storing them locally. For CMS LHC Run II, UCSD scale-tested and deployed a multi-node federated XCache (i.e., “federated” implies the cache is presented as one logical entry point to the client, but it is made up of several servers) that could serve a limited part of the analysis namespace [5]. We refer to this deployment as the SoCal cache.…”
Section: Introduction
“…It comes as no surprise that XRootD development is largely driven by the use cases coming from the WLCG project, as it is the backbone of numerous software-defined storage solutions (like EOS [2] and DPM [3]) used to accommodate the vast amount of data recorded by the LHC experiments at CERN, most notably ATLAS [4], CMS [5], LHCb [6] and ALICE [7]. One of the key components of the XRootD framework is the C++ client, which is fundamental not only to the command-line utilities like xrdcp and xrdfs, but also to XCache (an XRootD file-based caching proxy) [8][9], XrootdFS (a FUSE-based mountable file system) [10] and EOS (the storage service of choice at CERN). In addition, the XRootD client is employed to provide remote data access in many physics analysis frameworks like ROOT [11] and in data movers like FTS [12].…”
Section: Introduction
“…For this study, we collected data-access measurements from the Southern California Petabyte Scale Cache [8], where client jobs requested data files for High-Luminosity Large Hadron Collider (HL-LHC) analysis. We studied how much data is shared, how much network traffic volume is consequently saved, and how much the in-network data cache increases application performance.…”
Section: Introduction