High Performance Computing – HiPC 2007
DOI: 10.1007/978-3-540-77220-0_52
The CMS Remote Analysis Builder (CRAB)

Cited by 31 publications (16 citation statements)
References: 0 publications
“…Additionally, the data popularity service is monitoring the data access via CRAB (the CMS distributed analysis tool, see e.g. [29], [30]), identifying idle/hot data and suggesting what to clean/replicate. The two aforementioned activities are already running: despite relatively young in CMS Computing, they are growing fast and both of them will be consolidated before LHC resumes operations.…”
Section: Network Information and Analysis Throughput
confidence: 99%
“…The CMS Remote Analysis Builder (CRAB) [7] has been developed as a user-friendly interface to handle data analysis in a local or distributed environment, hiding the complexity of interactions with the Grid and CMS services. It allows the user to run over large distributed data samples with the same analysis code he has developed locally in a small scale test.…”
Section: CRAB
confidence: 99%
“…The interaction with the Grid can be either direct with a thin CRAB client or using an intermediate CRAB Analysis Server [7] (see Fig. 2).…”
Section: CRAB
confidence: 99%
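For context, a job submitted through the thin CRAB client described above was driven by a small configuration file. The sketch below follows CRAB2-era conventions (`crab.cfg` with `[CRAB]`, `[CMSSW]`, and `[USER]` sections); the dataset path and parameter-set name are hypothetical placeholders, and exact key names varied between releases, so treat this as illustrative rather than a verbatim recipe:

```ini
; crab.cfg -- illustrative sketch of a CRAB2-style job configuration
[CRAB]
jobtype   = cmssw        ; run a CMSSW analysis job
scheduler = glite        ; Grid scheduler; local schedulers were also supported

[CMSSW]
datasetpath            = /SampleDataset/SampleProcessing/RECO  ; hypothetical dataset
pset                   = analysis_cfg.py                       ; the user's local CMSSW config
total_number_of_events = -1                                    ; -1 = process the whole dataset
events_per_job         = 10000                                 ; controls the job splitting

[USER]
return_data = 1          ; retrieve output files with the job
```

The user would then create and submit the jobs from the command line (e.g. `crab -create -submit`) and poll their state with `crab -status`, without interacting with the Grid middleware directly — which is the "hiding the complexity" point the citing paper makes.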
“…These include generic university clusters, commercial cloud computing, and other computing infrastructure systems. Such systems are normally not prepared to run HEP workflows, as they lack the software stack, data connectivity, and infrastructure for common workflow management tools [6,7,5] to access them. Privileged access may not be available, and alterations in machine setup may not be permitted.…”
Section: Introduction
confidence: 99%