2020
DOI: 10.1051/epjconf/202024503016
Evolution of the CMS Global Submission Infrastructure for the HL-LHC Era

Abstract: Efforts in distributed computing of the CMS experiment at the LHC at CERN are now focusing on the functionality required to fulfill the projected needs for the HL-LHC era. Cloud and HPC resources are expected to be dominant relative to resources provided by traditional Grid sites, being also much more diverse and heterogeneous. Handling their special capabilities or limitations and maintaining global flexibility and efficiency, while also operating at scales much higher than the current capacity, are the major…

Cited by 6 publications (9 citation statements) | References 7 publications
“…The SI comprises multiple interconnected HTCondor pools [372], as shown in figure 139, redundantly deployed at CERN and FNAL in order to ensure a high-availability service. The main component of the SI, the Global Pool, obtains the majority of its resources via the submission of pilot jobs to WLCG [328] and Open Science Grid (OSG) sites.…”
Section: Central Processing and Production (mentioning, confidence: 99%)
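The excerpt above describes the Global Pool as a federation of HTCondor pools fed by pilot jobs. As a purely illustrative sketch (not taken from the cited papers), the HTCondor Python bindings could be used to inspect which sites currently contribute slots to such a pool; the collector address and the GLIDEIN_Site slot attribute below are assumed placeholders, not the actual CMS configuration.

# Illustrative sketch only: summarize which sites contribute pilot slots
# to an HTCondor pool such as the CMS Global Pool. The collector host and
# the GLIDEIN_Site attribute are assumptions; real deployments may differ.
import collections
import htcondor  # HTCondor Python bindings

collector = htcondor.Collector("collector.example.cern.ch:9620")  # hypothetical host
slot_ads = collector.query(
    htcondor.AdTypes.Startd,
    projection=["Name", "GLIDEIN_Site", "Cpus"],
)

cpus_per_site = collections.Counter()
for ad in slot_ads:
    site = ad.get("GLIDEIN_Site", "unknown")
    cpus_per_site[site] += int(ad.get("Cpus", 0))

for site, cpus in cpus_per_site.most_common():
    print(f"{site:30} {cpus:8d} CPU cores")

A query like this only reads the collector's slot ClassAds, so it can be run against a test pool without affecting matchmaking.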
“…Schematic diagram of the submission infrastructure, including multiple distributed central processing and production (WMAgent) and analysis (CRAB) job submission agents (schedds). Reproduced from [372]. CC BY 4.0.…”
Mentioning (confidence: 99%)
“…Figure 1 presents a schematic view of this infrastructure. The SI is in continuous evolution (see for example [8]), managing an ever-growing collection of computing resources and connecting new and more diverse resource types and providers (WLCG, HPC, Cloud, volunteer). The main challenge for the SI team is to drive the evolution of the infrastructure while maintaining efficient use of all available resources, maximizing data-processing throughput, and enforcing task priorities according to the CMS research program.…”
Section: The CMS Submission Infrastructure (mentioning, confidence: 99%)
“…Test jobs are submitted to the SI schedds targeting any available GPUs, with the intention of matching as many as possible. GPU allocation peaked at over 150 GPUs in parallel, with about 230 unique opportunistic GPUs accessed in total, located mainly at CERN and US Tier-2 sites (see [15]).…”
Section: Availability of GPUs in the CMS SI (mentioning, confidence: 99%)
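As a hedged illustration of the GPU test-job approach described above (not the authors' actual code), a minimal HTCondor submission requesting a single GPU could look as follows; the executable name and resource request values are assumptions.

# Illustrative sketch only: submit a GPU test job through an HTCondor
# schedd, in the spirit of the GPU availability tests quoted above.
# Executable name and resource values are hypothetical.
import htcondor

submit_description = htcondor.Submit({
    "executable": "gpu_test.sh",            # hypothetical test payload
    "request_GPUs": "1",                    # ask the negotiator for a GPU slot
    "request_cpus": "1",
    "request_memory": "2 GB",
    "output": "gpu_test.$(ClusterId).out",
    "error": "gpu_test.$(ClusterId).err",
    "log": "gpu_test.log",
})

schedd = htcondor.Schedd()                   # default (local) schedd
result = schedd.submit(submit_description)   # newer (8.9+) Python submit API
print("Submitted cluster", result.cluster())

Submitting many such probe jobs and counting how many match GPU-bearing slots is one simple way to measure opportunistic GPU availability across a pool.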
“…not involving, for example, experimental data reprocessing tasks). Technically, such restrictions would be enforced using an approach similar to that of the CMS site-customizable pilots [17]. For the current tests, only payload jobs known a priori to work (described in the next section) were allowed to match the BSC slots.…”
Section: Integration with CMS Workload Management Systems (mentioning, confidence: 99%)
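The slot-restriction mechanism mentioned above could, in principle, be expressed through HTCondor ClassAd matchmaking. The following sketch is an assumption-laden illustration rather than the actual CMS/BSC configuration: the attribute names GLIDEIN_Site and WantsHPCSlots, the "BSC" value, and the payload script are all hypothetical.

# Illustrative sketch only: restrict matchmaking so that only payloads
# which explicitly opt in can run on slots from a restricted resource.
# All attribute names and values below are assumptions.
import htcondor

submit_description = htcondor.Submit({
    "executable": "known_good_payload.sh",     # hypothetical vetted payload
    "MY.WantsHPCSlots": "True",                # custom job ClassAd attribute
    # Match only slots advertising the restricted site name.
    "requirements": '(GLIDEIN_Site == "BSC")',
    "output": "payload.$(ClusterId).out",
    "error": "payload.$(ClusterId).err",
    "log": "payload.log",
})

# The slot side would then refuse generic jobs via its START expression
# (set in the pilot/startd configuration, shown here only as a comment):
#   START = ($(START)) && (TARGET.WantsHPCSlots =?= True)

schedd = htcondor.Schedd()
schedd.submit(submit_description)

With a two-sided constraint of this kind, ordinary production or analysis jobs never match the restricted slots, while the vetted payloads match nothing else.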