Automation of user analysis workflow in CMS

Presented at the 17th International Conference on Computing in High Energy and Nuclear Physics, 21-27 March 2009, Prague, Czech Republic

15/05/2009

Abstract. CMS has a distributed computing model, based on a hierarchy of tiered regional computing centres. However, the end physicist is not interested in the details of the computing model or the complexity of the underlying infrastructure, but only in accessing and using the remote services easily and efficiently. The CMS Remote Analysis Builder (CRAB) is the official CMS tool that provides transparent access to the distributed data. We present the current development direction, which is focused on improving the interface presented to the user and on adding intelligence to CRAB so that it can automate more and more of the work done on behalf of the user. We also present the status of the deployment of the CRAB system and the lessons learnt in deploying this tool to the CMS collaboration.
Introduction

The Compact Muon Solenoid (CMS) experiment [1], which is starting operation in 2009, is one of the two general-purpose physics experiments at the European Laboratory for Particle Physics (CERN) [2]. The scientific analysis of the data taken by the detector and the simulation of Monte Carlo events require a large amount of well-organized computing resources. To allow the more than 2000 CMS collaborators, located in 40 countries around the world, to carry out their physics analyses with minimal geographical and processing constraints, the CMS experiment has adopted a worldwide distributed computing model [3] from the beginning. The CMS distributed model uses Grid middleware to manage three main levels, or tiers, of computing. Tier 0 (T0) is located at CERN, where the accelerator and the experiment themselves are, and provides 20% of the total computing resources required by CMS. The next level consists of the Tier 1 (T1) regional centres, which provide 40%, followed by the Tier 2 (T2) centres, which provide the remaining 40% of the total required computing resources of CMS. Each tier level has well-defined responsibilities, mainly differentiated by the dedication of its resources. A set of specialized tools has also been developed on top of World...
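To illustrate the kind of interface CRAB presents to the user, the sketch below shows a minimal task description in the CRAB 2 configuration syntax together with the usual command sequence; the dataset path, file names, storage hostname and parameter values are illustrative assumptions and are not taken from this paper.

    [CRAB]
    jobtype   = cmssw            # run the user's CMSSW analysis code
    scheduler = glite            # submit through the gLite Grid middleware

    [CMSSW]
    datasetpath            = /PrimaryDataset/ProcessedDataset/RECO  # published dataset to analyse (illustrative)
    pset                   = analysis_cfg.py                        # the user's CMSSW configuration file
    total_number_of_events = -1                                     # -1 means all available events
    events_per_job         = 10000                                  # splitting granularity of the task

    [USER]
    return_data     = 0                      # do not ship large outputs back with the job...
    copy_data       = 1                      # ...copy them to a remote storage element instead
    storage_element = srm-cms.example.org    # destination storage (illustrative hostname)

    # Typical command sequence (illustrative):
    #   crab -create      prepare the task and split it into jobs
    #   crab -submit      submit the jobs to the Grid
    #   crab -status      monitor their progress
    #   crab -getoutput   retrieve the output once the jobs are done

With such a configuration, the interaction with the Grid middleware, the data catalogues and the remote storage is hidden behind a few parameters and commands, which is the sense in which CRAB provides transparent access to the distributed data.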