The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes, which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2 × 10⁵ to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid argon electromagnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets, or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.
In this paper, we present the computational task-management tool Ganga, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. Ganga has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. Ganga provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. Ganga has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the Ganga architecture, give examples of current use, and demonstrate how Ganga can be used in many different areas of science.
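The abstract above describes Ganga's run-time plugin system, which lets the same task description be handed to different processing backends. The sketch below illustrates that design idea in plain Python; all names here (`register_backend`, `LocalBackend`, `GridBackend`, `submit`) are hypothetical stand-ins, not Ganga's actual API.

```python
# Minimal sketch of a plugin-registry pattern of the kind Ganga uses to
# discover task and backend types at run time. All class and function
# names are illustrative, not Ganga's real interface.

BACKENDS = {}

def register_backend(name):
    """Decorator that registers a backend class under a run-time name."""
    def wrap(cls):
        BACKENDS[name] = cls
        return cls
    return wrap

@register_backend("local")
class LocalBackend:
    def submit(self, job):
        return f"ran {job['exe']} locally"

@register_backend("grid")
class GridBackend:
    def submit(self, job):
        return f"submitted {job['exe']} to the Grid"

def submit(job, backend="local"):
    # The same job description goes to whichever backend is chosen, so
    # moving an analysis from a local system to the Grid is a one-word
    # change in user code.
    return BACKENDS[backend]().submit(job)

job = {"exe": "analysis.py"}
print(submit(job, "local"))
print(submit(job, "grid"))
```

Because backends register themselves when their module is imported, new resource types can be integrated without touching the core submission code, which is the "specific knowledge provided at run time" idea in the abstract.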
The first collider search for dark matter arising from a strongly coupled hidden sector is presented and uses a data sample corresponding to 138 fb⁻¹, collected with the CMS detector at the CERN LHC, at √s = 13 TeV. The hidden sector is hypothesized to couple to the standard model (SM) via a heavy leptophobic Z′ mediator produced as a resonance in proton-proton collisions. The mediator decay results in two “semivisible” jets, containing both visible matter and invisible dark matter. The final state therefore includes moderate missing energy aligned with one of the jets, a signature ignored by most dark matter searches. No structure in the dijet transverse mass spectra compatible with the signal is observed. Assuming the Z′ boson has a universal coupling of 0.25 to the SM quarks, an inclusive search, relevant to any model that exhibits this kinematic behavior, excludes mediator masses of 1.5–4.0 TeV at 95% confidence level, depending on the other signal model parameters. To enhance the sensitivity of the search for this particular class of hidden sector models, a boosted decision tree (BDT) is trained using jet substructure variables to distinguish between semivisible jets and SM jets from background processes. When the BDT is employed to identify each jet in the dijet system as semivisible, the mediator mass exclusion increases to 5.1 TeV, for wider ranges of the other signal model parameters. These limits exclude a wide range of strongly coupled hidden sector models for the first time.
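The BDT jet tagger described above assigns each jet a score from substructure variables. As a toy illustration of the technique, the sketch below trains a small AdaBoost ensemble of decision stumps on one synthetic substructure-like variable; the data, the variable, and the boosting variant are stand-ins, not the analysis's actual training setup or inputs.

```python
# Toy boosted-decision-tree tagger: AdaBoost over 1D decision stumps.
# The "substructure variable" here is synthetic Gaussian data, with
# semivisible-jet-like (signal) values drawn from a shifted distribution.
import math
import random

random.seed(1)
n = 300
bkg = [random.gauss(0.10, 0.05) for _ in range(n)]  # SM-jet-like
sig = [random.gauss(0.20, 0.05) for _ in range(n)]  # semivisible-like
X = bkg + sig
y = [-1] * n + [1] * n  # -1 = SM jet, +1 = semivisible jet

def stump(x, t, s):
    """One-node decision tree: predict s above threshold t, else -s."""
    return s if x > t else -s

def train(X, y, rounds=10):
    w = [1.0 / len(X)] * len(X)
    ensemble = []
    cands = [i / 100 for i in range(40)]  # candidate thresholds
    for _ in range(rounds):
        # Pick the stump with the lowest weighted misclassification rate.
        err, t, s = min(
            (sum(wi for wi, xi, yi in zip(w, X, y)
                 if stump(xi, tt, ss) != yi), tt, ss)
            for tt in cands for ss in (1, -1)
        )
        err = min(max(err, 1e-9), 1 - 1e-9)
        a = 0.5 * math.log((1 - err) / err)
        ensemble.append((a, t, s))
        # Reweight: misclassified jets get larger weight next round.
        w = [wi * math.exp(-a * yi * stump(xi, t, s))
             for wi, xi, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def score(ensemble, x):
    """Per-jet score; a cut at 0 tags the jet as semivisible."""
    return sum(a * stump(x, t, s) for a, t, s in ensemble)

bdt = train(X, y)
acc = sum((score(bdt, xi) > 0) == (yi > 0)
          for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

In the real analysis the inputs are multiple jet-substructure variables and the classifier is a gradient-boosted tree, but the structure (per-jet score, threshold to tag each jet in the dijet system) is the same.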
As part of its HL-LHC upgrade program, the CMS Collaboration is developing a High Granularity Calorimeter (CE) to replace the existing endcap calorimeters. The CE is a sampling calorimeter with unprecedented transverse and longitudinal readout for both electromagnetic (CE-E) and hadronic (CE-H) compartments. The calorimeter will be built with ∼30,000 hexagonal silicon modules. Prototype modules have been constructed with 6-inch hexagonal silicon sensors with cell areas of 1.1 cm², and the SKIROC2-CMS readout ASIC. Beam tests of different sampling configurations were conducted with the prototype modules at DESY and CERN in 2017 and 2018. This paper describes the construction and commissioning of the CE calorimeter prototype, the silicon modules used in the construction, their basic performance, and the methods used for their calibration.
The GridPP Collaboration is building a UK computing Grid for particle physics, as part of the international effort towards computing for the Large Hadron Collider. The project, funded by the UK Particle Physics and Astronomy Research Council (PPARC), began in September 2001 and completed its first phase 3 years later. GridPP is a collaboration of approximately 100 researchers in 19 UK university particle physics groups, the Council for the Central Laboratory of the Research Councils and CERN, reflecting the strategic importance of the project. In collaboration with other European and US efforts, the first phase of the project demonstrated the feasibility of developing, deploying and operating a Grid-based computing system to meet the UK needs of the Large Hadron Collider experiments. This note describes the work undertaken to achieve this goal. Supplementary documentation is available from stacks.iop.org/JPhysG/32/N1; references to sections S1, S2.1, etc are to sections within this online supplement.
Concomitant with this increase will be an increase in the number of interactions in each bunch crossing and a significant increase in the total ionising dose and fluence. One part of this upgrade is the replacement of the current endcap calorimeters with a high granularity sampling calorimeter equipped with silicon sensors, designed to manage the high collision rates [2]. As part of the development of this calorimeter, a series of beam tests have been conducted with different sampling configurations using prototype segmented silicon detectors. In the most recent of these tests, conducted in late 2018 at the CERN SPS, the performance of a prototype calorimeter equipped with ≈12,000 channels of silicon sensors was studied with beams of high-energy electrons, pions and muons. This paper describes the custom, scalable data acquisition system built with readily available FPGA mezzanines and low-cost Raspberry Pi computers.
The Compact Muon Solenoid collaboration is designing a new high-granularity endcap calorimeter, HGCAL, to be installed later this decade. As part of this development work, a prototype system was built, with an electromagnetic section consisting of 14 double-sided structures, providing 28 sampling layers. Each sampling layer has a hexagonal module, in which a multipad large-area silicon sensor is glued between an electronics circuit board and a metal baseplate. The sensor pads of approximately 1.1 cm² are wire-bonded to the circuit board and are read out by custom integrated circuits. The prototype was extensively tested with beams at CERN's Super Proton Synchrotron in 2018. Based on the data collected with beams of positrons, with energies ranging from 20 to 300 GeV, measurements of the energy resolution and linearity, the position and angular resolutions, and the shower shapes are presented and compared to a detailed Geant4 simulation.
Measurements are presented of the reduction of signal output due to radiation damage for two types of plastic scintillator tiles used in the hadron endcap (HE) calorimeter of the CMS detector. The tiles were exposed to particles produced in proton-proton (pp) collisions at the CERN LHC with a center-of-mass energy of 13 TeV, corresponding to a delivered luminosity of 50 fb⁻¹. The measurements are based on readout channels of the HE that were instrumented with silicon photomultipliers, and are derived using data from several sources: a laser calibration system, a movable radioactive source, as well as hadrons and muons produced in pp collisions. Results from several irradiation campaigns using ⁶⁰Co sources are also discussed. The damage is presented as a function of dose rate. Within the range of these measurements, for a fixed dose the damage increases with decreasing dose rate.
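The dose-rate trend stated above can be made concrete with a commonly used parameterization of scintillator light loss, in which the remaining signal falls exponentially with dose and the dose constant grows with dose rate as a power law. The sketch below uses this generic form with made-up placeholder constants; it is not the parameterization or the fitted values from this paper.

```python
# Illustrative sketch of a common scintillator-damage parameterization:
#   L/L0 = exp(-d / D(R)),   D(R) = c * R**alpha
# where d is the absorbed dose, R the dose rate, and D(R) the dose
# constant. The constants c and alpha here are hypothetical placeholders.
import math

def dose_constant(R, c=1.0, alpha=0.5):
    """Hypothetical power-law dose constant D(R) (same units as d)."""
    return c * R ** alpha

def relative_signal(d, R):
    """Fraction of light output remaining after dose d at dose rate R."""
    return math.exp(-d / dose_constant(R))

# For a fixed dose, a lower dose rate gives a smaller dose constant and
# hence more damage, matching the trend reported in the measurements.
low_rate = relative_signal(0.5, R=0.01)
high_rate = relative_signal(0.5, R=1.0)
print(low_rate < high_rate)  # True: more signal loss at low dose rate
```

With alpha > 0, this form reproduces the qualitative observation in the abstract: delivering the same dose more slowly degrades the signal more.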