Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a "digital image" of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with pT in the 1100-1200 GeV range, a 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms.
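The jet-image idea above can be illustrated with a minimal sketch: a toy numpy single-layer network trained to separate synthetic "calorimeter images" in which top-like jets deposit energy in three hot cells and QCD-like jets in one. The data, image size, and network here are purely illustrative assumptions, not the paper's actual training sample or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_jet_images(n, n_subjets, size=8):
    """Synthetic 'HCAL images': each jet lights up a few random cells.
    Top-like jets get 3 energy clusters, QCD-like jets get 1 (toy model)."""
    imgs = np.zeros((n, size * size))
    for i in range(n):
        cells = rng.integers(0, size * size, n_subjets)
        imgs[i, cells] += rng.uniform(0.5, 1.0, n_subjets)
    return imgs

# Labeled training set: 1 = top-like (3 clusters), 0 = QCD-like (1 cluster)
X = np.vstack([toy_jet_images(500, 3), toy_jet_images(500, 1)])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Single-layer network (logistic regression on pixels), gradient descent
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
    grad_w = X.T @ (p - y) / len(y)         # cross-entropy gradient
    grad_b = np.mean(p - y)
    w -= 1.0 * grad_w
    b -= 1.0 * grad_b

# Evaluate on an independent test sample, as in the paper's setup
X_test = np.vstack([toy_jet_images(200, 3), toy_jet_images(200, 1)])
y_test = np.concatenate([np.ones(200), np.zeros(200)])
pred = 1.0 / (1.0 + np.exp(-(X_test @ w + b))) > 0.5
accuracy = np.mean(pred == y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Even this toy discriminator separates the two classes well, because the pixelized images encode the different energy-deposit patterns; the paper's ANN applies the same principle to realistic simulated jets.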
Over the past decade, a large number of jet substructure observables have been proposed in the literature, and explored at the LHC experiments. Such observables attempt to utilize the internal structure of jets in order to distinguish those initiated by quarks, gluons, or by boosted heavy objects, such as top quarks and W bosons. This report, originating from and motivated by the BOOST2013 workshop, presents original particle-level studies that aim to improve our understanding of the relationships between jet substructure observables, their complementarity, and their dependence on the underlying jet properties, particularly the jet radius and jet transverse momentum. This is explored in the context of quark/gluon discrimination, boosted W boson tagging and boosted top quark tagging.
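As one widely used example of the substructure observables surveyed in such studies, N-subjettiness (due to Thaler and Van Tilburg) quantifies how consistent a jet is with having N subjets, given candidate subjet axes:

```latex
\tau_N \;=\; \frac{1}{d_0} \sum_{k \in \mathrm{jet}} p_{T,k}\,
\min\!\left(\Delta R_{1,k},\, \Delta R_{2,k},\, \ldots,\, \Delta R_{N,k}\right),
\qquad
d_0 \;=\; \sum_{k \in \mathrm{jet}} p_{T,k}\, R_0,
```

where the sum runs over jet constituents, ΔR_{i,k} is the distance of constituent k to subjet axis i, and R_0 is the jet radius. Ratios such as τ_2/τ_1 (for W tagging) and τ_3/τ_2 (for top tagging) are typical discriminants, and their dependence on the jet radius and pT is exactly the kind of question such reports address.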
We present MadDM v.3.0, a numerical tool to compute particle dark matter observables in generic new physics models. The new version features a comprehensive and automated framework for dark matter searches at the interface of collider physics, astrophysics and cosmology and is deployed as a plugin of the MadGraph5_aMC@NLO platform, inheriting most of its features. With respect to the previous version, MadDM v.3.0 can now provide predictions for indirect dark matter signatures in astrophysical environments, such as the annihilation cross section at the present time and the energy spectra of prompt photons, cosmic rays and neutrinos resulting from dark matter annihilation. MadDM indirect detection features support both 2 → 2 and 2 → n dark matter annihilation processes. In addition, the ability to compare theoretical predictions with experimental constraints is extended by including the Fermi-LAT likelihood for gamma-ray constraints from dwarf spheroidal galaxies and by providing an interface with the nested sampling algorithm PyMultiNest to perform high dimensional parameter scans efficiently. We validate the code for a wide set of dark matter models by comparing the results from MadDM v.3.0 to existing tools and results in the literature.
Weakly-coupled TeV-scale particles may mediate the interactions between normal matter and dark matter. If so, the LHC would produce dark matter through these mediators, leading to the familiar "mono-X" search signatures, but the mediators would also produce signals without missing momentum via the same vertices involved in their production. This document from the LHC Dark Matter Working Group suggests how to compare searches for these two types of signals in the case of vector and axial-vector mediators, based on a workshop that took place on September 19/20, 2016 and subsequent discussions. These suggestions include how to extend the spin-1 mediated simplified models already in widespread use to include lepton couplings. This document also provides analytic calculations of the relic density in the simplified models and reports an issue that arose when ATLAS and CMS first began to use preliminary numerical calculations of the dark matter relic density in these models.
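The analytic relic-density calculations referred to above build on the standard thermal freeze-out result, in which the present-day abundance is, to a good approximation, inversely proportional to the thermally averaged annihilation cross section:

```latex
\Omega_\chi h^2 \;\approx\; \frac{3 \times 10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle \sigma v \rangle}.
```

Matching the observed dark matter abundance, Ω h² ≈ 0.12, then singles out the canonical value ⟨σv⟩ ≈ 3 × 10⁻²⁶ cm³ s⁻¹, up to model-dependent corrections from resonances, thresholds, and coannihilation, which is why the relic density provides such a sharp constraint on the simplified-model parameter space.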
Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows one to include higher-order QCD corrections in realistic simulations and to study their effect systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties.
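The impact of higher-order corrections is conventionally summarized by a K-factor, K = σ_NLO/σ_LO, alongside the scale-variation envelope that quantifies the theoretical uncertainty. A trivial numeric sketch, with purely illustrative placeholder cross sections (not results from the paper):

```python
# Purely illustrative numbers -- NOT results from the paper.
sigma_lo = 1.00         # LO cross section [pb]
sigma_lo_scale = 0.30   # hypothetical scale-variation envelope at LO [pb]
sigma_nlo = 1.35        # NLO cross section [pb]
sigma_nlo_scale = 0.10  # hypothetical, reduced envelope at NLO [pb]

# K-factor: ratio of NLO to LO total rates
k_factor = sigma_nlo / sigma_lo

# Relative scale uncertainties, typically reduced at NLO
rel_unc_lo = sigma_lo_scale / sigma_lo
rel_unc_nlo = sigma_nlo_scale / sigma_nlo

print(f"K-factor = {k_factor:.2f}")
print(f"relative scale uncertainty: LO {rel_unc_lo:.0%} -> NLO {rel_unc_nlo:.0%}")
```

In differential distributions the K-factor is generally not flat, which is why the paper stresses shape effects and not just total rates.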
We present MadDM v.1.0, a numerical tool to compute dark matter relic abundance in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. We validate the code in a wide range of dark matter models by comparing the relic density results from MadDM to the existing tools and literature.
We propose a simplified model of dark matter with a scalar mediator to accommodate the di-photon excess recently observed by the ATLAS and CMS collaborations. Decays of the resonance into dark matter can easily account for a relatively large width of the scalar resonance, while the magnitude of the total width combined with the constraint on the dark matter relic density leads to sharp predictions on the parameters of the dark sector. Under the assumption of a rather large width, the model predicts a signal consistent with a ∼300 GeV dark matter particle and a ∼750 GeV scalar mediator in channels with large missing energy. This prediction is not yet severely bounded by LHC Run I searches and will be accessible at the LHC Run II in the jet plus missing energy channel with more luminosity. Our analysis also considers astrophysical constraints, pointing out that future direct detection experiments will be sensitive to this scenario.