A measurement of the Higgs boson mass is presented based on the combined data samples of the ATLAS and CMS experiments at the CERN LHC in the H → γγ and H → ZZ → 4ℓ decay channels. The results are obtained from a simultaneous fit to the reconstructed invariant mass peaks in the two channels and for the two experiments. The measured masses from the individual channels and the two experiments are found to be consistent among themselves. The combined measured mass of the Higgs boson is m_H = 125.09 ± 0.21 (stat) ± 0.11 (syst) GeV.

DOI: 10.1103/PhysRevLett.114.191803
PACS numbers: 14.80.Bn, 13.85.Qk

The study of the mechanism of electroweak symmetry breaking is one of the principal goals of the CERN LHC program. In the standard model (SM), this symmetry breaking is achieved through the introduction of a complex doublet scalar field, leading to the prediction of the Higgs boson H [1-6], whose mass m_H is, however, not predicted by the theory. In 2012, the ATLAS and CMS Collaborations at the LHC announced the discovery of a particle with Higgs-boson-like properties and a mass of about 125 GeV [7-9]. The discovery was based primarily on mass peaks observed in the γγ and ZZ → ℓ⁺ℓ⁻ℓ′⁺ℓ′⁻ (denoted H → ZZ → 4ℓ for simplicity) decay channels, where one or both of the Z bosons can be off shell and where ℓ and ℓ′ denote an electron or muon. With m_H known, all properties of the SM Higgs boson, such as its production cross section and partial decay widths, can be predicted. Increasingly precise measurements [10-13] have established that all observed properties of the new particle, including its spin, parity, and coupling strengths to SM particles, are consistent within the uncertainties with those expected for the SM Higgs boson.

The ATLAS and CMS Collaborations have independently measured m_H using the samples of proton-proton collision data collected in 2011 and 2012, commonly referred to as LHC Run 1. The analyzed samples correspond to approximately 5 fb⁻¹ of integrated luminosity at √s = 7 TeV and 20 fb⁻¹ at √s = 8 TeV for each experiment. Combined results in the context of the separate experiments, as well as those in the individual channels, are presented in Refs. [12,14-16].

This Letter describes a combination of the Run 1 data from the two experiments, leading to improved precision for m_H. Besides its intrinsic importance as a fundamental parameter, improved knowledge of m_H yields more precise predictions for the other Higgs boson properties. Furthermore, the combined mass measurement provides a first step towards combinations of other quantities, such as the couplings. In the SM, m_H is related to the values of the masses of the W boson and top quark through loop-induced effects. Taking into account other measured SM quantities, the comparison of the measurements of the Higgs boson, W boson, and top quark masses can be used to directly test the consistency of the SM [17] and thus to search for evidence of physics beyond the SM.

The combination is performed usin...
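To make the nature of such a combination concrete, here is a minimal numerical sketch, assuming independent Gaussian channel measurements combined by inverse-variance weighting. The per-channel values and uncertainties below are invented placeholders, not the ATLAS and CMS inputs, and the result quoted above comes from a full simultaneous profile-likelihood fit that accounts for correlated systematic uncertainties, which this sketch ignores.

```python
import numpy as np

# Hypothetical per-channel measurements (GeV): (value, total uncertainty).
# These numbers are placeholders for illustration, NOT the ATLAS/CMS inputs.
channels = {
    "ATLAS H->gamma gamma": (126.0, 0.5),
    "ATLAS H->ZZ->4l":      (124.5, 0.5),
    "CMS   H->gamma gamma": (124.7, 0.3),
    "CMS   H->ZZ->4l":      (125.6, 0.4),
}

values = np.array([v for v, _ in channels.values()])
sigmas = np.array([s for _, s in channels.values()])

# Inverse-variance weighting: the maximum-likelihood combination for
# independent Gaussian measurements (the real analysis is a simultaneous
# fit with correlated systematics, which this sketch does not model).
w = 1.0 / sigmas**2
m_comb = np.sum(w * values) / np.sum(w)
sigma_comb = 1.0 / np.sqrt(np.sum(w))

print(f"combined m_H = {m_comb:.2f} +/- {sigma_comb:.2f} GeV")
```

The sketch reproduces the qualitative behavior of the combination: the combined uncertainty is smaller than that of any single channel.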
Analyzing massive complex networks yields promising insights about our everyday lives. Building scalable algorithms to do so is a challenging task that requires careful analysis and extensive evaluation. However, engineering such algorithms is often hindered by the scarcity of publicly available datasets.

Network generators serve as a tool to alleviate this problem by providing synthetic instances with controllable parameters. However, many network generators fail to provide instances on a massive scale due to their sequential nature or resource constraints. Additionally, truly scalable network generators are few and often limited in their realism.

In this work, we present novel generators for a variety of network models that are frequently used as benchmarks. By making use of pseudorandomization and divide-and-conquer schemes, our generators follow a communication-free paradigm. The resulting generators are thus embarrassingly parallel and have near-optimal scaling behavior. This allows us to generate instances of up to 2⁴³ vertices and 2⁴⁷ edges in less than 22 minutes on 32 768 cores. Therefore, our generators allow new graph families to be used on an unprecedented scale.
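As an illustration of the communication-free paradigm, here is a minimal single-machine sketch of a G(n, p) (Erdős-Rényi) generator in which each chunk of the vertex-pair space is produced from a PRNG seeded deterministically by (global_seed, chunk_id). The function names and the chunking scheme are illustrative choices, not the paper's implementation, which covers several network models and runs on tens of thousands of cores.

```python
import random


def pair_from_index(k, n):
    """Map a linear index k in [0, n*(n-1)/2) to the k-th unordered vertex
    pair (i, j) with i < j; row i holds the pairs (i, i+1) ... (i, n-1)."""
    i, row_len = 0, n - 1
    while k >= row_len:
        k -= row_len
        i += 1
        row_len -= 1
    return i, i + 1 + k


def chunk_edges(n, p, global_seed, chunk_id, num_chunks):
    """Generate the G(n, p) edges of one chunk of the vertex-pair space.

    Pseudorandomization: the PRNG is seeded deterministically from
    (global_seed, chunk_id), so any processing element can (re)compute
    this chunk independently, without any communication."""
    rng = random.Random(global_seed ^ (chunk_id * 0x9E3779B97F4A7C15))
    total_pairs = n * (n - 1) // 2
    lo = chunk_id * total_pairs // num_chunks
    hi = (chunk_id + 1) * total_pairs // num_chunks
    return [pair_from_index(k, n) for k in range(lo, hi) if rng.random() < p]


# Two "processing elements" produce disjoint chunks of the same graph,
# each using only its own id and the shared seed.
edges = (chunk_edges(1000, 0.01, global_seed=42, chunk_id=0, num_chunks=2)
         + chunk_edges(1000, 0.01, global_seed=42, chunk_id=1, num_chunks=2))
print(len(edges))
```

Because every chunk's randomness is reproducible from its id alone, any processing element can generate, or regenerate, any chunk without exchanging data, which is what makes the approach embarrassingly parallel.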
A search for the production of a heavy B quark, having electric charge −1/3 and vector couplings to W, Z, and H bosons, is carried out using proton-proton collision data recorded at the CERN LHC by the CMS experiment, corresponding to an integrated luminosity of 19.7 fb⁻¹. The B quark is assumed to be pair produced and to decay in one of three ways: to tW, bZ, or bH. The search is carried out in final states with one, two, and more than two charged leptons, as well as in fully hadronic final states. Each of the channels in the exclusive final-state topologies is designed to be sensitive to specific combinations of the B quark-antiquark pair decays. The observed event yields are found to be consistent with the standard model expectations in all the final states studied. A statistical combination of these results is performed, and upper limits are set on the cross section of the strongly produced B quark-antiquark pairs as a function of the B quark mass. Lower limits on the B quark mass between 740 and 900 GeV are set at a 95% confidence level, depending on the values of the branching fractions of the B quark to tW, bZ, and bH. Overall, these limits are the most stringent to date.
Computing the Delaunay triangulation (DT) of a given point set in ℝ^D is one of the fundamental operations in computational geometry. In this paper we present a novel divide-and-conquer (D&C) algorithm that lends itself equally well to shared and distributed memory parallelism. While previous D&C algorithms generally suffer from a complex (often sequential) merge or divide step, we reduce the merging of two partial triangulations to re-triangulating a small subset of their vertices using the same parallel algorithm and combining the three triangulations via parallel hash table lookups. In experiments we achieve a reasonable speedup on shared memory machines and compare favorably to CGAL's three-dimensional parallel DT implementation on some inputs. In the distributed memory setting we show that our approach scales to 2048 processing elements, which allows us to compute 3-D DTs for inputs with billions of points.
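To convey the structure of the merge step, here is a much-simplified 2-D sketch, assuming SciPy's Delaunay for the base cases: each half is triangulated recursively, simplices whose circumcircle crosses the split line form the border, the border vertices are re-triangulated, and the three triangulations are combined via hash-set lookups. All names and thresholds are illustrative, and the sketch omits the safeguards the paper uses to guarantee a correct global DT.

```python
import numpy as np
from scipy.spatial import Delaunay  # used only for base-case triangulations

BASE_CASE = 64  # illustrative threshold for direct triangulation


def triangulate(points, idx):
    """Triangulate the points with the given global indices; return simplices
    as frozensets of global indices so partial results can be combined with
    hash-based set operations."""
    local = Delaunay(points[idx])
    return {frozenset(int(idx[v]) for v in s) for s in local.simplices}


def circumcircle(p0, p1, p2):
    """Center and radius of a triangle's circumcircle (assumes the points are
    not collinear, which holds almost surely for random inputs)."""
    ax, ay = p0
    bx, by = p1
    cx, cy = p2
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(center - np.asarray(p0)))


def crosses_split(points, simplex, split, axis):
    """A simplex can be invalidated by points on the other side exactly when
    its circumcircle reaches across the split line."""
    center, radius = circumcircle(*(points[v] for v in simplex))
    return abs(center[axis] - split) < radius


def dc_delaunay(points, idx=None, axis=0):
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) <= BASE_CASE:
        return triangulate(points, idx)

    # Divide: split at the median coordinate along the current axis.
    order = idx[np.argsort(points[idx, axis])]
    half = len(order) // 2
    split = points[order[half], axis]
    next_axis = (axis + 1) % points.shape[1]
    tri_lo = dc_delaunay(points, order[:half], next_axis)
    tri_hi = dc_delaunay(points, order[half:], next_axis)

    # Border: simplices whose circumcircle crosses the split line; their
    # vertices are re-triangulated with the same algorithm's base case.
    border_lo = {s for s in tri_lo if crosses_split(points, tuple(s), split, axis)}
    border_hi = {s for s in tri_hi if crosses_split(points, tuple(s), split, axis)}
    border_vertices = sorted({v for s in border_lo | border_hi for v in s})
    tri_border = (triangulate(points, np.array(border_vertices))
                  if len(border_vertices) >= 3 else set())

    # Combine the three triangulations via hash-set lookups.
    return (tri_lo - border_lo) | (tri_hi - border_hi) | tri_border


# Usage: triangulate 2000 random points in the unit square.
pts = np.random.default_rng(0).random((2000, 2))
print(len(dc_delaunay(pts)))
```

Representing simplices as hashable sets of global vertex indices is what lets the merge reduce to cheap set membership tests, mirroring, at sketch level, the parallel hash table lookups described in the abstract.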