In this paper, we use density functional theory (DFT) calculations on highly parallel computing resources to study size-dependent changes in the chemical and electronic properties of platinum (Pt) for a series of fixed, freestanding clusters ranging from 13 to 1,415 atoms, or 0.7-3.5 nm in diameter. We find that the surface catalytic properties of the clusters converge to the single-crystal limit for clusters with as few as 147 atoms (1.6 nm). Recently published results for gold (Au) clusters showed an analogous convergence with size; however, that convergence occurred at larger sizes because the Au d-states do not contribute to the density of states around the Fermi level, and the observed level fluctuations were not significantly damped until the clusters reached ca. 560 atoms (2.7 nm) in size.
We address the fundamental question of how large a metallic nanoparticle must be before its surface chemical properties can be considered those of a solid rather than those of a large molecule. Calculations of adsorption energies for carbon monoxide and oxygen on a series of gold nanoparticles ranging from 13 to 1,415 atoms, or 0.8-3.7 nm, have been made possible by exploiting massively parallel computing on up to 32,768 cores of the Blue Gene/P computer at Argonne National Laboratory. We show that bulk surface properties are obtained for clusters larger than ca. 560 atoms (2.7 nm). Below that critical size, finite-size effects are observed, and we show them to be related to variations in the local atomic structure, augmented by quantum size effects for the smallest clusters.
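To make the central quantity of these two abstracts concrete, here is a minimal sketch of an adsorption-energy calculation, E_ads = E(cluster+CO) − E(cluster) − E(CO), on icosahedral clusters using ASE. This is not the papers' method: ASE's toy EMT potential stands in for the full DFT calculations, and the on-top adsorption site, the 2 Å starting height, and the cluster sizes are illustrative assumptions.

```python
# Sketch: size dependence of the CO adsorption energy,
#   E_ads = E(cluster+CO) - E(cluster) - E(CO),
# on icosahedral Au clusters. ASE's toy EMT potential stands in for the
# DFT used in the papers; site and geometry choices are illustrative.
from ase.build import molecule
from ase.cluster import Icosahedron
from ase.calculators.emt import EMT
from ase.optimize import BFGS

def relaxed_energy(atoms):
    atoms.calc = EMT()
    BFGS(atoms, logfile=None).run(fmax=0.05)
    return atoms.get_potential_energy()

e_co = relaxed_energy(molecule('CO'))

for shells in (2, 3, 4):                     # 13-, 55-, 147-atom icosahedra
    cluster = Icosahedron('Au', noshells=shells)
    e_clean = relaxed_energy(cluster.copy())

    system = cluster.copy()
    top = system.positions[:, 2].argmax()    # a vertex atom as the site
    co = molecule('CO')
    c_idx = [a.index for a in co if a.symbol == 'C'][0]
    co.translate(system.positions[top] + (0.0, 0.0, 2.0) - co.positions[c_idx])
    system += co                             # place CO 2 A above the vertex
    e_ads = relaxed_energy(system) - e_clean - e_co
    print(f'{len(cluster):4d} atoms: E_ads = {e_ads:+.2f} eV')
```

Convergence of E_ads with cluster size toward its extended-surface value is the operational test of "bulk-like" surface chemistry discussed in both abstracts.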
Profiles of dark matter-dominated halos at the group and cluster scales play an important role in modern cosmology. Using results from two very large cosmological N-body simulations, which increase the available volume at their mass resolution by roughly two orders of magnitude, we robustly determine the halo concentration-mass (c-M) relation over a wide range of masses, employing multiple methods of concentration measurement. We characterize individual halo profiles, as well as stacked profiles, relevant for galaxy-galaxy lensing and next-generation cluster surveys; the redshift range covered is 0 ≤ z ≤ 4, with a minimum halo mass of M_200c ∼ 2×10^11 M_⊙. Despite the complexity of a proper description of a halo (environmental effects, merger history, nonsphericity, relaxation state), when the mass is scaled by the nonlinear mass scale M_*(z), we find that a simple non-power-law form for the c-M/M_* relation provides an excellent description of our simulation results across eight decades in M/M_* and for 0 ≤ z ≤ 4. Over the mass range covered, the c-M relation has two asymptotic forms: an approximate power law below a mass threshold M/M_* ∼ 500-1000, transitioning to a constant value c_0 ∼ 3 at higher masses. The relaxed halo fraction decreases with mass, transitioning to a constant value of ∼ 0.5 above the same mass threshold. We compare Navarro-Frenk-White (NFW) and Einasto fits to stacked profiles in narrow mass bins at different redshifts; as expected, the Einasto profile provides a better description of the simulation results. At cluster scales at low redshift, however, both NFW and Einasto profiles are in very good agreement with the simulation results, consistent with recent weak lensing observations.
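For reference, the two profile forms compared in this abstract, with the concentration entering as c = R_200c/r_s; the parameter values in this sketch are illustrative, not the paper's fits.

```python
# The two halo density-profile forms compared above. Parameter values are
# illustrative; in the paper they are fit to stacked simulation profiles.
import numpy as np

def nfw(r, rho_s, r_s):
    """Navarro-Frenk-White: rho(r) = rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def einasto(r, rho_s, r_s, alpha):
    """Einasto: rho(r) = rho_s exp(-(2/alpha) [(r/r_s)^alpha - 1])."""
    x = r / r_s
    return rho_s * np.exp(-(2.0 / alpha) * (x ** alpha - 1.0))

# Concentration ties the scale radius to the halo radius, c = R_200c / r_s,
# so c_0 ~ 3 at cluster scales means r_s ~ R_200c / 3.
r = np.logspace(-2, 0.3, 50)                  # radii in units of R_200c
rho_nfw = nfw(r, 1.0, 1.0 / 3.0)
rho_ein = einasto(r, 1.0, 1.0 / 3.0, 0.18)    # alpha ~ 0.18 is a typical value
```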
Current and future surveys of large-scale cosmic structure produce a massive and complex data stream that must be studied and characterized to understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and the extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multicore node clusters, and Blue Gene systems. HACC's design allows for ease of portability and, at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high-resolution cosmological simulations performed so far, including benchmarks evolving more than 3.6 trillion particles.
We describe the Outer Rim cosmological simulation, one of the largest high-resolution N-body simulations performed to date, aimed at supporting the science to be carried out with large-scale structure surveys. The simulation covers a volume of (4.225 Gpc)³ and evolves more than one trillion particles. It was executed on Mira, a Blue Gene/Q system at the Argonne Leadership Computing Facility. We discuss some of the computational challenges posed by a many-core supercomputer like Mira and how the simulation code, HACC, has been designed to overcome them. We have carried out a wide range of analyses of the simulation data, and we report on the results as well as the data products that have been generated. The full data set produced by the simulation totals more than 5 PB, making data curation and data handling a major challenge in and of itself. The simulation results have been used to generate synthetic catalogs for large-scale structure surveys, including DESI and eBOSS, as well as for CMB experiments. A detailed catalog for the LSST DESC data challenges has been created as well. We publicly release some of the Outer Rim halo catalogs, downsampled particle information, and lightcone data.
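A back-of-envelope check of the quoted scale. The abstract states only the volume and "more than one trillion particles"; the particle count of 10240³ and the cosmological parameter values below are assumptions made for illustration.

```python
# Back-of-envelope particle mass for a run like Outer Rim:
#   m_p = Omega_m * rho_crit * L^3 / N.
# Assumed (not stated in the abstract): N = 10240^3 particles, box side
# L = 3000 Mpc/h (= 4.225 Gpc for h ~ 0.71), Omega_m ~ 0.265.
RHO_CRIT = 2.775e11        # critical density, (M_sun/h) per (Mpc/h)^3
L = 3000.0                 # box side, Mpc/h (assumed)
N = 10240 ** 3             # particle count (assumed), ~1.07e12 > 1 trillion
omega_m = 0.265            # assumed matter density parameter

m_p = omega_m * RHO_CRIT * L ** 3 / N
print(f'{N:.3e} particles, m_p ~ {m_p:.2e} M_sun/h')   # ~1.8e9 M_sun/h
```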
There is growing concern that I/O systems will be hard pressed to satisfy the requirements of future leadership-class machines. Even current machines are found to be I/O bound for some applications. In this paper, we identify existing performance bottlenecks in data movement for I/O on the IBM Blue Gene/P (BG/P) supercomputer currently deployed at several leadership computing facilities. We improve the I/O performance by exploiting the network topology of BG/P for collective I/O, leveraging data semantics of applications and incorporating asynchronous data staging. We demonstrate the efficacy of our approaches for synthetic benchmark experiments and for application-level benchmarks at scale on leadership computing systems.
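As a point of reference for the collective-I/O path this paper optimizes, here is a minimal mpi4py sketch in which every rank writes its slab of a global array through a single collective call, letting the MPI-IO layer aggregate requests. The file name and data layout are illustrative; the paper's contributions (topology-aware aggregation on BG/P, semantics-aware collectives, asynchronous staging) operate beneath and around this interface rather than in it.

```python
# Minimal collective-write sketch (mpi4py). Each rank owns a contiguous
# slab of a global array and writes it with one collective call; MPI-IO
# can then aggregate and reorder the requests across ranks.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_local = 1 << 20                            # elements per rank (illustrative)
data = np.full(n_local, rank, dtype=np.float64)

fh = MPI.File.Open(comm, 'checkpoint.dat',
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
offset = rank * data.nbytes                  # contiguous per-rank slabs
fh.Write_at_all(offset, data)                # collective write
fh.Close()
```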
Remarkable observational advances have established a compelling, cross-validated model of the Universe. Yet two key pillars of this model, dark matter and dark energy, remain mysterious. Sky surveys that map billions of galaxies to explore the 'Dark Universe' demand a corresponding extreme-scale simulation capability; the HACC (Hybrid/Hardware Accelerated Cosmology Code) framework has been designed to deliver this level of performance now and into the future. With its novel algorithmic structure, HACC allows flexible tuning across diverse architectures, including accelerated and multi-core systems. On the IBM BG/Q, HACC attains unprecedented scalable performance, currently 13.94 PFlops at 69.2% of peak and 90% parallel efficiency on 1,572,864 cores with an equal number of MPI ranks and a concurrency of 6.3 million. This level of performance was achieved at extreme problem sizes, including a benchmark run with more than 3.6 trillion particles, significantly larger than any cosmological simulation yet performed. The expansion of the Universe is encoded in the time dependence of the scale factor a(t), governed by the cosmological model. The relevant quantities are the Hubble parameter H = ȧ/a, Newton's constant G, the critical density ρ_c = 3H²/8πG, Ω_m, the average mass density as a fraction of ρ_c, the local mass density ρ_m(x), and the dimensionless density contrast δ_m(x) = (ρ_m(x) − ⟨ρ_m⟩)/⟨ρ_m⟩.
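To make the definition of δ_m concrete, here is a small sketch that deposits particles onto a grid with nearest-grid-point (NGP) assignment and forms the density contrast. This is only an illustration of the quantity defined above; production codes such as HACC use higher-order deposit schemes (e.g., CIC) and FFT-based Poisson solvers.

```python
# Density contrast delta_m(x) = (rho_m(x) - <rho_m>) / <rho_m>, computed
# from particle positions by nearest-grid-point deposit on a cubic grid.
import numpy as np

def density_contrast(pos, box, ng):
    """pos: (N, 3) comoving positions in [0, box); ng: grid cells per side."""
    idx = np.floor(pos / box * ng).astype(int) % ng
    counts = np.zeros((ng, ng, ng))
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return counts / counts.mean() - 1.0      # delta_m; mean is zero by design

rng = np.random.default_rng(0)
pos = rng.random((100_000, 3)) * 100.0        # toy uniform particle set
delta = density_contrast(pos, box=100.0, ng=32)
print(delta.mean(), delta.std())              # mean ~ 0, scatter from shot noise
```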