We present a submatrix update algorithm for the continuous-time auxiliary field method that allows the simulation of large lattice and impurity problems. The algorithm takes optimal advantage of modern CPU architectures by consistently using matrix rather than vector operations, resulting in a speedup of a factor of ≈ 8 and thereby allowing access to larger systems and lower temperatures. We illustrate the power of our algorithm using the example of a cluster dynamical mean field simulation of the Néel transition in the three-dimensional Hubbard model, where we show momentum-dependent self-energies for clusters with up to 100 sites.
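The gain from favoring matrix over vector operations can be illustrated with a toy rank-k update (a schematic sketch only, not the paper's submatrix algorithm): several delayed rank-1 updates are accumulated and applied as a single matrix-matrix (BLAS-3) product, which modern CPUs execute far more efficiently than a sequence of rank-1 (BLAS-2) calls.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 16

A = rng.standard_normal((n, n))
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

# k rank-1 (vector, BLAS-2) updates applied one at a time.
A_seq = A.copy()
for i in range(k):
    A_seq += np.outer(U[:, i], V[:, i])

# The same k updates accumulated and applied as a single rank-k
# matrix-matrix (BLAS-3) product: the essence of a delayed/submatrix update.
A_blk = A + U @ V.T

assert np.allclose(A_seq, A_blk)
```

Both paths produce the identical matrix; the batched form simply replaces k memory-bound vector operations with one compute-bound matrix product.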
New tools enable new ways of working, and materials science is no exception. In materials discovery, traditional manual, serial, and human-intensive work is being augmented by automated, parallel, and iterative processes driven by Artificial Intelligence (AI), simulation and experimental automation. In this perspective, we describe how these new capabilities enable the acceleration and enrichment of each stage of the discovery cycle. We show, using the example of the development of a novel chemically amplified photoresist, how these technologies’ impacts are amplified when they are used in concert with each other as powerful, heterogeneous workflows.
Quantum chemistry simulations on a quantum computer suffer from the overhead needed to encode the fermionic problem in a system of qubits. By exploiting the block diagonality of a fermionic Hamiltonian, we show that the number of required qubits can be reduced at the cost of an increased number of terms in the Hamiltonian. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi-Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme remains a useful tool for reducing the dimensionality of specific quantum systems on quantum simulators with limited resources.
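The core idea of exploiting block diagonality can be sketched with a toy two-qubit Hamiltonian (hypothetical parameters, not the paper's Fermi-Hubbard or hydrogen Hamiltonians): a symmetry such as total magnetization makes the Hamiltonian block diagonal, and restricting to one symmetry sector halves the matrix dimension, i.e., saves a qubit.

```python
import numpy as np

# Toy two-qubit Hamiltonian conserving total magnetization (hypothetical
# couplings J, h chosen for illustration): H = J (XX + YY)/2 + h (Z1 + Z2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

J, h = 1.0, 0.5
H = 0.5 * J * (np.kron(X, X) + np.kron(Y, Y)) \
    + h * (np.kron(Z, I) + np.kron(I, Z))

# Total-Sz symmetry => H is block diagonal in sectors of fixed Hamming
# weight. Basis order |00>,|01>,|10>,|11>; the weight-1 sector is {1, 2}.
sector = [1, 2]
H_block = H[np.ix_(sector, sector)]   # 2x2 block: one qubit instead of two

full = np.linalg.eigvalsh(H)
block = np.linalg.eigvalsh(H_block)

# The block spectrum is a subset of the full spectrum.
assert all(np.isclose(full, e).any() for e in block)
```

Simulating only the physically relevant sector therefore needs a single qubit here, at the price of rewriting the operators within that block.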
The dynamical cluster approximation (DCA) is a systematic extension beyond the single-site approximation of dynamical mean field theory (DMFT) that includes spatially non-local correlations in quantum many-body simulations of strongly correlated systems. We extend the DCA with a continuous lattice self-energy in order to achieve better convergence with cluster size. The new method, which we call DCA+, cures the cluster-shape dependence problems of the DCA without suffering from the causality violations of previous attempts to interpolate the cluster self-energy. A practical approach based on standard inference techniques is given to deduce the continuous lattice self-energy from an interpolated cluster self-energy. We study the pseudogap region of a hole-doped two-dimensional Hubbard model and find that in the DCA+ algorithm, the self-energy and pseudogap temperature T* converge monotonically with cluster size. Introducing a continuous lattice self-energy eliminates artificial long-range correlations and thus significantly reduces the sign problem of the quantum Monte Carlo cluster solver in the DCA+ algorithm compared to the standard DCA. Simulations with much larger cluster sizes thus become feasible, which, along with the improved convergence in cluster size, raises hope that precise extrapolations to the exact infinite-cluster-size limit can be reached for other physical quantities as well.
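The interpolation step at the heart of such a scheme can be illustrated in one dimension (an assumed toy setup, not the paper's inference procedure): a cluster quantity known only at a few coarse cluster momenta is trigonometrically interpolated onto a dense lattice momentum grid via its real-space Fourier coefficients.

```python
import numpy as np

# Toy 1D illustration: a synthetic "self-energy" Sigma(K) on Nc coarse
# cluster momenta is Fourier-transformed to real space, then re-evaluated
# on a dense grid of Nk lattice momenta.
Nc, Nk = 4, 64
K = 2 * np.pi * np.arange(Nc) / Nc        # cluster momenta
k = 2 * np.pi * np.arange(Nk) / Nk        # dense lattice momenta

sigma_K = -1.0 + 0.3 * np.cos(K)          # synthetic cluster values

# Real-space coefficients on the cluster, then trigonometric interpolation.
R = np.arange(Nc)
sigma_R = np.exp(-1j * np.outer(R, K)) @ sigma_K / Nc
sigma_k = (np.exp(1j * np.outer(k, R)) @ sigma_R).real

# The interpolation reproduces the cluster values at the cluster momenta.
assert np.allclose(sigma_k[::Nk // Nc], sigma_K)
```

Between the cluster momenta, the interpolated curve supplies the smooth, continuous momentum dependence that a coarse cluster grid alone cannot resolve.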
Mantle convection is the fundamental physical process within Earth's interior responsible for the thermal and geological evolution of the planet, including plate tectonics. The mantle is modeled as a viscous, incompressible, non-Newtonian fluid. The wide range of spatial scales, extreme variability and anisotropy in material properties, and severely nonlinear rheology have made global mantle convection modeling with realistic parameters prohibitive. Here we present a new implicit solver that exhibits optimal algorithmic performance and is capable of extreme scaling for hard PDE problems, such as mantle convection. To maximize accuracy and minimize runtime, the solver incorporates a number of advances, including aggressive multi-octree adaptivity, mixed continuous-discontinuous discretization, arbitrarily high-order accuracy, hybrid spectral/geometric/algebraic multigrid, and novel Schur-complement preconditioning. These features present enormous challenges for extreme scalability. We demonstrate that, contrary to conventional wisdom, algorithmically optimal implicit solvers can be designed to scale out to 1.5 million cores for severely nonlinear, ill-conditioned, heterogeneous, and anisotropic PDEs.
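The multigrid idea underlying such solvers can be sketched as a minimal two-grid cycle for a 1D Poisson problem (a schematic only; the paper's hybrid spectral/geometric/algebraic multigrid with Schur-complement preconditioning is far more elaborate): smoothing damps high-frequency error on the fine grid, and a coarse-grid solve corrects the remaining smooth error.

```python
import numpy as np

def poisson(n):
    """1D Poisson matrix -u'' with Dirichlet BCs on n interior points."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, u, f, sweeps=3, omega=2.0 / 3.0):
    """Damped Jacobi smoothing: removes high-frequency error components."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def two_grid(A, Ac, u, f):
    """One two-grid cycle: smooth, restrict residual, coarse solve, correct."""
    u = jacobi(A, u, f)                                   # pre-smoothing
    r = f - A @ u                                         # fine-grid residual
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full weighting
    ec = np.linalg.solve(Ac, rc)                          # coarse correction
    e = np.zeros_like(u)                                  # linear prolongation
    e[1::2] = ec
    e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])
    return jacobi(A, u + e, f)                            # post-smoothing

n, nc = 31, 15                       # coarse grid has (n - 1) / 2 points
A, Ac = poisson(n), poisson(nc)
f = np.ones(n)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(A, Ac, u, f)

exact = np.linalg.solve(A, f)
assert np.linalg.norm(u - exact) < 1e-8 * np.linalg.norm(exact)
```

Each cycle reduces the error by a mesh-independent factor, which is what makes multigrid-based implicit solvers algorithmically optimal and, with care, scalable.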