The computational solution of the governing balance equations for mass, momentum, heat transfer, and magnetic induction for resistive magnetohydrodynamics (MHD) systems can be extremely challenging. These difficulties arise from both the strong nonlinear, nonsymmetric coupling of fluid and electromagnetic phenomena and the significant range of time- and length-scales that the interactions of these physical mechanisms produce. This paper explores the development of a scalable, fully implicit, stabilized, unstructured finite element (FE) capability for 3D incompressible resistive MHD. The discussion considers the development of a stabilized FE formulation in the context of the variational multiscale (VMS) method, and describes the scalable implicit time integration and direct-to-steady-state solution capability. The nonlinear solver strategy employs Newton-Krylov methods, which are preconditioned using fully coupled algebraic multilevel preconditioners. These preconditioners are shown to enable a robust, scalable, and efficient solution approach for the large-scale sparse linear systems generated by the Newton linearization. Verification results demonstrate the expected order of accuracy for the stabilized FE discretization. The approach is tested on a variety of prototype problems that include MHD duct flows, an unstable hydromagnetic Kelvin-Helmholtz shear layer, and a 3D island coalescence problem used to model magnetic reconnection. Initial results exploring the scaling of the solution methods are also presented on up to 128K processors for problems with up to 1.8B unknowns on a Cray XK7.
The magnetohydrodynamics (MHD) equations model a wide range of plasma physics applications and are characterized by a nonlinear system of partial differential equations that strongly couples a charged fluid with the evolution of electromagnetic fields. After discretization and linearization, the resulting system of equations is generally difficult to solve due to the coupling between variables, and the heterogeneous coefficients induced by the linearization process. In this paper, we investigate multigrid preconditioners for this system based on specialized relaxation schemes that properly address the system structure and coupling. Three extensions of Vanka relaxation are proposed and applied to problems with up to 170 million degrees of freedom and fluid and magnetic Reynolds numbers up to 400 for stationary problems and up to 20,000 for time-dependent problems.
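The Vanka relaxation mentioned above can be illustrated on a small saddle-point system. The sketch below is a hypothetical toy setup, far simpler than the paper's MHD discretization: each "patch" couples one constraint (pressure-like) degree of freedom with the velocity-like dofs it constrains, and the local coupled system is solved exactly in a multiplicative (Gauss-Seidel-style) sweep.

```python
import numpy as np

# Vanka-style multiplicative relaxation on a toy saddle-point system
# K z = f with K = [[A, B^T], [B, 0]].
n_u, n_p = 8, 4
A = (np.diag(2.0 * np.ones(n_u))
     + np.diag(-1.0 * np.ones(n_u - 1), 1)
     + np.diag(-1.0 * np.ones(n_u - 1), -1))
B = np.zeros((n_p, n_u))
for j in range(n_p):                     # divergence-like constraint rows
    B[j, 2 * j] = 1.0
    B[j, 2 * j + 1] = -1.0
K = np.block([[A, B.T], [B, np.zeros((n_p, n_p))]])
f = np.ones(n_u + n_p)

# Patch j: velocity dofs {2j, 2j+1} plus the pressure dof n_u + j.
patches = [np.array([2 * j, 2 * j + 1, n_u + j]) for j in range(n_p)]

def vanka_sweep(z):
    for idx in patches:
        r = f - K @ z                                  # current global residual
        z[idx] += np.linalg.solve(K[np.ix_(idx, idx)], r[idx])  # exact local solve
    return z

z = vanka_sweep(np.zeros(n_u + n_p))
```

Because each local solve couples velocity and pressure unknowns together, the relaxation respects the system structure; in the paper this plays the role of a multigrid smoother rather than a standalone solver.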
Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers by a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Using numerical examples from supervised classification, we demonstrate that the new approach achieves similar training performance to traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.

Deep neural networks (DNNs), in particular deep residual networks (ResNets) [36], have been breaking human records in various contests and are now central to technology such as image recognition [38,43,45] and natural language processing [6,15,41]. The abstract goal of machine learning is to model a function f : Y → C such that f(y) ≈ c for input-output pairs (y, c) from a certain data set Y × C.
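The forward-Euler reading of a ResNet described in the abstract above can be made concrete in a few lines. This is a minimal sketch under illustrative assumptions: tanh as the nonlinearity, square layer shapes, and a fixed step size h.

```python
import numpy as np

def resnet_forward(z0, weights, biases, h=0.1):
    # One forward pass read as forward Euler applied to the ODE
    # z'(t) = sigma(W(t) z + b(t)): each residual layer is one Euler step,
    #   z_{l+1} = z_l + h * sigma(W_l z_l + b_l).
    z = np.asarray(z0, dtype=float)
    for W, b in zip(weights, biases):
        z = z + h * np.tanh(W @ z + b)
    return z

d, depth = 4, 16
rng = np.random.default_rng(0)
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(depth)]
biases = [np.zeros(d) for _ in range(depth)]
out = resnet_forward(np.ones(d), weights, biases)
```

Under this view, the layer index plays the role of time, which is what makes parallel-in-time (multigrid-in-layers) methods applicable to training.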
Depending on the nature of inputs and outputs, the task can be regression or classification. When outputs are available for all samples, for only part of the samples, or not at all, this formulation describes supervised, semi-supervised, and unsupervised learning, respectively. The function f can be thought of as an interpolation or approximation function. In deep learning, the function f involves a DNN that aims at transforming the input data using many layers. The layers successively apply affine transformations and element-wise nonlinearities that are parametrized by the network parameters θ. The training problem consists of finding the parameters θ such that (1.1) is satisfied for data elements from a training data set, but also holds for previously unseen data from a validation data set, which has not been used during training. The former objective is commonly modeled as an expected loss, and optimization techniques are used to find the parameters that minimize the loss. Despite rapid methodological developments, compute times for training state-of-the-art DNNs can still be prohibitive, measured in the orde...
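The training/validation split described above can be sketched with a deliberately simple stand-in model: fit parameters θ by gradient descent on a training loss, then evaluate the loss on held-out validation data. Toy linear regression is used here in place of a DNN; all names and sizes are illustrative.

```python
import numpy as np

# Synthetic input-output pairs (y, c) with a known ground-truth parameter.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])
Y = rng.standard_normal((200, 2))                         # inputs
C = Y @ theta_true + 0.01 * rng.standard_normal(200)      # noisy outputs

# Split into a training set (used for optimization) and a validation set
# (held out, used only to check generalization).
Y_tr, C_tr = Y[:150], C[:150]
Y_va, C_va = Y[150:], C[150:]

theta = np.zeros(2)
lr = 0.1
for _ in range(500):
    # Gradient of the mean squared training loss w.r.t. theta.
    grad = Y_tr.T @ (Y_tr @ theta - C_tr) / len(C_tr)
    theta -= lr * grad

val_loss = np.mean((Y_va @ theta - C_va) ** 2)
```

The optimization only ever sees the training split; a small `val_loss` indicates the fitted parameters also hold for previously unseen data, which is the stated goal of the training problem.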
The scalable iterative solution of strongly coupled three-dimensional incompressible resistive magnetohydrodynamics (MHD) equations is very challenging because disparate time scales arise from the electromagnetics, the hydrodynamics, as well as the coupling between these systems. This study considers a mixed finite element discretization of a dual saddle point formulation of the incompressible resistive MHD equations using a stable nodal (Q2/Q1) discretization for the hydrodynamics and a stable edge-node discretization of a reduced form of the Maxwell equations. This paper presents new approximate block factorization preconditioners for this system which reduce the system to approximate Schur complement systems that can be solved using algebraic multilevel methods. These preconditioners include a new augmentation-based approximation for the magnetic induction saddle point system as well as efficient approximations of the Schur complements that arise from the complex coupling between the Navier-Stokes equations and the Maxwell equations.
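The block-factorization idea above reduces a coupled saddle-point system to Schur complement solves. The sketch below shows the structure on a toy 2x2 block system: a block upper-triangular preconditioner whose application is one Schur complement solve followed by one solve with the (1,1) block. Here both solves are exact and the matrices are stand-ins; the paper replaces them with algebraic multilevel approximations of the true MHD operators.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy saddle-point system K [u; p] = rhs with K = [[F, B^T], [B, 0]].
rng = np.random.default_rng(2)
n_u, n_p = 40, 10
F = np.diag(np.arange(2.0, n_u + 2.0))      # SPD stand-in for the (1,1) block
B = rng.standard_normal((n_p, n_u))
K = np.block([[F, B.T], [B, np.zeros((n_p, n_p))]])
S = B @ np.linalg.solve(F, B.T)             # Schur complement B F^{-1} B^T

def apply_Pinv(r):
    # Block upper-triangular preconditioner: first the Schur complement
    # solve for p, then a back-substitution solve with F for u.
    r_u, r_p = r[:n_u], r[n_u:]
    p = np.linalg.solve(-S, r_p)
    u = np.linalg.solve(F, r_u - B.T @ p)
    return np.concatenate([u, p])

M = LinearOperator(K.shape, matvec=apply_Pinv)
rhs = np.ones(n_u + n_p)
x, info = gmres(K, rhs, M=M)
```

With exact block solves the preconditioned operator has a minimal polynomial of degree two, so GMRES converges in a couple of iterations; the practical method keeps this structure but approximates the F and Schur complement solves with multilevel cycles.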
Abstract. The magnetohydrodynamics (MHD) equations are used to model the flow of electrically conducting fluids in such applications as liquid metals and plasmas. This system of non-self-adjoint, nonlinear PDEs couples the Navier-Stokes equations for fluids and Maxwell's equations for electromagnetics. There has been recent interest in fully coupled solvers for the MHD system because they allow for fast steady-state solutions that do not require pseudo-time stepping. When the fully coupled system is discretized, the strong coupling can make the resulting algebraic systems difficult to solve, requiring effective preconditioning of iterative methods for efficiency. In this work, we consider a finite element discretization of an exact penalty formulation for the stationary MHD equations. This formulation has the benefit of implicitly enforcing the divergence-free condition on the magnetic field without requiring a Lagrange multiplier. We consider extending block preconditioning techniques developed for the Navier-Stokes equations to the full MHD system. We analyze operators arising in block decompositions from a continuous perspective and apply arguments based on the existence of approximate commutators to develop new preconditioners that account for the physical coupling. This results in a family of parameterized block preconditioners for both Picard and Newton linearizations. We develop an automated method for choosing the relevant parameters and demonstrate the robustness of these preconditioners for a range of the physical non-dimensional parameters and with respect to mesh refinement.

Key words. magnetohydrodynamics, iterative methods, preconditioners

AMS subject classifications. 76W05, 65F08, 65M22

1. Introduction. The magnetohydrodynamics (MHD) model describes the flow of electrically conducting fluids in the presence of magnetic fields.
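The approximate-commutator argument mentioned above can be shown in miniature. In least-squares-commutator style preconditioning for Navier-Stokes, the Schur complement S = B F^{-1} B^T is never formed; its inverse is approximated as S^{-1} ≈ (B B^T)^{-1} (B F B^T) (B B^T)^{-1}. The toy check below uses F proportional to the identity, the special case where the commutator vanishes and the approximation is exact; for a genuine convection-diffusion block F it is only approximate, and the paper's contribution is extending such arguments to the coupled MHD blocks.

```python
import numpy as np

# Toy matrices (not the paper's MHD operators).
rng = np.random.default_rng(3)
n_u, n_p = 30, 8
B = rng.standard_normal((n_p, n_u))
F = 2.0 * np.eye(n_u)           # stand-in where the commutator approximation is exact

# Exact Schur complement inverse: (B F^{-1} B^T)^{-1}.
S_inv_exact = np.linalg.inv(B @ np.linalg.solve(F, B.T))

# Commutator-based approximation: (B B^T)^{-1} (B F B^T) (B B^T)^{-1}.
BBt_inv = np.linalg.inv(B @ B.T)
S_inv_approx = BBt_inv @ (B @ F @ B.T) @ BBt_inv
```

The appeal of the approximation is that applying it needs only solves with B B^T (a Poisson-like operator) and a product with F, both of which are multigrid-friendly, instead of an inner solve with F.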
A principal application of MHD is the modeling of plasma physics, ranging from plasma confinement for thermonuclear fusion to astrophysical plasma dynamics [13]. MHD is also used to model the flow of liquid metals, for instance in magnetic pumps, liquid metal blankets in fusion reactor concepts, and aluminum electrolysis [19]. The model consists of a non-self-adjoint, nonlinear system of partial differential equations (PDEs) that couples the Navier-Stokes equations for fluid flow to a reduced set of Maxwell's equations for electromagnetics. Because multiple physical processes are represented in the model, the PDEs can span a range of length- and time-scales, making the equations difficult to solve and requiring a robust, accurate means of approximating the solution. Decoupled solution methods, which solve the fluid and magnetic systems separately and possibly couple the systems by an outer iteration, have commonly been employed as solvers for the transient and steady MHD systems. In the context of transient systems these methods are commonly used in operator-splitting techniques; for steady-state solves, a fixed-point iteration serves to couple the system (see e.g. [2], and th...
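The decoupled outer iteration described above has a simple block Gauss-Seidel structure, sketched here on a hypothetical weakly coupled linear system: solve the "fluid" block with the magnetic variables frozen, then the "magnetic" block with the fluid variables frozen, and repeat. All matrices are illustrative stand-ins; fully coupled solvers avoid this outer loop entirely.

```python
import numpy as np

# Toy coupled linear system [[A, C], [D, M]] [u; b] = [f; g].
rng = np.random.default_rng(4)
n = 10
A = 2.0 * np.eye(n)                    # fluid block
M = 2.0 * np.eye(n)                    # magnetic block
C = 0.1 * rng.standard_normal((n, n))  # weak coupling blocks
D = 0.1 * rng.standard_normal((n, n))
f = np.ones(n)
g = np.ones(n)

u = np.zeros(n)
b = np.zeros(n)
for _ in range(50):                    # outer coupling iteration
    u = np.linalg.solve(A, f - C @ b)  # fluid solve, magnetic field frozen
    b = np.linalg.solve(M, g - D @ u)  # magnetic solve, velocity frozen

res = np.linalg.norm(np.concatenate([A @ u + C @ b - f,
                                     D @ u + M @ b - g]))
```

The iteration contracts here only because the coupling blocks are small relative to the diagonal blocks; when the physical coupling is strong, this outer loop converges slowly or not at all, which motivates the fully coupled solvers discussed in the papers above.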