“…DNNs possess attractive properties: potentially exponential convergence rates, the ability to break the curse of dimensionality, and the capacity to handle data sampled from function spaces with limited regularity, such as shock and contact discontinuities [1,2,3,4,5,6,7,8,9]. In practice, however, difficulties in training DNNs often prevent the realization of convergent schemes for forward problems [10,11,12,13]. For inverse problems, by contrast, a number of methods have emerged that train neural networks to simultaneously match target data and minimize a PDE residual [14,15,16]; these methods have found application across a wide range of problems in applied mathematics [17,18,19,20,21].…”
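The combined objective described above (data misfit plus PDE residual) can be illustrated with a minimal, hypothetical sketch. Here the "network" is replaced by a simple quadratic model, the PDE is u'(x) = 2x on [0,1] with the data constraint u(0) = 0, and the optimizer is plain gradient descent with numerical gradients; all names and parameter choices are illustrative assumptions, not the method of any cited reference.

```python
import numpy as np

# Hypothetical stand-in for a neural network: u(x; theta) = theta0 + theta1*x + theta2*x^2.
def u(theta, x):
    return theta[0] + theta[1] * x + theta[2] * x**2

def u_x(theta, x):
    # Derivative of the model with respect to x (analytic here; autodiff in practice).
    return theta[1] + 2.0 * theta[2] * x

def loss(theta, xc):
    # Data term: match the target observation u(0) = 0.
    data = u(theta, 0.0) ** 2
    # Physics term: mean squared PDE residual u'(x) - 2x at collocation points.
    resid = np.mean((u_x(theta, xc) - 2.0 * xc) ** 2)
    return data + resid

def num_grad(f, theta, eps=1e-6):
    # Central-difference gradient, standing in for backpropagation.
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f(tp) - f(tm)) / (2.0 * eps)
    return g

xc = np.linspace(0.0, 1.0, 32)          # collocation points for the residual
theta = np.array([0.5, 0.5, 0.0])       # arbitrary initial parameters
for _ in range(2000):
    theta -= 0.1 * num_grad(lambda t: loss(t, xc), theta)

# The minimizer of the combined loss recovers u(x) = x^2, i.e. theta -> [0, 0, 1].
```

The same structure (a weighted sum of a data-mismatch term and a collocation-based residual term) underlies the residual-minimization methods referenced in the passage; real implementations differ in the network architecture, the differentiation machinery, and the optimizer.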