In solid mechanics, data-driven approaches are widely considered the new paradigm for overcoming the classic shortcomings of constitutive models, such as limiting hypotheses, complexity, and accuracy. However, the implementation of machine-learned approaches in material modeling has been modest due to the high dimensionality of the data space, the significant amount of missing data, and limited convergence. This work proposes a framework that borrows concepts from polymer science, statistical physics, and continuum mechanics to provide reduced-order, super-constrained machine-learning techniques that partly overcome the existing difficulties. Using a sequential order reduction, we simplify the 3D stress–strain tensor mapping problem into a limited number of super-constrained 1D mapping problems. Next, we introduce an assembly of multiple replicated neural network learning agents (L-agents) to systematically classify those mapping problems into a few categories, each described by a distinct agent type. By capturing all loading modes through a simplified set of dispersed experimental data, the proposed hybrid assembly of L-agents provides a new generation of machine-learned approaches that outperform most constitutive laws in training speed and accuracy, even in complicated loading scenarios. Moreover, the physics-based nature of the proposed model avoids the low interpretability of conventional machine-learned models.
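The abstract does not specify how the 3D-to-1D order reduction is realized. One common way such a reduction is implemented in the literature is a microsphere-style discretization: the deformation is projected onto a fixed set of unit directions, each scalar stretch is mapped through a small 1D network, and the dyadic contributions are re-assembled into a stress tensor. The sketch below illustrates only that generic idea; the direction set, the tiny fixed-weight "agent", and all names are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def fibonacci_sphere(n=21):
    # Quasi-uniform unit directions on the sphere (assumed discretization,
    # standing in for whatever integration scheme the framework uses).
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def one_d_agent(stretch, params):
    # Stand-in 1D "learning agent": a tiny fixed-weight network mapping a
    # scalar stretch to a scalar stress contribution (hypothetical weights).
    w1, b1, w2 = params
    h = np.tanh(w1 * (stretch - 1.0) + b1)
    return h @ w2

def assemble_stress(F, directions, params):
    # Reduce the 3D problem to 1D stretches lambda_d = |F d| along each
    # direction, query the 1D agent, and average the dyadic contributions.
    C = F.T @ F  # right Cauchy-Green tensor
    sigma = np.zeros((3, 3))
    for d in directions:
        lam = np.sqrt(d @ C @ d)
        sigma += one_d_agent(lam, params) * np.outer(d, d)
    return sigma / len(directions)

dirs = fibonacci_sphere()
params = (np.ones(4), np.zeros(4), np.full(4, 0.25))
F = np.diag([1.2, 1.2**-0.5, 1.2**-0.5])  # incompressible uniaxial stretch
stress = assemble_stress(F, dirs, params)
```

The appeal of this structure, as the abstract emphasizes, is that each 1D agent is heavily constrained by kinematics and physics, so far less data is needed than for a network that maps full tensors directly.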
Proper approximation of the inverse Langevin function (ILF) is a well-recognized problem with significant relevance in many fields, ranging from polymer physics to turbulence mechanics. While several estimations of the ILF have been developed recently, the accuracy/complexity trade-off has remained a major challenge: accurate estimations are computationally expensive, and low-cost estimations lack accuracy. Here, a novel approach is developed that provides a family of approximation functions for the ILF with different degrees of accuracy. A simple procedure is presented that takes current approximation functions with asymptotic behavior and enhances them by adding a power series of their induced error; the total error is thus controlled by the number of terms in the power series. We further propose different approaches to reduce the number of terms in the power series while increasing the accuracy. The proposed approach is applied to four different classes of ILF approximations and shows significant improvement. The accuracy/complexity trade-off for the family of ILF approximations generated by the proposed approach is compared against that of other approaches to demonstrate the advantage of the proposed model. The error of this method can reach as low as 0.02%.
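The enhancement procedure can be illustrated numerically. The sketch below is only an illustration of the general idea, not the paper's exact construction: it takes Cohen's Padé approximant of the ILF (a well-known base approximation with the correct asymptote at |y| → 1), samples its induced error against a numerically inverted Langevin function, fits a low-order power series to that error, and adds the fit back as a correction.

```python
import numpy as np

def langevin(x):
    # Langevin function L(x) = coth(x) - 1/x
    return 1.0 / np.tanh(x) - 1.0 / x

def cohen(y):
    # Cohen's Pade approximant of the inverse Langevin function,
    # which captures the singular behavior as |y| -> 1
    return y * (3.0 - y**2) / (1.0 - y**2)

def inv_langevin_exact(y):
    # Reference inverse via Newton iteration, seeded with Cohen's estimate
    x = cohen(y)
    for _ in range(50):
        f = langevin(x) - y
        fp = 1.0 / x**2 - 1.0 / np.sinh(x)**2  # L'(x)
        x -= f / fp
    return x

# Sample the induced error of the base approximation on (0, 1)
ys = np.linspace(0.01, 0.97, 200)
exact = np.array([inv_langevin_exact(y) for y in ys])
base_err = exact - cohen(ys)

# Fit a low-order power series to the induced error (here a generic
# degree-6 least-squares polynomial; the paper's series may differ)
coeffs = np.polyfit(ys, base_err, 6)

def improved(y):
    # Enhanced approximation: base function plus error power series
    return cohen(y) + np.polyval(coeffs, y)
```

Adding more terms to the fitted series tightens the correction further, which is the accuracy/complexity dial the abstract describes: each extra term costs a few floating-point operations but shrinks the residual error.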