The recent successes of the Materials Genome Initiative have opened up new opportunities for data-centric informatics approaches in several subfields of materials research, including polymer science and engineering. Polymers, being inexpensive and possessing a broad range of tunable properties, are widespread in many technological applications. The vast chemical and morphological complexity of polymers, though, gives rise to challenges in the rational discovery of new materials for specific applications. The nascent field of polymer informatics seeks to provide tools and pathways for accelerated property prediction (and materials design) via surrogate machine learning models built on reliable past data. We have carefully accumulated a data set of organic polymers whose properties were obtained either computationally (bandgap, dielectric constant, refractive index, and atomization energy) or experimentally (glass transition temperature, solubility parameter, and density). A fingerprinting scheme that captures atomistic to morphological structural features was developed to numerically represent the polymers. Machine learning models were then trained by mapping the fingerprints (or features) to properties. Once developed, these models can rapidly predict properties of new polymers (within the same chemical class as the parent data set) and can also provide uncertainties underlying the predictions. Since different properties depend on features at different length scales, the prediction models were built on an optimized set of features for each individual property. Furthermore, these models are incorporated in a user-friendly online platform named Polymer Genome. Systematic and progressive expansion of both the chemical and property spaces is planned to extend the applicability of Polymer Genome to a wide range of technological domains.
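For concreteness, the sketch below illustrates the kind of surrogate model described above: a regressor that maps fixed-length polymer fingerprints to a property and returns both a prediction and its uncertainty. The choice of Gaussian process regression, the scikit-learn API, and all fingerprint and property values are illustrative assumptions, not the models or data behind Polymer Genome.

```python
# Minimal sketch of a fingerprint-to-property surrogate model with
# uncertainty estimates, assuming Gaussian process regression (one
# plausible choice). All data below are random placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Each row is a hypothetical polymer fingerprint: atomistic-to-morphological
# features concatenated into a fixed-length numeric vector.
rng = np.random.default_rng(0)
X_train = rng.random((50, 8))                                # 50 polymers, 8 features
y_train = 300 + 150 * X_train[:, 0] + 10 * rng.standard_normal(50)  # fake Tg (K)

# RBF kernel plus a noise term; hyperparameters are optimized during fit.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
model = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
model.fit(X_train, y_train)

# Predict the property of a new polymer along with its uncertainty.
x_new = rng.random((1, 8))
y_pred, y_std = model.predict(x_new, return_std=True)
print(f"Predicted Tg: {y_pred[0]:.1f} +/- {y_std[0]:.1f} K")
```

A Gaussian process is a natural fit here because the posterior standard deviation directly supplies the per-prediction uncertainty the abstract mentions; other uncertainty-aware learners (e.g., ensembles) would serve the same role.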
Although it is widely accepted that molecular mechanisms play an important role in the initial establishment of retinotopic maps, it has also long been argued that activity-dependent factors act in concert with molecular mechanisms to refine topographic maps. Evidence of a role for retinal activity in retinotopic map refinement in mammals is limited, and nothing is known about the effect of spontaneous retinal activity on the development of receptive fields in the superior colliculus. Using anatomical and physiological methods with two genetically manipulated mouse models and pharmacological interventions in wild-type mice, we show that spontaneous retinal waves instruct retinotopic map refinement in the superior colliculus of the mouse. Activity-dependent mechanisms may play a preferential role in the mapping of the nasal-temporal axis of the retina onto the colliculus, because refinement is particularly impaired along this axis in mutants without retinal waves. Interfering with both axon guidance cues and activity-dependent cues in the same animal has a dramatic cumulative effect. These experiments demonstrate how axon guidance cues and activity-dependent factors combine to instruct retinotopic map development.
Simulations based on solving the Kohn-Sham (KS) equation of density functional theory (DFT) have become a vital component of modern materials and chemical sciences research and development portfolios. Despite the versatility of DFT, routine calculations are usually limited to a few hundred atoms due to the computational bottleneck posed by the KS equation. Here we introduce a machine-learning-based scheme to efficiently assimilate the function of the KS equation and bypass it to directly, rapidly, and accurately predict the electronic structure of a material or a molecule, given just its atomic configuration. A new rotationally invariant representation is utilized to map the atomic environment around a grid-point to the electron density and local density of states at that grid-point. This mapping is learned using a neural network trained on previously generated reference DFT results at millions of grid-points. The proposed paradigm allows for high-fidelity emulation of KS DFT, but orders of magnitude faster than the direct solution. Moreover, the machine learning prediction scheme is strictly linear-scaling with system size.
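As a rough illustration of the learning problem described above (not the paper's actual representation or network architecture), the sketch below maps a simple rotationally invariant descriptor of the atomic neighborhood around each grid-point to a stand-in electron density using a small feed-forward network; the descriptor, network shape, and target values are all placeholder assumptions.

```python
# Minimal sketch: learn a grid-point -> electron-density mapping from a
# rotationally invariant fingerprint of the local atomic environment.
# The fingerprint here (Gaussian-smeared atom counts at several length
# scales) is a toy stand-in for the paper's representation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fingerprint(grid_point, atom_positions, widths=(0.5, 1.0, 2.0, 4.0)):
    """Rotationally invariant descriptor: depends only on atom-to-grid-point
    distances, one Gaussian-smeared component per length scale."""
    d = np.linalg.norm(atom_positions - grid_point, axis=1)
    return np.array([np.exp(-(d / w) ** 2).sum() for w in widths])

# Fake training set: one random atomic configuration and a synthetic target
# standing in for reference DFT densities at many grid-points.
rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 10.0, size=(20, 3))
grid = rng.uniform(0.0, 10.0, size=(5000, 3))
X = np.array([fingerprint(g, atoms) for g in grid])
y = X[:, 0] + 0.1 * X[:, 1]          # placeholder for rho(r)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(X, y)

# Once trained, the density at any new grid-point is predicted directly,
# bypassing the KS solve; cost grows linearly with the number of grid-points,
# which is the source of the linear scaling with system size.
rho_new = model.predict(fingerprint(np.array([5.0, 5.0, 5.0]), atoms)[None, :])
```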
Polymer Genome is a web-based machine-learning capability that performs near-instantaneous predictions of a variety of polymer properties. The prediction models are trained on (and interpolate between) an underlying database of polymers and their properties obtained from first-principles computations and experimental measurements. In this contribution, we first provide an overview of some of the critical technical aspects of Polymer Genome, including polymer data curation, representation, learning algorithms, and prediction model usage. Then, we provide a series of pedagogical examples demonstrating how Polymer Genome can be used to predict dozens of polymer properties appropriate for a range of applications. We close with a discussion of the remaining challenges and possible future directions.
The presence of ferroelectric polarization in 2D materials is extremely rare due to the effect of the surface depolarizing field. Here, we use first-principles calculations to show that the functionalized MXene Sc2CO2 exhibits the largest out-of-plane polarization observed in a monolayer. The switching of polarization in this new class of ferroelectric materials occurs through a previously unknown intermediate antiferroelectric structure, thus establishing three states for applications in low-dimensional nonvolatile memory. We show that the armchair domain interface acts as a 1D metallic nanowire separating two insulating domains. In the van der Waals bilayer, interestingly, we observe an ultrathin 2D electron/hole gas (2DEG) on the top/bottom layers, respectively, due to the redistribution of charge carriers. The 2DEG is nondegenerate due to spin-orbit coupling, thus paving the way for spin-orbitronic devices. The coexistence of ferroelectricity, antiferroelectricity, a 2DEG, and spin-orbit splitting in this system suggests that such 2D polar materials hold high potential for device applications in a multitude of fields ranging from nanoelectronics to photovoltaics.
Solubility parameter models are widely used to select suitable solvents/nonsolvents for polymers in a variety of processing and engineering applications. In this study, we focus on two well-established models, namely, the Hildebrand and Hansen solubility parameter models. Both models are built on the notion of "like dissolves like" and identify a liquid as a good solvent for a polymer if the solubility parameters of the liquid and the polymer are close to each other. Here we make a critical and quantitative assessment of the accuracy/utility of these two models by comparing their predictions against actual experimental data. Using a data set of 75 polymers, we find that the Hildebrand model displays a predictive accuracy of 60% for solvents and 76% for nonsolvents. The Hansen model leads to similar performance; on the basis of a data set of 25 polymers for which Hansen parameters are available, we find that it has an accuracy of 67% for solvents and 76% for nonsolvents. The availability of Hildebrand parameters for a large polymer data set makes it a widely applicable capability, as the Hildebrand parameter for a new polymer may be determined using this data set and machine learning methods, as we have done before; the predicted Hildebrand parameter for a new polymer may then be used to determine suitable solvents and nonsolvents. Such predictions are difficult to make with the Hansen model, as the data set of Hansen parameters for polymers is rather small. Nevertheless, the Hildebrand approach must be used with caution. Our analysis shows that while the Hildebrand model has a predictive accuracy of 70-75% for nonpolar polymers, it performs rather poorly for polar polymers (with an accuracy of 57%). Going forward, the determination of solvents and nonsolvents for polymers may benefit from classification models built directly on available experimental data sets rather than the solubility parameter approach, which is limited in versatility and accuracy.
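The "like dissolves like" criteria assessed above can be stated compactly in code. The sketch below is a minimal illustration: the Hildebrand rule thresholds the difference of scalar solubility parameters, while the Hansen rule uses the standard Hansen distance Ra and the polymer's interaction radius R0. The Hildebrand cutoff and the example parameter values are approximate, illustrative numbers, not values taken from the paper's data set.

```python
# Minimal sketch of the two solvent-classification rules discussed above.
import math

def hildebrand_is_solvent(delta_liquid, delta_polymer, cutoff=3.6):
    """Solvent if |delta_l - delta_p| < cutoff (MPa^0.5). The cutoff is a
    commonly used rule of thumb, not a universal constant."""
    return abs(delta_liquid - delta_polymer) < cutoff

def hansen_is_solvent(liq, poly, R0):
    """liq/poly are (dD, dP, dH) triples in MPa^0.5. Solvent if the
    relative energy difference RED = Ra / R0 is below 1."""
    Ra = math.sqrt(4 * (liq[0] - poly[0]) ** 2
                   + (liq[1] - poly[1]) ** 2
                   + (liq[2] - poly[2]) ** 2)
    return Ra / R0 < 1.0

# Example: toluene vs. polystyrene, using approximate literature-style values.
print(hildebrand_is_solvent(18.2, 18.5))                              # True
print(hansen_is_solvent((18.0, 1.4, 2.0), (18.5, 4.5, 2.9), R0=5.3))  # True
```

Note the factor of 4 on the dispersion term in Ra, which is part of the standard Hansen distance; a machine-learned Hildebrand parameter for a new polymer could be fed straight into the first function, which is the workflow the abstract describes.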