2019
DOI: 10.1038/s41524-019-0209-9

Machine learning enables polymer cloud-point engineering via inverse design

Abstract: Inverse design is an outstanding challenge in disordered systems with multiple length scales such as polymers, particularly when designing polymers with desired phase behavior. We demonstrate high-accuracy tuning of poly(2-oxazoline) cloud point via machine learning. With a design space of four repeating units and a range of molecular masses, we achieve an accuracy of 4°C root mean squared error (RMSE) in a temperature range of 24-90°C, employing gradient boosting with decision trees. The RMSE is >3x better th…
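As a rough illustration of the modeling approach the abstract names, below is a minimal sketch of gradient-boosted regression over a four-repeat-unit design space. The features (repeat-unit fractions plus molar mass), the synthetic data, and the hyperparameters are assumptions for illustration, not the authors' dataset or pipeline.

```python
# Minimal sketch of gradient-boosted cloud-point regression (scikit-learn).
# The features mirror the design space the abstract describes (four
# repeating units + molecular mass); the data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 200

# Hypothetical design space: fractions of four repeating units (sum to 1)
# plus a molar mass column in kg/mol.
fractions = rng.dirichlet(np.ones(4), size=n)
molar_mass = rng.uniform(5.0, 50.0, size=(n, 1))
X = np.hstack([fractions, molar_mass])

# Placeholder cloud points spanning the 24-90 degC range reported above.
y = 24.0 + 66.0 * fractions[:, 0] + rng.normal(0.0, 2.0, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"cloud-point RMSE: {rmse:.1f} degC")
```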

Cited by 78 publications (75 citation statements)
References 35 publications
“…Inspired by recent studies on inverse design of polymers and inorganic solids [23][24][25], as well as on using machine learning to understand PSCs' properties [26][27][28], we present a machine-learning framework to investigate LD organic-inorganic perovskites serving as a capping layer for MAPbI3. We elucidate which properties of capping layers are responsible for enhancing stability, and the underlying mechanisms whereby they work.…”
mentioning
confidence: 99%
“…Often, process optimization is performed using black-box optimization methods (e.g., Design of Experiments 1, Bayesian Optimization 2,3, Particle Swarm Optimization 4), in which selected variables are modified systematically within a range and the system's response surface is mapped to reach an optimum. These methods have shown potential for inverse design of materials and systems in a cost-effective manner, and are usually postulated as ideal methods for future self-driving laboratories [5][6][7][8][9][10][11][12]. However, traditional black-box optimization approaches have limitations: the maximum achievable performance improvement is limited by the designer's choice of variables and their ranges, the parameter space is artificially constrained, and insights into the root causes of underperformance are severely limited, often requiring secondary characterization methods or batches composed of combinatorial variations of the base samples.…”
Section: Introduction
mentioning
confidence: 99%
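The excerpt above groups Design of Experiments, Bayesian Optimization, and Particle Swarm Optimization as black-box optimizers that map a response surface toward an optimum. Below is a minimal sketch of one such loop, assuming a Gaussian-process surrogate with an expected-improvement acquisition; the objective function is a stand-in for any expensive experiment, not any specific process from the cited work.

```python
# Sketch of a black-box Bayesian-optimization loop: fit a Gaussian-process
# surrogate to the points evaluated so far, then pick the next experiment
# by expected improvement. The "experiment" below is a toy objective.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def experiment(x):
    # Placeholder for an expensive measurement (e.g., a process response).
    return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

X = np.array([[0.1], [0.9]])                  # initial designs
y = np.array([experiment(x[0]) for x in X])
candidates = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, experiment(x_next[0]))

print(f"best x = {X[np.argmax(y)][0]:.3f}, best response = {y.max():.4f}")
```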
“…For instance, some studies suggest that Bayesian optimization with an adaptive kernel might discover finer regression features 25. However, most prior work focuses on either optimization, which lacks interpretability and transferability when the target changes 5, or inverse design using regression, which uses a static dataset 20. Algorithm selection using information criteria such as the Akaike information criterion (AIC) 26 and the Bayesian information criterion (BIC) 27 could be used to maximize the time- and resource-efficiency of closed-loop laboratories, e.g., by leveraging co-evolution, physics-fusion, and related strategies [28][29].…”
Section: Discussion
mentioning
confidence: 99%
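A minimal sketch of the AIC/BIC model selection the excerpt above mentions, assuming Gaussian residuals so the log-likelihood term reduces to n·log(RSS/n). The candidate models (polynomial fits of increasing order) and synthetic data are illustrative only.

```python
# Information-criterion model selection under a Gaussian-residual
# assumption: AIC = n*log(RSS/n) + 2k, BIC = n*log(RSS/n) + k*log(n),
# where k is the number of fitted parameters. Lower is better.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 60)
y = 1.5 * x**2 - 0.5 * x + rng.normal(0, 0.1, x.size)  # synthetic data

def aic_bic(y, y_hat, k, n):
    rss = float(np.sum((y - y_hat) ** 2))
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    k = degree + 1                      # number of fitted coefficients
    aic, bic = aic_bic(y, y_hat, k, x.size)
    print(f"degree {degree}: AIC={aic:7.1f}  BIC={bic:7.1f}")
```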
“…Moreover, its performance depends on the initial choice of model hyperparameters and on the definition of the loss, a scalar value that quantifies how close the output parameter is to the target. To extract knowledge from the data, other studies use neural networks to train a regression model and perform inverse design from a fixed dataset 16,[20][21]. While a neural network can learn complex functions even from a full optical spectrum, it has many hyperparameters and requires a large training dataset, which makes it difficult to integrate into a machine-driven experimental loop with limited initial data and expensive evaluations, and is inefficient at the early stage of sampling, when the parameter space is still being explored.…”
Section: Introduction
mentioning
confidence: 99%
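A minimal sketch of the regression-based inverse design the excerpt above contrasts with optimization: train a small neural network as a forward model on a fixed dataset, freeze it, and gradient-descend on the input toward a target output. The toy data, architecture, and target are assumptions for illustration, not the cited studies' models.

```python
# Sketch of regression-based inverse design: learn a forward model
# y = f(x) from a fixed dataset, then freeze it and optimize x so the
# predicted output approaches a target value.
import torch

torch.manual_seed(0)
X = torch.rand(256, 2)                          # toy fixed dataset
y = (X[:, :1] ** 2 + 0.5 * X[:, 1:]).detach()   # stand-in ground-truth map

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                            # forward-model training
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

# Inverse design: with the model frozen, optimize the input x itself.
for p in net.parameters():
    p.requires_grad_(False)
x = torch.full((1, 2), 0.5, requires_grad=True)
target = torch.tensor([[0.8]])
opt_x = torch.optim.Adam([x], lr=5e-2)
for _ in range(200):
    opt_x.zero_grad()
    loss = torch.nn.functional.mse_loss(net(x), target)
    loss.backward()
    opt_x.step()

print("designed x:", x.detach().numpy(), "predicted y:", net(x).item())
```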