Despite its potential to transform society, materials
research suffers from a major drawback: long research timelines. Recently,
machine-learning techniques have emerged as a viable remedy,
achieving accuracy comparable to established computational
methods such as density functional theory (DFT) at a fraction of the
computational cost. One class of machine-learning models,
known as “generative models”, is of particular interest
owing to its ability to approximate high-dimensional probability
distributions, from which novel data such as
molecular structures can be generated by sampling.
This review article aims to provide an in-depth understanding
of the underlying mathematical principles of popular generative models
such as recurrent neural networks, variational autoencoders, and generative
adversarial networks, and to discuss their state-of-the-art applications
in the domains of biomaterials and organic drug-like materials, energy
materials, and structural materials. Here, we discuss a broad range
of applications of these models, spanning from the discovery of drugs
that treat cancer to the search for room-temperature superconductors
and from the discovery and optimization of battery and photovoltaic
materials to the optimization of high-entropy alloys. We conclude
by presenting a brief outlook on the major challenges that lie ahead
for the mainstream adoption of these models in materials research.