Global optimization of aerodynamic shapes requires a large number of expensive CFD simulations because of the high dimensionality of the design space. One way to combat this problem is to reduce the dimension of the design space, for example by constructing low-dimensional parametric functions (such as PARSEC), and then optimizing over those parameters instead. Such approaches first require a parametric function that compactly describes useful variation in airfoil shape, a non-trivial and error-prone task. In contrast, we propose to use a deep generative model of aerodynamic designs (specifically airfoils) that reduces the dimensionality of the optimization problem by learning from shape variations in the UIUC airfoil database. We show that our data-driven model both (1) learns realistic and compact airfoil shape representations and (2) empirically accelerates optimization convergence by over an order of magnitude.
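The core idea above, optimizing over a learned low-dimensional latent space rather than raw shape coordinates, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `decode` stands in for a trained generative model and `objective` for an expensive CFD evaluation, both replaced here with hypothetical toy functions so the loop is runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    """Stand-in for a learned decoder mapping a low-dimensional latent
    vector to airfoil surface coordinates (toy ellipse for illustration)."""
    t = np.linspace(0.0, 2.0 * np.pi, 50)
    return np.stack([np.cos(t) * (1.0 + z[0]),
                     0.1 * np.sin(t) * (1.0 + z[1])], axis=1)

def objective(shape):
    """Placeholder for a CFD-evaluated quantity (e.g., negative lift-to-drag)."""
    return np.sum((shape - shape.mean(axis=0)) ** 2)

def optimize_latent(dim=2, n_iter=200, step=0.1):
    """Hill climbing over the latent space instead of raw coordinates;
    the latent dimension (2 here) is far below the shape's raw dimension."""
    z_best = np.zeros(dim)
    f_best = objective(decode(z_best))
    for _ in range(n_iter):
        z_try = z_best + step * rng.standard_normal(dim)
        f_try = objective(decode(z_try))
        if f_try < f_best:
            z_best, f_best = z_try, f_try
    return z_best, f_best
```

Any black-box optimizer can replace the hill climber; the point is that each query touches only a handful of latent variables rather than dozens of surface coordinates.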
This paper shows how to measure the intrinsic complexity and dimensionality of a design space. It assumes that high-dimensional design parameters actually lie in a much lower-dimensional space that represents semantic attributes, a design manifold. Past work has shown how to embed designs using techniques like autoencoders; in contrast, the method proposed in this paper first captures the inherent properties of a design space and then chooses appropriate embeddings based on the captured properties. We demonstrate this with both synthetic shapes of controllable complexity (using a generalization of the ellipse called the superformula) and real-world designs (glassware and airfoils). We evaluate multiple embeddings by measuring shape reconstruction error, pairwise distance preservation, and captured semantic attributes. By generating fundamental knowledge about the inherent complexity of a design space and how designs differ from one another, our approach allows us to improve design optimization, consumer preference learning, geometric modeling, and other design applications that rely on navigating complex design spaces. Ultimately, this deepens our understanding of design complexity in general.
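The superformula mentioned above is the standard Gielis generalization of the ellipse, which generates shapes of controllable complexity from a handful of parameters. The formula below is the standard one; how the paper samples its parameters is not reproduced here.

```python
import numpy as np

def superformula(theta, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula radius r(theta): m controls rotational symmetry,
    n1-n3 control curvature/pinching, a and b scale the two axes."""
    term = (np.abs(np.cos(m * theta / 4.0) / a) ** n2
            + np.abs(np.sin(m * theta / 4.0) / b) ** n3)
    return term ** (-1.0 / n1)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
# m=0 with a=b=n1=n2=n3=1 degenerates to the unit circle;
# varying m and n1-n3 produces polygons, stars, and more complex shapes.
r = superformula(theta, m=0, n1=1, n2=1, n3=1)
```

Sweeping `m` and `n1`-`n3` yields a family of shapes whose intrinsic dimensionality is known by construction, which is what makes the superformula useful as a synthetic benchmark.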
Deep generative models have proven to be a useful tool for automatic design synthesis and design space exploration. When applied in engineering design, existing generative models face three challenges: (1) generated designs lack diversity and do not cover all areas of the design space, (2) it is difficult to explicitly improve the overall performance or quality of generated designs, and (3) existing models generally do not generate novel designs outside the domain of the training data. In this article, we simultaneously address these challenges by proposing a new determinantal point process-based loss function for probabilistic modeling of diversity and quality. With this new loss function, we develop a variant of the generative adversarial network, named “performance augmented diverse generative adversarial network” (PaDGAN), which can generate novel high-quality designs with good coverage of the design space. By using three synthetic examples and one real-world airfoil design example, we demonstrate that PaDGAN can generate diverse and high-quality designs. In comparison to a vanilla generative adversarial network, on average, it generates samples with a 28% higher mean quality score with larger diversity and without the mode collapse issue. Unlike typical generative models that usually generate new designs by interpolating within the boundary of training data, we show that PaDGAN expands the design space boundary outside the training data towards high-quality regions. The proposed method is broadly applicable to many tasks including design space exploration, design optimization, and creative solution recommendation.
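The determinantal point process (DPP) idea can be illustrated with a quality-weighted kernel: the determinant of a similarity matrix is large when a batch of samples is spread out, and weighting by per-sample quality rewards batches that are simultaneously diverse and high quality. This sketch is a generic DPP-style loss under those standard definitions, not the exact PaDGAN formulation, and the RBF similarity and jitter term are illustrative choices.

```python
import numpy as np

def dpp_style_loss(samples, qualities, sigma=1.0):
    """Negative log-determinant of a quality-weighted similarity kernel.

    L[i, j] = q_i * S[i, j] * q_j, with S an RBF similarity matrix.
    det(L) grows with both batch diversity (near-orthogonal rows of S)
    and per-sample quality, so minimizing -log det(L) rewards both.
    """
    d2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2.0 * sigma ** 2))             # pairwise similarity
    L = qualities[:, None] * S * qualities[None, :]  # quality weighting
    # Small jitter keeps the log-determinant finite for degenerate batches.
    sign, logdet = np.linalg.slogdet(L + 1e-6 * np.eye(len(samples)))
    return -logdet / len(samples)
```

In a GAN, a term like this on each generated batch penalizes mode collapse (collapsed batches make `S` nearly rank-one, driving the determinant toward zero) while pushing the generator toward high-quality samples.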
Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning-based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small fail to cover all feasible domains, while bounds that are too large waste the query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods, the Neighborhood-Voronoi algorithm and the straddle heuristic, that operate over fixed input variable bounds.
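For context, the straddle heuristic named as a baseline above is a standard acquisition rule for boundary learning: query where the model is uncertain and the predicted value is near the feasibility threshold. This sketch shows that rule on a toy posterior (the posterior values here are hypothetical placeholders, not output of a fitted model); AES itself replaces the fixed candidate grid with progressive expansion over an unbounded input space.

```python
import numpy as np

def straddle_score(mu, sigma, beta=1.96):
    """Straddle acquisition: large where posterior std is high (uncertain)
    AND the posterior mean is near the decision threshold (|mu| small)."""
    return beta * sigma - np.abs(mu)

# Fixed candidate grid: the bounded-input setting that AES is designed to relax.
candidates = np.linspace(-2.0, 2.0, 5)
mu = candidates ** 2 - 1.0        # toy posterior mean (feasible where f < 0)
sigma = np.full_like(mu, 0.5)     # toy posterior standard deviation
next_x = candidates[np.argmax(straddle_score(mu, sigma))]
```

Here the highest score falls on a point where `mu` crosses zero, i.e., on the estimated feasibility boundary, which is exactly where a new label is most informative.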
Real-world designs usually consist of parts with interpart dependencies, i.e., the geometry of one part depends on one or more other parts. Such dependencies can be represented in a part dependency graph. This paper presents a method for synthesizing these types of hierarchical designs using generative models learned from examples. It decomposes the problem of synthesizing the whole design into synthesizing each part separately while keeping the interpart dependencies satisfied. Specifically, this method constructs multiple generative models whose interaction is based on the part dependency graph. We then use the trained generative models to synthesize or explore each part design separately via a low-dimensional latent representation, conditioned on the corresponding parent part(s). We verify our model on multiple design examples with different interpart dependencies. We evaluate our model by analyzing the constraint satisfaction performance, the synthesis quality, the latent space quality, and the effects of part dependency depth and branching factor. This paper’s techniques for capturing dependencies among parts lay the foundation for learned generative models to extend to more realistic engineering systems where such relationships are widespread.
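The synthesis procedure above amounts to traversing the part dependency graph in topological order, generating each part conditioned on its already-generated parents. This is a structural sketch only: the example graph is hypothetical, and `synthesize_part` stands in for a trained conditional generative model.

```python
import numpy as np
from graphlib import TopologicalSorter

# Hypothetical part dependency graph: each part maps to its parent parts,
# whose geometry its own design is conditioned on.
dependencies = {"body": [], "wing": ["body"], "tail": ["body", "wing"]}

rng = np.random.default_rng(0)

def synthesize_part(parent_designs, latent_dim=2):
    """Stand-in for a trained conditional generative model: maps a latent
    code plus the parent designs to this part's design vector."""
    z = rng.standard_normal(latent_dim)
    context = sum(parent_designs.values(), np.zeros(latent_dim))
    return z + 0.5 * context  # toy conditioning on the parents

designs = {}
# Parents are synthesized before children, so each conditional model
# always receives fully generated parent designs.
for part in TopologicalSorter(dependencies).static_order():
    parents = {p: designs[p] for p in dependencies[part]}
    designs[part] = synthesize_part(parents)
```

Exploring one part's latent code while holding its ancestors fixed gives the per-part design exploration described in the abstract, with the dependency graph guaranteeing the conditioning inputs exist.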