2018 AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference
DOI: 10.2514/6.2018-1648
Deep Autoencoder for Off-Line Design-Space Dimensionality Reduction in Shape Optimization

Cited by 22 publications (13 citation statements) · References 24 publications
“…The effectiveness and efficiency of the ADR method are affected, in general, by the nonlinearities involved in the process. In order to quantify their effects, ongoing research focuses on nonlinear extensions of the current methodology [28][29][30].…”
Section: Discussion (mentioning)
confidence: 99%
“…Thus, researchers also apply non-linear methods to reduce the dimensionality of design spaces. This non-linearity can be achieved by (1) applying linear reduction techniques locally to construct a non-linear global manifold [14,15,16,17,12]; (2) using kernel methods with linear reduction techniques (i.e., using linear methods in a Reproducing Kernel Hilbert Space, which then induces non-linearity in the original design space) [1,12]; (3) latent variable models such as the Gaussian process latent variable model (GPLVM) and generative topographic mapping (GTM) [18]; and (4) neural-network-based approaches such as self-organizing maps [19] and autoencoders [20,1,12,21].…”
Section: Design Space Dimensionality Reduction (mentioning)
confidence: 99%
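Route (4) in the excerpt above is the approach taken by the paper under discussion. The sketch below illustrates the idea in Python; the dimensions (n_design_vars, n_latent), network sizes, and random training data are illustrative assumptions, not values from the cited works:

```python
# Minimal autoencoder for design-space dimensionality reduction.
# All sizes and the synthetic data here are illustrative assumptions.
import torch
import torch.nn as nn

n_design_vars, n_latent = 128, 8  # assumed dimensions

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_design_vars, 32), nn.Tanh(),
            nn.Linear(32, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.Tanh(),
            nn.Linear(32, n_design_vars),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

designs = torch.randn(512, n_design_vars)  # placeholder design samples
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(designs), designs)  # reconstruction error
    loss.backward()
    optimizer.step()
```

The encoder compresses each design vector to a few latent coordinates; minimizing reconstruction error encourages the latent space to retain the dominant directions of geometric variability.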
“…Usually, dimensionality reduction techniques allow inverse transformations from the latent space back to the design space and can thus synthesize new designs from latent variables [11,1,12,21]. For example, under the PCA model, the latent variables define a linear combination of principal components to synthesize a new design [11]; for local-manifold-based approaches, a new design can be synthesized via interpolation between neighboring points on the local manifold [17]; and under the autoencoder model, the trained decoder maps any given point in the latent space to a new design [20,21]. Researchers have also employed generative models such as kernel density estimation [23], Boltzmann machines [24], variational autoencoders (VAEs) [25], and generative adversarial nets (GANs) [26,27] to learn the distribution of samples in the design space and synthesize new designs by drawing samples from the learned distribution.…”
Section: Data-Driven Design Synthesis (mentioning)
confidence: 99%
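As a concrete illustration of the PCA-based synthesis route described in this excerpt, the scikit-learn sketch below maps a latent point back to a full design vector; the dimensions and data are illustrative assumptions:

```python
# PCA-based design synthesis: latent variables weight the principal
# components, and inverse_transform maps a latent point to a design.
# Dimensions and data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
designs = rng.normal(size=(200, 128))   # placeholder design samples
pca = PCA(n_components=8).fit(designs)

z = rng.normal(size=(1, 8))             # a new point in the latent space
new_design = pca.inverse_transform(z)   # linear combination of components
print(new_design.shape)                 # (1, 128)
```

The autoencoder route is analogous: the trained decoder plays the role of inverse_transform, mapping any latent point to a full design.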
“…In a design context, these techniques rest on the assumption that the geometric variability in the design space is not the same in all directions: there are only a few inherent feature directions that account for most of the improvement in the design, and these can form the basis of a new lower-dimensional latent subspace [10]. In the literature, such feature extraction has been achieved with (1) linear techniques applied locally to construct a nonlinear global manifold, such as Principal Component Analysis (PCA) [9,11], (2) kernel functions combined with linear reduction techniques, such as Kernel PCA [5,12], or (3) neural-network-based approaches such as auto-encoders [13].…”
Section: Introduction (mentioning)
confidence: 99%
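For route (2) in this excerpt, Kernel PCA performs linear PCA in a kernel-induced feature space, which yields a nonlinear reduction of the original design space. A minimal scikit-learn sketch, with the kernel choice, its gamma parameter, and the dimensions as illustrative assumptions:

```python
# Kernel PCA: linear PCA in a kernel-induced feature space gives a
# nonlinear reduction of the design space. Kernel choice and sizes
# are illustrative assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
designs = rng.normal(size=(200, 128))  # placeholder design samples

kpca = KernelPCA(n_components=8, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True)
latent = kpca.fit_transform(designs)            # nonlinear latent coordinates
reconstructed = kpca.inverse_transform(latent)  # approximate designs
```

Setting fit_inverse_transform=True makes scikit-learn learn an approximate pre-image map, since exact inversion from the feature space back to the design space is generally not possible.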