2023
DOI: 10.1016/j.jcp.2022.111801
Multiresolution convolutional autoencoders

Cited by 13 publications (6 citation statements) · References 33 publications
“…Furthermore, AEs automatically learn relevant features from the data without the need for manual feature engineering, which can save significant time and effort in pre-processing. This encourages the AEs to capture the crucial characteristics of the input data in their encoding, thereby learning a meaningful representation of the data in the latent code (Liu et al 2023).…”
Section: List of Symbols
confidence: 99%
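The statement above describes the core autoencoder idea: compress the input to a low-dimensional latent code and train by reconstruction error, so the code captures the data's essential structure. A deliberately tiny sketch of that idea, using a linear encoder/decoder rather than the convolutional architecture of the cited paper (all names and dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of a 16-dim signal that really lives near a 2-dim manifold.
z_true = rng.normal(size=(200, 2))
X = np.tanh(z_true @ rng.normal(size=(2, 16)))

# Minimal linear autoencoder: encode to a 2-dim latent code, decode back.
d, k = X.shape[1], 2
W_enc = 0.1 * rng.normal(size=(d, k))   # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))   # decoder weights

def reconstruct(X):
    Z = X @ W_enc          # latent code: the learned representation
    return Z, Z @ W_dec    # reconstruction of the input

mse_init = np.mean((X - reconstruct(X)[1]) ** 2)

# Train by gradient descent on mean-squared reconstruction error.
lr = 0.05
for _ in range(500):
    Z, X_hat = reconstruct(X)
    err = (X_hat - X) / len(X)
    grad_dec = Z.T @ err                 # d(loss)/dW_dec
    grad_enc = X.T @ (err @ W_dec.T)     # d(loss)/dW_enc
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z, X_hat = reconstruct(X)
mse_final = np.mean((X - X_hat) ** 2)
```

After training, `mse_final` is well below `mse_init`: the 2-dim code `Z` is learned automatically from reconstruction pressure alone, with no hand-engineered features.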
“…Beyond these feature-learning applications, AEs foster a deeper understanding of data through the creation of meaningful representations. They also find practical utility in semantic embedding for NLP and information-retrieval tasks, and in image and signal compression, where they effectively reduce file sizes without compromising quality (Liu et al 2023). Furthermore, AEs contribute to privacy-preservation techniques, such as differential privacy, by protecting sensitive data while enabling analysis and insights.…”
Section: List of Symbols
confidence: 99%
“…The model is first trained using only low-resolution flow data, and then the pre-trained weights are transferred when training with high-resolution data sets. Transfer learning over multiple levels of spatial resolution of the flow field can improve the accuracy of super-resolution reconstruction [93], which is also related to multi-fidelity learning [94]. U-Net-based model (illustrated in Fig.…”
Section: Supervised Learning
confidence: 99%
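One simple way to realize the weight transfer described above is to interpolate weights learned on a coarse grid onto a finer grid before fine-tuning. This is a hypothetical sketch of that warm-start step, not the cited U-Net model; all names and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend W_coarse was pre-trained on low-resolution (8-point) flow fields:
# each column is one latent direction sampled on the coarse grid.
coarse_dim, fine_dim, latent = 8, 16, 3
W_coarse = rng.normal(size=(coarse_dim, latent))

# Transfer: linearly interpolate each latent direction onto the fine grid,
# giving an initialization for the high-resolution model's first layer.
x_coarse = np.linspace(0.0, 1.0, coarse_dim)
x_fine = np.linspace(0.0, 1.0, fine_dim)
W_fine_init = np.stack(
    [np.interp(x_fine, x_coarse, W_coarse[:, j]) for j in range(latent)],
    axis=1,
)  # shape (16, 3)
```

The fine-grid model then starts from structure already learned at low resolution and only has to refine it on the (more expensive) high-resolution data, rather than training from scratch.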
“…The classical POD [21,22] is limited to linear projections of the data onto the compressed space, causing a general loss of information in highly complex physics. AEs [23][24][25][26], the neural-network counterpart of POD, offer a better alternative, allowing nonlinear compression and recovery. This work contributes generally to the question of dimensionality reduction of large datasets defined on irregular spatial domains.…”
Section: Introduction
confidence: 99%
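The contrast drawn above hinges on POD being a purely linear projection: it is exact when the snapshot data truly has low linear rank, but it cannot capture curved (nonlinear) manifolds, which is what motivates autoencoders. A minimal POD-by-SVD sketch on synthetic rank-3 snapshot data (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Snapshot matrix: 100 snapshots of a 64-dim field with exact rank-3 structure.
U_true = np.linalg.qr(rng.normal(size=(64, 3)))[0]
A = rng.normal(size=(100, 3)) @ U_true.T

# POD via truncated SVD: keep the leading r spatial modes.
r = 3
U, s, Vt = np.linalg.svd(A, full_matrices=False)
modes = Vt[:r]                 # POD modes (one per row)
coeffs = A @ modes.T           # compressed representation, shape (100, r)
A_hat = coeffs @ modes         # linear reconstruction

rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

Because the data here is exactly rank 3, `rel_err` is at machine precision; on data lying on a curved manifold, no choice of linear modes achieves this, and a nonlinear AE can compress further for the same error.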