2022
DOI: 10.1109/tbdata.2021.3092174
A Multi-Branch Decoder Network Approach to Adaptive Temporal Data Selection and Reconstruction for Big Scientific Simulation Data

Cited by 4 publications (3 citation statements)
References 35 publications
“…The third use of autoencoders is data reduction. AE-SZ [26] and the multi-branch decoder network [44] demonstrate the effectiveness of autoencoders for scientific data reduction. However, existing autoencoder-based works assume every data element is equally important, without considering scientists' interest when generating the latent representation.…”
Section: Latent Representations in Scientific Visualization
confidence: 97%
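To make the multi-branch idea concrete, the following is a minimal sketch of an autoencoder whose shared encoder produces a single latent code and whose several decoder branches each reconstruct a different target (e.g., different time steps) from that code. This is an illustration of the general technique, not the architecture of [44]; the layer sizes, names, and toy data are all assumptions.

```python
# Minimal sketch (not the authors' implementation) of an autoencoder with a
# multi-branch decoder for scientific data reduction: a shared encoder
# compresses a flattened field into a latent vector, and each decoder branch
# reconstructs one target (e.g., one time step) from the same latent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchAE(nn.Module):
    def __init__(self, in_dim=4096, latent_dim=64, num_branches=3):
        super().__init__()
        # Shared encoder: flattened field values -> compact latent code
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # One decoder branch per reconstructed target
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                          nn.Linear(512, in_dim))
            for _ in range(num_branches)
        ])

    def forward(self, x):
        z = self.encoder(x)                        # z is the reduced data
        return [dec(z) for dec in self.decoders]   # one reconstruction per branch

# Toy training step on random data standing in for simulation fields.
model = MultiBranchAE()
x = torch.randn(8, 4096)                            # 8 flattened 16^3 fields
targets = [torch.randn(8, 4096) for _ in range(3)]  # per-branch targets
recons = model(x)
loss = sum(F.mse_loss(r, t) for r, t in zip(recons, targets))
loss.backward()
```

The latent code `z` is what would be stored as the reduced representation; because all branches share it, adding a branch adds decoding capacity without growing the stored data.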
“…As machine learning techniques become increasingly ubiquitous in scientific visualization and analysis, latent representations generated by autoencoders have attracted great attention from researchers in recent years. Latent representations have been shown to retain essential information from the original data; they can be used for similarity analysis [11,12,18,25,28], generation of visualizations [6], synthesis of simulations [22,42,43], and data reduction [26,44], and have been applied to multivariate volumetric data [28], streamlines and stream surfaces [18], isosurfaces [12], and particles [25].…”
Section: Introduction
confidence: 99%
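As one concrete instance of the similarity-analysis use mentioned above, the sketch below encodes two fields with the same encoder and compares their latent codes by cosine similarity rather than comparing raw values. The encoder here is an untrained stand-in; its architecture and dimensions are illustrative assumptions.

```python
# Minimal sketch of latent-space similarity analysis: two fields are encoded
# by a shared encoder, and similarity is measured on the latent codes.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(
    torch.nn.Linear(4096, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 64),
)

a = torch.randn(1, 4096)   # flattened field from simulation run A
b = torch.randn(1, 4096)   # flattened field from simulation run B

with torch.no_grad():
    za, zb = encoder(a), encoder(b)

# High cosine similarity in latent space suggests the two fields are similar.
sim = F.cosine_similarity(za, zb).item()
print(f"latent cosine similarity: {sim:.3f}")
```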
“…Dimension-reduction-based compressors (e.g., TTHRESH [BRLP19]) reduce data dimensionality with techniques such as higher-order singular value decomposition (HOSVD). Recently, neural networks have been widely used to reconstruct scientific data, including autoencoders [LDZ*21, ZGS*22], super-resolution networks [WGS*23, HZCW22], and implicit neural representations [XTS*22, LJLB21, WHW22, MLL*21, SMB*20]. Yet most neural compressors do not offer explicit pointwise error control for scientific applications.…”
Section: Related Work
confidence: 99%
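For context on the HOSVD technique named above, here is a minimal NumPy sketch of truncated HOSVD compression: each mode unfolding is factored by an SVD, the leading left singular vectors form the factor matrices, and projecting the tensor onto them yields a small core tensor. This is a generic sketch, not TTHRESH itself (which additionally quantizes and bit-plane-encodes the core); the ranks and tensor sizes are assumptions.

```python
# Minimal truncated-HOSVD sketch: compress a 3D field into a small core
# tensor plus one factor matrix per mode, then reconstruct.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M: multiply the mode-n unfolding by M, refold."""
    rest = [s for i, s in enumerate(T.shape) if i != mode]
    res = (M @ unfold(T, mode)).reshape([M.shape[0]] + rest)
    return np.moveaxis(res, 0, mode)

def hosvd_compress(X, ranks):
    """Factor matrices from truncated SVDs of each unfolding, plus the core."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)   # project onto each factor basis
    return core, factors

def hosvd_reconstruct(core, factors):
    X = core
    for mode, U in enumerate(factors):
        X = mode_dot(X, U, mode)
    return X

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32, 32))      # stand-in for a simulation field
core, factors = hosvd_compress(X, ranks=(8, 8, 8))
Xh = hosvd_reconstruct(core, factors)
stored = core.size + sum(U.size for U in factors)
# Note: random noise is the worst case; real simulation fields are far more
# compressible at the same ranks.
print(f"compression ratio: {X.size / stored:.1f}x, "
      f"rel. error: {np.linalg.norm(X - Xh) / np.linalg.norm(X):.3f}")
```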