2023
DOI: 10.1109/tvcg.2022.3214420
Deep Hierarchical Super Resolution for Scientific Data

Abstract: Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also…

Cited by 8 publications (3 citation statements)
References 75 publications
“…Dimension‐reduction‐based compressors (e.g., TTHRESH [BRLP19]) reduce data dimensions by techniques such as higher‐order singular vector decomposition (HOSVD). Recently, neural networks have been widely used to reconstruct scientific data, such as autoencoders [LDZ*21, ZGS*22], super‐resolution networks [WGS*23, HZCW22], and implicit neural representations [XTS*22, LJLB21, WHW22, MLL*21, SMB*20]. Yet, most neural compressors do not offer explicit pointwise error control for scientific applications.…”
Section: Related Work
confidence: 99%
“…For super resolution, in which the dimension of the output R^n is larger than that of the input R^m, there are several ways to handle the dimension mismatch between input and output. For example, upsampling layers can be used inside the network to expand the dimension [79]. Alternatively, a resize or interpolation function can be applied to the input data to match its size to that of the output [29, 54, 80].…”
Section: Convolutional Neural Network
confidence: 99%
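The two strategies in the statement above — expanding the resolution inside the network versus pre-resizing the input to the output size — can be sketched in a minimal PyTorch example. This is an illustrative toy, not the architecture of any cited paper; the class names and layer widths are made up for the sketch, and it uses 3D convolutions since the context is volumetric scientific data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleSRNet(nn.Module):
    """Strategy (a): expand the dimension inside the network.

    A learned feature extractor runs at low resolution; an explicit
    upsampling layer then grows the volume to the target size before a
    final convolution refines it.
    """
    def __init__(self, scale=2):
        super().__init__()
        self.feat = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.up = nn.Upsample(scale_factor=scale, mode="trilinear",
                              align_corners=False)
        self.out = nn.Conv3d(8, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.out(self.up(torch.relu(self.feat(x))))

class PreInterpSRNet(nn.Module):
    """Strategy (b): resize the input first so sizes already match.

    The input is trilinearly interpolated to the output resolution, and
    a plain convolutional network learns a residual correction on top of
    the interpolant.
    """
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, kernel_size=3, padding=1))

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale,
                          mode="trilinear", align_corners=False)
        return x + self.body(x)  # residual refinement of the interpolant

lr = torch.randn(1, 1, 8, 8, 8)    # a low-resolution 8^3 volume
print(UpsampleSRNet()(lr).shape)   # torch.Size([1, 1, 16, 16, 16])
print(PreInterpSRNet()(lr).shape)  # torch.Size([1, 1, 16, 16, 16])
```

Both variants map an 8^3 input to a 16^3 output; they differ only in where the resolution change happens, which is exactly the design choice the quoted passage describes.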
“…These methods learn the complex correspondence between low and high-resolution data, and have shown remarkable performance [2, 13–16, 39–41]. Although widely used, several limitations still remain for deep learning-based super-resolution methods. First, current methods learn a deterministic one-to-one mapping between high and low-resolution data pairs.…”
Section: Introduction
confidence: 99%