Abstract: Neural networks are often quantized to use reduced-precision arithmetic, as this greatly reduces their storage and computational costs. This approach is commonly used in image classification and natural language processing applications. However, using a quantized network for the reconstruction of HDR images can lead to a significant loss in image quality. In this paper, we introduce QW-Net, a neural network for image reconstruction in which close to 95% of the computations can be implemented with 4-bit integers.
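The low-bit integer arithmetic the abstract above refers to can be illustrated with a minimal sketch of symmetric 4-bit quantization. This is a generic scheme, not QW-Net's actual quantization pipeline: weights or activations are scaled into the signed 4-bit range [-8, 7], rounded, and later rescaled back to floats, which bounds the per-element error by half a quantization step.

```python
import numpy as np

def quantize_int4(x, axis=None):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7]."""
    scale = np.max(np.abs(x), axis=axis, keepdims=True) / 7.0
    scale = np.maximum(scale, 1e-12)  # guard against all-zero inputs
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Map 4-bit integers back to approximate float values."""
    return q.astype(np.float32) * scale

x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int4(x)
x_hat = dequantize_int4(q, s)
# Rounding error is at most half a quantization step (0.5 * scale).
err = np.max(np.abs(x - x_hat))
```

Per-tensor scaling, as shown here, is the simplest choice; per-channel scales (via the `axis` argument) usually recover more accuracy at the same bit width.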
“…Li et al [98] applied neural architecture search to find efficient architectures through combining the knowledge of multiple intermediate features extracted from the heavyweight model. Thomas et al [140] presented QW-Net for image reconstruction, where about 95% of the computations can be implemented with 4-bit integers. We believe there is an opportunity to incorporate these techniques into DL models to improve training efficiency for large-scale scientific data analysis and visualization.…”
Since 2016, we have witnessed the tremendous growth of artificial intelligence+visualization (AI+VIS) research. However, existing survey papers on AI+VIS focus on visual analytics and information visualization, not scientific visualization (SciVis). In this paper, we survey related deep learning (DL) works in SciVis, specifically in the direction of DL4SciVis: designing DL solutions for solving SciVis problems. To stay focused, we primarily consider works that handle scalar and vector field data but exclude mesh data. We classify and discuss these works along six dimensions: domain setting, research task, learning type, network architecture, loss function, and evaluation metric. The paper concludes with a discussion of the remaining gaps to fill along the discussed dimensions and the grand challenges we need to tackle as a community. This state-of-the-art survey guides SciVis researchers in gaining an overview of this emerging topic and points out future directions to grow this research.
“…Hasselgren et al [HMS*20] and Munkberg et al [MH20] used the hierarchical kernel-prediction architecture to denoise re-sampled Monte Carlo images and sample-splatted layers, respectively, achieving interactive speeds. In addition, Thomas et al [TVLF20] utilized a hierarchical architecture with a feature-extraction network that is resilient to quantization errors, exploring the feasibility of a heavily quantized network for image reconstruction. Unlike these approaches, which use the kernel-prediction architecture directly, our approach extends it to real-time denoising with 1-spp input by operating on an encoding of the kernel map to reduce neural-network inference overhead.…”
Real-time Monte Carlo denoising aims to remove severe noise under low samples per pixel (spp) within a strict time budget. Recently, kernel-prediction methods have used a neural network to predict each pixel's filtering kernel and have shown great potential for removing Monte Carlo noise. However, their heavy computation overhead blocks these methods from real-time applications. This paper expands the kernel-prediction method and proposes a novel approach to denoise very low spp (e.g., 1-spp) Monte Carlo path-traced images at real-time frame rates. Instead of using the neural network to directly predict the kernel map, i.e., the complete weights of each per-pixel filtering kernel, we predict an encoding of the kernel map, followed by a high-efficiency decoder with unfolding operations for a high-quality reconstruction of the filtering kernels. The kernel-map encoding yields a compact single-channel representation of the kernel map, which can significantly reduce the kernel-prediction network's throughput. In addition, we adopt a scalable kernel-fusion module to improve denoising quality. The proposed approach preserves the denoising quality of kernel-prediction methods while roughly halving their denoising time for 1-spp noisy inputs. Compared with the recent neural bilateral grid-based real-time denoiser, our approach benefits from the high parallelism of kernel-based reconstruction and produces better denoising results at equal time.
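The reconstruction step shared by the kernel-prediction methods above can be illustrated with a small NumPy sketch. This is a generic illustration under stated assumptions, not the paper's implementation: the per-pixel kernel weights here are random placeholders standing in for network predictions, and the loop over kernel taps mimics the unfolding operation that real denoisers run with optimized GPU kernels.

```python
import numpy as np

def apply_per_pixel_kernels(noisy, kernels):
    """Filter an image with a distinct k x k kernel at every pixel.

    noisy:   (H, W) grayscale image
    kernels: (H, W, k*k) per-pixel kernel weights (normalized below)
    """
    H, W = noisy.shape
    k = int(np.sqrt(kernels.shape[-1]))
    r = k // 2
    # Normalize so each per-pixel kernel sums to 1 (a convex combination).
    w = kernels / np.maximum(kernels.sum(axis=-1, keepdims=True), 1e-8)
    padded = np.pad(noisy, r, mode="reflect")
    out = np.zeros_like(noisy)
    # Accumulate one shifted copy of the image per kernel tap ("unfolding").
    for i, (dy, dx) in enumerate(
        (dy, dx) for dy in range(k) for dx in range(k)
    ):
        out += w[..., i] * padded[dy:dy + H, dx:dx + W]
    return out

H, W, k = 32, 32, 5
noisy = np.random.rand(H, W).astype(np.float32)
kernels = np.random.rand(H, W, k * k).astype(np.float32)  # placeholder for network output
denoised = apply_per_pixel_kernels(noisy, kernels)
```

Because the weights are normalized, each output pixel is a convex combination of its neighborhood; the encoding/decoding scheme in the abstract compresses the (H, W, k*k) kernel map into a single channel to cut the network's output cost.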
“…Reduced precision computing is a technique in which smaller data types are used to reduce area usage, execution time, and power consumption in noise-tolerant applications without losing information [12]. It has been widely applied in different application domains, especially in deep learning [13,14]. Existing studies propose the use of reduced precision for the deconvolution kernel as well [15], apply mixed precision to other steps of the radio-astronomical imaging acquisition pipeline, e.g., the correlator [16], or to other radio-astronomy domains, e.g., the computation of tomographic reconstructors [17].…”
Radio telescopes produce large volumes of data that need to be processed to obtain high-resolution sky images. This is a complex task that requires computing systems providing both high performance and high energy efficiency. Hardware accelerators such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) offer these two features and are thus an appealing option for this application. Most HPC (High-Performance Computing) systems operate in double precision (64-bit) or single precision (32-bit), and radio-astronomical imaging is no exception. With reduced-precision computing, smaller data types (e.g., 16-bit) are used to improve energy efficiency and throughput in noise-tolerant applications. We demonstrate that reduced precision can also be used to produce high-quality sky images. To this end, we analyze the gridding component (Image-Domain Gridding) of the widely used WSClean imaging application. Gridding is typically one of the most time-consuming steps in the imaging process and, therefore, an excellent candidate for acceleration. We identify the minimum required exponent and mantissa bits for a custom floating-point data type. Then, we propose the first custom floating-point accelerator on a Xilinx Alveo U50 FPGA using High-Level Synthesis. Our reduced-precision implementation improves throughput and energy efficiency by 1.84x and 2.03x, respectively, compared to the single-precision floating-point baseline on the same FPGA. Our solution is also 2.12x faster and 3.46x more energy-efficient than an Intel i9 9900k CPU (Central Processing Unit) and keeps up in throughput with an AMD RX 550 GPU.
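The search for minimum exponent and mantissa bits described above can be emulated in software by rounding values to a candidate format and measuring the error. The sketch below is a rough illustration, not the paper's FPGA data type: it ignores subnormals, NaN/infinity handling, and hardware rounding-mode details, and the bit widths passed in are arbitrary examples.

```python
import math

def round_to_custom_float(x, exp_bits, man_bits):
    """Round x to the nearest value representable with the given
    exponent and mantissa widths (sketch; ignores subnormals/NaN)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    # Quantize the mantissa to man_bits stored bits (plus the implicit
    # leading bit); in frexp scaling that is steps of 2**-(man_bits+1).
    step = 2.0 ** -(man_bits + 1)
    m_q = round(m / step) * step
    # Crudely clamp the exponent to the representable range.
    e_max = 2 ** (exp_bits - 1)
    e = max(-e_max + 2, min(e, e_max))
    return math.ldexp(m_q, e)

# Emulate a half-precision-like format (5 exponent, 10 mantissa bits)
# and check the relative rounding error stays within 2**-man_bits.
y = round_to_custom_float(0.1, 5, 10)
```

Sweeping `exp_bits` and `man_bits` over a representative gridding workload and checking the resulting image quality is the kind of analysis that identifies the minimum viable format before committing it to hardware.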