2020
DOI: 10.1145/3414685.3417786

A reduced-precision network for image reconstruction

Abstract: Neural networks are often quantized to use reduced-precision arithmetic, as this greatly reduces their storage and computational costs. The approach is commonly used in image classification and natural language processing applications. However, using a quantized network for the reconstruction of HDR images can lead to a significant loss in image quality. In this paper, we introduce QW-Net, a neural network for image reconstruction in which close to 95% of the computations can be implemented with 4-bit integers.
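To make "4-bit integer computation" concrete, here is a minimal sketch of symmetric uniform quantization, the standard building block behind reduced-precision inference. This is an illustrative example only, not the specific quantization scheme used in QW-Net; the function names and the per-tensor scaling choice are assumptions for the sketch.

```python
import numpy as np

def quantize_symmetric(x, bits=4):
    """Map float values onto a signed b-bit integer grid (illustrative only)."""
    # Largest representable magnitude for a signed b-bit integer: 2^(b-1) - 1.
    qmax = 2 ** (bits - 1) - 1  # 7 when bits=4
    # Per-tensor scale so the largest |value| maps to qmax.
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:
        scale = 1.0  # all-zero input; any scale works
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the integer grid."""
    return q.astype(np.float32) * scale

# Example: quantize a small tensor to 4 bits and measure the error.
x = np.array([0.9, -0.5, 0.1, 0.0], dtype=np.float32)
q, s = quantize_symmetric(x, bits=4)
x_hat = dequantize(q, s)
# Quantization error is bounded by half a step, i.e. scale / 2.
```

The per-tensor rounding error here (at most `scale / 2`) is what a quantized classifier tolerates easily but an HDR reconstruction network does not, which is why the paper's feature-extraction network is designed to be resilient to such errors.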

Cited by 24 publications (25 citation statements)
References 26 publications
“…Li et al [98] applied neural architecture search to find efficient architectures through combining the knowledge of multiple intermediate features extracted from the heavyweight model. Thomas et al [140] presented QW-Net for image reconstruction, where about 95% of the computations can be implemented with 4-bit integers. We believe there is an opportunity to incorporate these techniques into DL models to improve training efficiency for large-scale scientific data analysis and visualization.…”
Section: Research Opportunities
confidence: 99%
“…Hasselgren et al [HMS*20] and Munkberg et al [MH20] used the hierarchical kernel prediction architecture to denoise the re‐sampled Monte Carlo images and the samples‐splatted layers, respectively, and they achieved an interactive speed. Besides, Thomas et al [TVLF20] also utilized the hierarchical architecture with a feature extraction network, which is resilient to quantization errors, to explore the feasibility of a heavily quantized network for image reconstruction. Unlike them directly using the kernel prediction architecture, our approach extends it to real‐time denoising with 1‐spp input by operating on the encoding of the kernel map to reduce neural network inference overhead.…”
Section: Related Work
confidence: 99%
“…Reduced precision computing is a technique where smaller data types are used to reduce area usage, execution time, and power consumption within noise-tolerant applications without losing information [12]. It has been widely applied in different application domains, especially, in deep learning applications [13,14]. Existing studies propose the use of reduced precision also for the deconvolution kernel [15], apply mixed precision to other steps of the radio-astronomical imaging acquisition pipeline, e.g., correlator [16], or other radio-astronomy domains, e.g., computation of tomographic reconstructors [17].…”
Section: Introduction
confidence: 99%