2017
DOI: 10.1364/ol.42.001640

Image quality recovery in binary ghost imaging by adding random noise

Abstract: When the sampling data of ghost imaging are recorded with fewer bits, i.e., subjected to quantization, a decline in image quality is observed: the fewer bits used, the worse the resulting image. Dithering, which adds suitable random noise to the raw data before quantization, is shown to compensate effectively for this decline in image quality, even in the extreme case of binary sampling. A brief explanation and a parameter optimization of dithering are given.
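To make the dithering idea concrete, here is a minimal numerical sketch of 1-bit (binary) computational ghost imaging with and without dithering. The toy scene, pattern count, noise amplitude, and all function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy object: 32x32 transmission mask (illustrative, not from the paper).
N, M = 32, 20000                      # pixels per side, number of patterns
obj = np.zeros((N, N))
obj[8:24, 8:24] = 1.0                 # bright square on a dark background

patterns = rng.random((M, N * N))     # random speckle patterns, one per row
signal = patterns @ obj.ravel()       # bucket-detector values, shape (M,)

def binarize(x, dither_amp=0.0):
    """1-bit quantization around the mean, optionally dithered with
    uniform noise of half-width dither_amp (in units of the signal std)."""
    noise = dither_amp * np.std(x) * rng.uniform(-1, 1, size=x.shape)
    return (x + noise >= np.mean(x)).astype(float)

def gi_reconstruct(s):
    """Conventional GI: correlate the bucket-signal fluctuations with
    the patterns, <(s - <s>) I(x, y)>."""
    ds = s - s.mean()
    return (ds @ patterns).reshape(N, N) / len(s)

img_plain    = gi_reconstruct(binarize(signal))               # binary, no dither
img_dithered = gi_reconstruct(binarize(signal, dither_amp=1)) # binary + dither
```

With enough patterns, the dithered reconstruction typically shows the square more clearly than the plain binary one, mirroring the recovery effect described in the abstract.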

Cited by 30 publications (12 citation statements). References 11 publications.
“…where δ is the quantizer step size, and the floor operator ⌊·⌋ returns the largest integer that does not exceed its argument [27]. In this paper, δ satisfies the condition that max(I) − min(I) ≤ 2^n δ.…”
Section: Simulations (mentioning)
confidence: 99%
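A minimal sketch of the uniform quantizer this statement describes, assuming the step size δ is chosen at equality, δ = (max(I) − min(I))/2^n; the function name and the clipping of the single top value to the highest level are my own illustrative choices:

```python
import numpy as np

def uniform_quantize(I, n_bits):
    """Uniform n-bit quantizer: the step size delta is chosen so that
    max(I) - min(I) <= 2**n_bits * delta, and each value is mapped with
    the floor operator to one of 2**n_bits levels."""
    delta = (I.max() - I.min()) / (2 ** n_bits)
    levels = np.floor((I - I.min()) / delta)
    # The single value at I.max() would land on level 2**n_bits; clip it.
    return np.clip(levels, 0, 2 ** n_bits - 1).astype(int)

I = np.random.default_rng(1).random(10)
print(uniform_quantize(I, n_bits=2))   # values in {0, 1, 2, 3}
```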
“…In this paper, we show that the object can be recovered using speckles with low bit depths, which means the requirements on camera hardware and image storage space can be reduced. We implement a conventional speckle autocorrelation method to reconstruct the object's image and apply uniform quantization [27], [28] to the raw data to achieve sampling compression of the bit depth. As the bit depth of the quantized speckle increases, the quality of the recovered image increases.…”
Section: Introduction (mentioning)
confidence: 99%
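The pipeline in that statement can be sketched as follows: quantize a speckle image to n bits, then form its autocorrelation via the Wiener–Khinchin theorem (FFT, squared magnitude, inverse FFT). The random speckle stand-in and the bit depths swept are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
speckle = rng.random((64, 64))        # stand-in for a recorded speckle image

def quantize(I, n_bits):
    """Uniform quantization to n_bits, as in the sketch above."""
    delta = (I.max() - I.min()) / 2 ** n_bits
    return np.clip(np.floor((I - I.min()) / delta), 0, 2 ** n_bits - 1)

def autocorrelation(I):
    """Speckle autocorrelation via the Wiener-Khinchin theorem."""
    dI = I - I.mean()
    return np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(dI)) ** 2).real)

for n in (1, 2, 4, 8):
    ac = autocorrelation(quantize(speckle, n))
    print(n, "bits -> autocorrelation peak:", float(ac.max()))
```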
“…As a compromise, a large number of repeated measurements is required to reconstruct a high-quality image [5][6][7], which has become a major drawback preventing GI from practical applications, especially real-time tasks, even with the help of compressive sensing techniques [8]. Considering this, reducing the dynamic range of the detectors, or recording measurements with fewer bits, even 1 bit, would speed up the GI process significantly, since less data must be sampled, transported, stored, and calculated [9]. In fact, this is even more suitable for computational GI [10], where the reference camera is replaced by a spatial light modulator, so that the sampling process of the reference camera is equivalent to the pattern modulation of the spatial light modulator.…”
Section: Introduction (mentioning)
confidence: 99%
“…When we digitize the signal to 1 bit, we create at each output a quantization error: the difference between the original signal and the binarization threshold. This quantization error harms the image quality of GI [9]. Here comes the question: given a binary-sampling GI scenario with certain characteristics (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
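As a toy illustration of the quantization error defined in that statement (the mean-valued threshold and the random signal are assumptions for demonstration only):

```python
import numpy as np

s = np.random.default_rng(3).random(8)    # raw bucket signal (illustrative)
threshold = s.mean()                      # binarization threshold

binary = (s >= threshold).astype(int)     # 1-bit digitized outputs
error = s - threshold                     # quantization error as defined above
print(binary, error)
```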
“…To test the feasibility of the OCGI method, we first run a simulation under ideal conditions without environmental or system noise. To better judge the performance of the various methods, i.e., OCGI, OWGI, WGI, and PGI, we utilize four image-quality indicators, i.e., CNR, MSE, PSNR, and CC [18, 21–23]:…”
(mentioning)
confidence: 99%
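For reference, the four indicators named in that statement (CNR, MSE, PSNR, and CC) can be implemented along the following lines; exact definitions, the peak convention in PSNR, and the region masks vary between papers, so this is only one plausible reading:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between reconstruction x and ground truth y."""
    return np.mean((x - y) ** 2)

def psnr(x, y):
    """Peak signal-to-noise ratio, taking the peak as max of y."""
    return 10 * np.log10(y.max() ** 2 / mse(x, y))

def cc(x, y):
    """Pearson correlation coefficient between the two images."""
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]

def cnr(x, signal_mask):
    """Contrast-to-noise ratio: difference of region means over the root
    sum of the two regions' variances (one common convention)."""
    s, b = x[signal_mask], x[~signal_mask]
    return (s.mean() - b.mean()) / np.sqrt(s.var() + b.var())

# Toy usage: a noisy reconstruction of a bright square.
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
recon = truth + 0.1 * np.random.default_rng(4).standard_normal(truth.shape)
print(psnr(recon, truth), cc(recon, truth), cnr(recon, truth > 0.5))
```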