2010
DOI: 10.1086/651281

Noise and Bias In Square-Root Compression Schemes

Abstract: We investigate data compression schemes for proposed all-sky diffraction-limited visible/NIR sky surveys aimed at the dark energy problem. We show that lossy square-root compression to 1 bit of noise per pixel, followed by standard lossless compression algorithms, reduces the images to 2.5-4 bits per pixel, depending primarily upon the level of cosmic-ray contamination of the images. Compression to this level adds noise equivalent to a ≤ 10% penalty in observing time. We derive an analytic correction to flux bias…
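The two-stage pipeline the abstract describes, lossy square-root quantization followed by a standard lossless coder, can be roughly illustrated as follows. This is a sketch with assumed parameters (a synthetic pure-Poisson sky image with gain 1 and no read noise, gzip as the lossless stage), not the paper's actual test data or measured rates:

```python
import gzip
import numpy as np

# Synthetic noise-dominated image: Poisson sky background of ~100 e-/pixel
# (illustrative values only).
rng = np.random.default_rng(1)
img = rng.poisson(100.0, size=(512, 512)).astype(np.float32)

# Lossy stage: square-root codec with step parameter b = 1, i.e. the
# quantization step equals the local noise sigma.  For pure Poisson data,
# c = (2/b) * sqrt(x) has slope 1/(b*sigma), so rounding c quantizes x
# in steps of b*sigma at every signal level.
b = 1.0
quant = np.round((2.0 / b) * np.sqrt(img)).astype(np.uint8)

# Lossless stage: a standard entropy coder (gzip here) on each version.
raw_bits = 8 * len(gzip.compress(img.tobytes())) / img.size
sq_bits = 8 * len(gzip.compress(quant.tobytes())) / img.size
print(f"float32 + gzip:    {raw_bits:.1f} bits/pixel")
print(f"sqrt codec + gzip: {sq_bits:.1f} bits/pixel")
```

On noise-dominated data like this, the quantized image compresses to a few bits per pixel, while the floating-point original costs several times more.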

Cited by 7 publications (24 citation statements). References 13 publications.
“…techniques (such as gzip) than do the floating-point originals (Gaztañaga et al. 2001; Watson 2002; White & Greenfield 1999; Pence et al. 2009; Bernstein et al. 2009). In the Δ = 0.5σ representation, after lossless compression, storage and transmission of the image "costs" only a few bits per noise-dominated pixel.…”
Section: Discussion
confidence: 99%
“…We use the compression scheme, including bias correction, as described in Bernstein et al (2010). We provide a brief description here.…”
Section: Compression Scheme
confidence: 99%
“…The codec process has a similar effect on the data to that of read noise in the readout electronics or Poisson statistics. Bernstein et al. (2010) refine the basic square-root codec in equation (2) with: choices for A, B, and C which maintain constant σ/N_step at any signal level for given detector gain and read noise; a prescription for slight departures from (2) to produce a codec with uniform behavior of N_step as the signal increases; and a correction to the decompressed values which eliminates small biases in the mean signal introduced by the codec process. We will primarily focus on an implementation of the square-root compression algorithm that yields b = 1, which we naively expect to provide the best compromise between our desire for a high compression level and our desire for low image degradation, but we will also do some tests with a coarser b = 0.71 and a finer b = 1.41 level of compression.…”
Section: Compression Scheme
confidence: 99%
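The square-root codec summarized in the excerpt above can be sketched as follows. This is a minimal illustration with assumed gain and read-noise values and a second-order mean-bias correction derived here; it is not the exact Bernstein et al. (2010) prescription for A, B, and C:

```python
import numpy as np

GAIN = 2.0        # e-/ADU  (assumed detector value)
READ_NOISE = 5.0  # electrons (assumed detector value)

def sqrt_compress(x, b=1.0):
    """Encode pixel values x (ADU) with quantization steps of b*sigma.

    f(x) = (2/b) * sqrt(gain*x + read_noise^2) has slope 1/(b*sigma(x)),
    so rounding f(x) quantizes x in steps of b times the local noise at
    every signal level.
    """
    return np.round((2.0 / b) * np.sqrt(GAIN * x + READ_NOISE**2)).astype(np.int32)

def sqrt_decompress(c, b=1.0):
    """Invert the codec and subtract the analytic quantization bias.

    The inverse is quadratic in c, so a uniform rounding error e
    (variance 1/12) biases the mean by g''(c)/2 * Var(e) = b^2/(48*gain)
    ADU; subtracting that term removes the mean bias, in the spirit of
    the bias correction the excerpt describes.
    """
    x = ((b * c / 2.0) ** 2 - READ_NOISE**2) / GAIN
    return x - b**2 / (48.0 * GAIN)

rng = np.random.default_rng(0)
pixels = rng.uniform(50.0, 5000.0, size=100_000)   # ADU, wide signal range
restored = sqrt_decompress(sqrt_compress(pixels))
err = restored - pixels
# per-pixel error stays within ~half a quantization step (b*sigma/2);
# the mean error is near zero thanks to the bias correction
```

The same function with b = 0.71 or b = 1.41 reproduces the coarser and finer quantization levels the excerpt tests: halving b doubles the number of codes per noise sigma at the cost of a larger compressed image.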