1980
DOI: 10.1109/proc.1980.11745

Ordering techniques for facsimile coding: A review

Cited by 25 publications (6 citation statements)
References 6 publications
“…This gives a better result than simple filtering would do, leading to improved sharpness and reduced risk of Moiré disturbance. Tests have been carried out on a representative halftone test image, achieving compression rates of 16 to 22. The image quality is preserved at normal viewing distance, though close examination reveals a slight degradation in sharpness and one slightly visible artifact.…”
Section: Discussion
confidence: 99%
“…This is the opposite conclusion* from that given in our earlier paper, which used a different source material.¹ For the pictures used in Ref. 1, we had found that ordering schemes with two sets of codes resulted in 10 to 18 percent higher entropies than the entropies obtainable with one set of codes.…”
Section: * This Algorithm Is Related To the One Proposed By Preuß (R
confidence: 92%
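The comparison in the statement above is made in terms of the entropy of the symbol stream each ordering scheme produces. As a loose illustration of that metric only (not of the ordering schemes themselves), the following sketch computes first-order entropy in bits per symbol; the function name and the sample run-length data are assumptions for illustration, and the assumed "two sets" split is purely hypothetical.

```python
from collections import Counter
from math import log2

def first_order_entropy(symbols):
    """Empirical first-order entropy (bits/symbol) of a symbol stream.

    Only the comparison metric mentioned in the quoted statement; the
    ordering schemes themselves are not reproduced here.
    """
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical example: the same run lengths coded with one set of codes,
# versus split into two separate streams with their own code sets.
one_set = [3, 7, 3, 1, 7, 3, 2, 7, 3, 3]
two_sets = ([3, 3, 3, 2, 3, 3], [7, 1, 7, 7])   # assumed split, illustration only

h_one = first_order_entropy(one_set)
# Per-symbol entropy averaged over both streams, weighted by stream length.
h_two = sum(len(s) * first_order_entropy(s) for s in two_sets) / len(one_set)
print(f"one set: {h_one:.3f} bits/symbol, two sets: {h_two:.3f} bits/symbol")
```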
“…The predictor varies from picture to picture; however, the variation is not great, as shown in our earlier paper.¹ The predictor for a typical picture [CCITT picture 2 (Fig. 1b)] is shown in Table I.…”
Section: Prediction Algorithm
confidence: 99%
“…i.e., where t > kT and k = 1, 2, 3, … (15). In general, T = 1 or 2 iterations is sufficient for most input data distributions. Note that when a new period starts, initial vitality of each node normally will not equal 1/M(0) as in the first period.…”
Section: B. Redistribution of Learning Rates
confidence: 99%
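The statement above describes a periodic reset: a new redistribution period begins once t exceeds kT, and from the second period onward node vitalities are generally no longer the uniform 1/M(0) they start with. The sketch below shows only that bookkeeping under stated assumptions; the excerpt does not give the vitality update rule, so the renormalization step, the function names, and the loop body are hypothetical.

```python
def start_new_period(vitality):
    """Renormalize node vitalities at the start of a new period.

    Assumption: vitalities are kept as a distribution summing to 1, so after
    the first period they are generally no longer the uniform 1/M(0).
    """
    total = sum(vitality)
    return [v / total for v in vitality]

M0 = 4                       # initial number of nodes, M(0)
vitality = [1.0 / M0] * M0   # first period starts uniform at 1/M(0) per node
T = 2                        # redistribution period; T = 1 or 2 usually suffices
k = 1                        # index of the next period boundary

for t in range(1, 8):
    # ... per-iteration learning updates would modify `vitality` here ...
    if t > k * T:            # quoted condition: t > kT, k = 1, 2, 3, ...
        vitality = start_new_period(vitality)
        k += 1
```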