Redundancy and data compression in recursive estimation (1972)
DOI: 10.1109/tac.1972.1100094

Cited by 22 publications (5 citation statements). References 3 publications.
“…Such a reduction might be very important in systems in which the computing power is limited or too expensive.[20] In fact, all systems with limited resources that are located in a challenging environment and solve complex problems need to compress information.[21] In the nervous system, a computational benefit of information compression is that the transfer and utilization of a huge amount of sensory information would become much easier and less costly.…”
Section: Consciousness and Information Compression (mentioning)
Confidence: 99%
“…Such a decrease might be important for systems in which the computing power is limited or too expensive.[48] In fact, any system with limited resources that is located in a challenging environment and solves complex problems needs to compress information.[49] Information available to our sensory receptors is highly redundant.…”
Section: Information Compression (mentioning)
Confidence: 99%
“…(31) We now consider the limiting value of the covariance of estimation errors, first by increasing the measurement frequency, and then by increasing the observation time. As the measurement frequency increases, each of the observed probabilities n_R/n converges in probability to P_R(t, θ).…”
Section: The Minimum Transform Chi-square Local Processor (mentioning)
Confidence: 99%
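A minimal restatement of the convergence step quoted above, assuming n_R counts how often outcome R occurs in n independent measurements (the indicator notation below is mine, not the citing paper's): by the weak law of large numbers,

```latex
\[
  \frac{n_R}{n}
  \;=\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\{X_i \in R\}
  \;\xrightarrow{\;p\;}\;
  \mathbb{E}\bigl[\mathbf{1}\{X_i \in R\}\bigr]
  \;=\; P_R(t,\theta).
\]
```

Any estimator that is a continuous function of the observed frequencies n_R/n inherits this convergence by the continuous mapping theorem, which is the property invoked for the ML and MTCS estimators in the next excerpt.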
“…Then both the ML and MTCS estimators converge to θ [4], since θ is a continuous function of n_R/n. Alternatively, the discrete-time or continuous-time form for the information matrix [(30) and (31)] shows that, for a fixed observation time, doubling the frequency of (independent) measurements doubles the information matrix and, hence, … The off-diagonal terms in the integrand are odd functions and integrate to zero. Thus, it is readily seen that I(T_2, θ) > I(T_1, θ) for T_2 > T_1, so that increasing the observation time will reduce the Cramér-Rao bound on the estimation errors.…”
Section: The Minimum Transform Chi-square Local Processor (mentioning)
Confidence: 99%
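A hedged sketch of the Fisher-information argument in the excerpt above, assuming only the standard additivity of Fisher information over independent measurements (equations (30) and (31) of the citing paper are not reproduced here). Doubling the number of independent measurements over a fixed observation time gives

```latex
\[
  I_{2n}(\theta) \;=\; 2\,I_n(\theta)
  \qquad\Longrightarrow\qquad
  \operatorname{Cov}(\hat{\theta}) \;\succeq\; I_{2n}(\theta)^{-1}
  \;=\; \tfrac{1}{2}\,I_n(\theta)^{-1},
\]
```

and, if the information accumulates as a time integral with a positive-semidefinite integrand,

```latex
\[
  I(T,\theta) \;=\; \int_{0}^{T} J(t,\theta)\,\mathrm{d}t,
  \quad J(t,\theta) \succeq 0
  \quad\Longrightarrow\quad
  I(T_2,\theta) \;\succeq\; I(T_1,\theta)
  \ \text{ for } T_2 > T_1,
\]
```

so the Cramér-Rao bound I(T, θ)^{-1} can only shrink as the observation time grows. The symbol J(t, θ) for the per-unit-time information density is my notation, not taken from the citing paper.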