2007
DOI: 10.1109/tit.2006.890728

On the Feedback Capacity of Power-Constrained Gaussian Noise Channels With Memory

Abstract: For a stationary additive Gaussian-noise channel with a rational noise power spectrum of a finite order L, we derive two new results for the feedback capacity under an average channel input power constraint. First, we show that a very simple feedback-dependent Gauss-Markov source achieves the feedback capacity, and that Kalman-Bucy filtering is optimal for processing the feedback. Based on these results, we develop a new method for optimizing the channel inputs for achieving the Cover-Pombra block-len…

Cited by 67 publications (131 citation statements)
References 35 publications
“…We show that the results on feedback capacity in [11] (and parallel results in [15]) and the SNR-constrained stabilization results of [8] are linked for the case of an MA1 channel and a relative-degree-one, minimum-phase plant with a single unstable pole at z = λ. In this case stabilization within an SNR constraint is possible precisely when the feedback capacity of the channel, C_FB (as in (2)), satisfies C_FB > log₂(|λ|) (3). Moreover, if stabilization is possible, it can be achieved by a linear scheme.…”
Section: Introduction (mentioning)
confidence: 67%
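The quoted threshold ties stabilizability to the channel's feedback capacity. Below is a minimal sketch of that check in Python; the pole magnitude used is hypothetical, since the actual pole value is elided in the quote.

```python
import math

def min_feedback_capacity_bits(pole_magnitude: float) -> float:
    """Smallest feedback capacity (bits/channel use) compatible with
    stabilizing a plant whose single unstable pole has the given
    magnitude, per the quoted condition C_FB > log2(|pole|)."""
    return math.log2(pole_magnitude)

# Hypothetical pole magnitude of 2.0, for illustration only.
print(min_feedback_capacity_bits(2.0))  # prints 1.0
```

As the pole magnitude approaches 1 the required capacity approaches zero, matching the intuition that a nearly stable plant needs very little feedback information.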
“…These results focus primarily on autoregressive noise coloring, and use the linear coding structure of [12] to provide a lower bound on the feedback channel capacity. The authors of [15] discuss Kalman-Bucy filtering in relation to feedback communication over Gaussian channels with memory.…”
Section: Introduction (mentioning)
confidence: 99%
“…Subsequent works included the formulation of capacity as an infinite-horizon dynamic program for channels where the state is a function of the input [16], Markov channels [17] and Gaussian channels with memory [18]. To apply known algorithms from DP, such as Value and Policy iteration, quantization is required and, therefore, only lower bounds were derived in the above papers.…”
Section: Introduction (mentioning)
confidence: 99%
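To illustrate the quantization step mentioned above: once the state space of the dynamic program is quantized, standard value iteration applies to the resulting finite MDP. A generic sketch, with a made-up toy transition model rather than any channel from the cited papers:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-9):
    """Value iteration on a finite (e.g., quantized) MDP.

    P[s][a] is a list of (probability, next_state) pairs and R[s][a]
    is the immediate reward; returns the optimal value function.
    """
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [
            max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in range(len(P[s]))
            )
            for s in range(n)
        ]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new

# Toy single-state MDP with one action and reward 1 per step:
# the value converges to 1 / (1 - gamma) = 10.
print(value_iteration([[[(1.0, 0)]]], [[1.0]])[0])
```

The quantization is what makes this tractable, and it is also why the cited papers obtain only lower bounds: the quantized problem under-approximates the continuous one.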
“…For channels with memory, bounds have been developed for the feedback capacity [3], [4], [5], [6], [7]. In [8], the optimal feedback source distribution is derived in terms of a state-space channel representation and Kalman filtering. The maximal information rate for stationary sources is derived in an analytically explicit form in [9].…”
Section: Introduction (mentioning)
confidence: 99%
“…The delayed-feedback information rate of the original Gaussian noise channel equals the instantaneous-feedback information rate of the derived state-space channel. By generalizing the methodology and results derived in [9], [8], we show that 1) a feedback-dependent Gauss-Markov source is optimal for achieving the delayed-feedback capacity, and the necessary Markov memory length equals the larger of a) the moving average (MA) noise spectral order, and b) the sum of the feedback delay and the autoregressive (AR) noise spectral order; 2) a state estimator (Kalman-Bucy filter) for the derived state-space channel model is optimal for processing the (delayed) feedback information, and the solution of its steady-state Riccati equation delivers the maximal information rate for stationary sources. Notation: Random variables are denoted by upper-case letters, e.g., X t , and their realizations are denoted using lower case letters, e.g., x t .…”
Section: Introduction (mentioning)
confidence: 99%
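The steady-state Riccati equation mentioned in the quote can be solved by fixed-point iteration. A scalar sketch, assuming a generic state-space pair x[t+1] = a·x[t] + w[t], y[t] = c·x[t] + v[t] with noise variances q and r; the parameters below are illustrative, not taken from the paper.

```python
def steady_state_riccati(a, c, q, r, tol=1e-12, max_iter=100000):
    """Iterate the scalar discrete-time Riccati recursion
        P <- a*P*a + q - (a*P*c)**2 / (c*P*c + r)
    to its fixed point, the steady-state one-step prediction error
    covariance of the Kalman filter for
        x[t+1] = a*x[t] + w[t]  (var(w) = q)
        y[t]   = c*x[t] + v[t]  (var(v) = r).
    """
    p = q
    for _ in range(max_iter):
        p_next = a * p * a + q - (a * p * c) ** 2 / (c * p * c + r)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# Illustrative parameters: a = c = q = r = 1 gives the fixed point
# P^2 = P + 1, i.e. the golden ratio (about 1.618).
print(steady_state_riccati(1.0, 1.0, 1.0, 1.0))
```

Per the quoted result, once this steady-state covariance is known, the maximal information rate for stationary sources follows from it directly.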