2002
DOI: 10.1109/tit.2002.800489

Universal linear least squares prediction: upper and lower bounds

Abstract: We consider the problem of sequential linear prediction of real-valued sequences under the square-error loss function. For this problem, a prediction algorithm has been demonstrated [1]-[3] whose accumulated squared prediction error, for every bounded sequence, is asymptotically as small as the best fixed linear predictor for that sequence, taken from the class of all linear predictors of a given order. The redundancy, or excess prediction error above that of the best predictor for that sequence, is upper-boun…
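To make the setting in the abstract concrete, the sketch below implements a generic ridge-regularized sequential least-squares predictor of a given order in Python. It is a minimal illustration under assumed names and parameters (the function name sequential_linear_predictor, the regularization parameter delta, and the synthetic test signal are all assumptions), not the specific algorithm, or its modified variant, analyzed in the paper.

import numpy as np

# Minimal sketch (assumed, not the paper's exact algorithm): an order-p
# sequential linear predictor that refits a ridge-regularized least-squares
# solution to all samples seen so far before predicting each new sample.
def sequential_linear_predictor(x, order=2, delta=1.0):
    x = np.asarray(x, dtype=float)
    p = order
    A = delta * np.eye(p)      # regularized autocorrelation of past regressors
    b = np.zeros(p)            # cross-correlation with the observed samples
    preds = np.zeros(len(x))
    for t in range(len(x)):
        # regression vector: the p most recent samples, zero-padded at the start
        u = np.array([x[t - k - 1] if t - k - 1 >= 0 else 0.0 for k in range(p)])
        w = np.linalg.solve(A, b)   # current least-squares coefficients
        preds[t] = w @ u            # predict x[t] before it is revealed
        A += np.outer(u, u)         # then update the sufficient statistics
        b += x[t] * u
    return preds

# Illustrative use on a synthetic bounded sequence (assumed test signal):
rng = np.random.default_rng(0)
x = np.cos(0.3 * np.arange(200)) + 0.1 * rng.standard_normal(200)
preds = sequential_linear_predictor(x, order=2)
print("accumulated squared prediction error:", np.sum((x - preds) ** 2))

The accumulated squared error of such a sequential predictor is the quantity that the paper compares against the error of the best fixed linear predictor chosen in hindsight.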

Cited by 38 publications (42 citation statements)
References 13 publications
“…in the upper bound cannot be improved [52]. It is also shown in [52] that one can reach the optimal upper bound (with exact scaling terms) by using a slightly modified version of (2.1)…”
Section: Related Work (mentioning)
confidence: 99%
“…Note that the extension (2.3) of (2.1) is a forward algorithm (Section 5 of [53]) and one can show that, in the scalar case, the predictions of (2.3) are always bounded (which is not the case for (2.1)) [52].…”
Section: Related Work (mentioning)
confidence: 99%
“…As an example, in [2] we investigated linear regression of real-valued data under the square error loss. We presented a regression algorithm whose accumulated error is asymptotically as small as the best fixed linear regressor for that sequence, taken from the class of all linear regressors of a given order.…”
Section: Introduction (mentioning)
confidence: 99%
“…Unlike [2], [4], here we try to exploit the time varying nature of the best choice of algorithm for any given realization, since the choice of best algorithm from a class of static algorithms can change over time. Nevertheless, instead of trying to find the best partition (possible best switching points) or best number of transitions, our objective is simply to achieve the performance of the best partition directly.…”
Section: Introduction (mentioning)
confidence: 99%