2008
DOI: 10.1109/tsp.2007.901161

Universal Switching Linear Least Squares Prediction

Abstract: We consider sequential regression of individual sequences under the square error loss. Using a competitive algorithm framework, we construct a sequential algorithm that can achieve the performance of the best piecewise (in time) linear regression algorithm tuned to the underlying individual sequence. The sequential algorithm we construct does not need the data length, the number of piecewise linear regions, or the locations of the transition times; nevertheless, it can asymptotically achieve the performance of…
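The paper's own construction (a mixture over all possible segmentations) is not reproduced here, but the flavor of the competitive framework can be sketched: several sequential linear least-squares (RLS) predictors with different forgetting factors serve as experts, and an exponentially weighted mixture tracks whichever expert is currently best on the individual sequence. The Python sketch below is illustrative only; every function name, parameter value, and the toy switching data are assumptions, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch only (not the paper's algorithm): RLS experts with
# different forgetting factors, combined by exponential weighting on
# accumulated square loss. A fast-forgetting expert adapts quickly after
# a regime switch; the mixture stays close to the currently best expert.

def rls_predictions(x, order=2, lam=0.99, delta=1.0):
    """One-step-ahead recursive least-squares predictions of x."""
    n = len(x)
    w = np.zeros(order)            # regression weights
    P = np.eye(order) / delta      # inverse weighted correlation matrix
    preds = np.zeros(n)
    for t in range(order, n):
        u = x[t - order:t][::-1]   # regressor: most recent sample first
        preds[t] = w @ u           # predict before observing x[t]
        k = P @ u / (lam + u @ P @ u)          # gain vector
        w = w + k * (x[t] - preds[t])          # error-driven weight update
        P = (P - np.outer(k, u @ P)) / lam     # inverse-correlation update
    return preds

def exp_weighted_mixture(x, expert_preds, eta=0.5):
    """Combine expert predictions by exponential weighting on square loss."""
    n, m = len(x), len(expert_preds)
    losses = np.zeros(m)           # cumulative square losses per expert
    mixed = np.zeros(n)
    for t in range(n):
        wts = np.exp(-eta * (losses - losses.min()))   # stabilized weights
        wts /= wts.sum()
        mixed[t] = sum(wts[i] * expert_preds[i][t] for i in range(m))
        losses += np.array([(x[t] - p[t]) ** 2 for p in expert_preds])
    return mixed

# Toy piecewise-stationary data: an AR(1) coefficient that switches sign.
rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(1, 600):
    a = 0.9 if t < 300 else -0.8
    x[t] = a * x[t - 1] + 0.1 * rng.standard_normal()

experts = [rls_predictions(x, lam=l) for l in (1.0, 0.99, 0.9)]
mixed = exp_weighted_mixture(x, experts)
print("mixture MSE:", np.mean((x - mixed) ** 2))
```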

Cited by 33 publications (39 citation statements)
References 38 publications (21 reference statements)
“…Hence, as a corollary to the theorem, taking the expectation of both sides of (14) with respect to any distribution on yields the following: Corollary: (15) Equation (14) is true for all , and given for any , , , i.e., (16) since (14) is true for the minimizing and equalizer vectors. Taking the expectation of both sides of (16) and minimizing with respect to and , , yields the corollary.…”
Section: MSE Performance of the Context Tree Equalizer
confidence: 97%
“…Context trees and context tree weighting are extensively used in data compression [15], coding, and data prediction [16]–[18]. In the context of source coding and universal probability assignment, the context tree weighting method is mainly used to calculate a weighted mixture of probabilities generated by the piecewise Markov models represented on the tree [15].…”
confidence: 99%
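The weighted mixture of piecewise Markov models that the quoted passage attributes to context tree weighting [15] can be sketched for a binary sequence: each tree node mixes a Krichevsky-Trofimov (KT) estimate for its own context with the product of its children's mixture probabilities. A minimal sketch, assuming binary data; the CTWNode class, its truncation of short initial contexts, and the toy sequence are all illustrative.

```python
import numpy as np

# Minimal context-tree-weighting sketch for a binary sequence.
# Each node keeps a KT estimate (log_pe) and a weighted mixture (log_pw):
#     P_w = 1/2 * P_e + 1/2 * prod over children of P_w(child).

class CTWNode:
    def __init__(self, depth):
        self.depth = depth
        self.counts = [0, 0]       # zeros / ones seen at this context
        self.log_pe = 0.0          # log KT estimate
        self.log_pw = 0.0          # log weighted mixture probability
        self.children = {}         # child contexts, created lazily

    def update(self, bit, context):
        a, b = self.counts
        # Sequential KT update: P(bit) = (count(bit) + 1/2) / (a + b + 1).
        self.log_pe += np.log((self.counts[bit] + 0.5) / (a + b + 1.0))
        self.counts[bit] += 1
        if self.depth == 0 or not context:
            self.log_pw = self.log_pe      # leaf: no deeper models to mix
            return
        child = self.children.setdefault(context[0], CTWNode(self.depth - 1))
        child.update(bit, context[1:])
        log_kids = sum(c.log_pw for c in self.children.values())
        # Mix in the log domain (log-sum-exp for numerical stability).
        m = max(self.log_pe, log_kids)
        self.log_pw = m + np.log(0.5 * np.exp(self.log_pe - m)
                                 + 0.5 * np.exp(log_kids - m))

root = CTWNode(depth=3)
bits = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
for t, bit in enumerate(bits):
    ctx = bits[max(0, t - 3):t][::-1]      # most recent bit first
    root.update(bit, ctx)
print("CTW log-probability of the sequence:", root.log_pw)
```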
“…For presentation purposes, we assume that d_t ∈ [−1, 1]; however, our derivations hold for any bounded but arbitrary desired data sequence. In our framework, we make no statistical assumptions on the input feature vectors or on the desired data, so our results are guaranteed to hold in an individual sequence manner [48].…”
Section: Related Work
confidence: 99%
“…We derive wire delay and slew models by using least-squares regression (LSQR) [18] to fit values from the ST. We generate training samples from benchmark netlists that have no slack violations and contain heterogeneous mixes of cell sizes and types. From the ST, we obtain delay and slew at every pin in the netlist and fit our models to these data. We use 50% of the data points for training, derive models using LSQR, test the models on all data points, and compute the estimation errors.…”
Section: A. Experiment 1: Accuracy of Learning-Based Interconnect Models
confidence: 99%