2020
DOI: 10.48550/arxiv.2004.00891
Preprint
Kernel Autocovariance Operators of Stationary Processes: Estimation and Convergence

Abstract: We consider autocovariance operators of a stationary stochastic process on a Polish space that is embedded into a reproducing kernel Hilbert space. We investigate how empirical estimates of these operators converge along realizations of the process under various conditions. In particular, we examine ergodic and strongly mixing processes and prove several asymptotic results as well as finite sample error bounds with a detailed analysis for the Gaussian kernel. We provide applications of our theory in terms of c…
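The abstract concerns empirical estimates of kernel autocovariance operators along a trajectory. As a minimal illustrative sketch (not the paper's implementation), the Hilbert–Schmidt norm of the empirical lag-τ autocovariance operator under a Gaussian kernel can be computed purely from Gram matrices; the AR(1) toy process, bandwidth, and function names below are assumptions for illustration only:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # k(x, y) = exp(-|x - y|^2 / (2 sigma^2)); bounded by 1
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def empirical_autocov_hs_norm(x, lag, sigma=1.0):
    """Hilbert-Schmidt norm of the empirical lag-`lag` autocovariance
    operator C_lag = (1/n) sum_t phi(x_{t+lag}) (x) phi(x_t), evaluated
    entirely through Gram matrices (kernel trick):
        ||C_lag||_HS^2 = (1/n^2) sum_{i,j} k(x_{i+lag}, x_{j+lag}) k(x_i, x_j).
    """
    n = len(x) - lag
    K_past = gaussian_kernel(x[:n], x[:n], sigma)
    K_future = gaussian_kernel(x[lag:lag + n], x[lag:lag + n], sigma)
    return np.sqrt(np.sum(K_future * K_past)) / n

rng = np.random.default_rng(0)
# toy stationary AR(1) process: x_{t+1} = 0.8 x_t + noise
x = np.zeros(2000)
for t in range(1999):
    x[t + 1] = 0.8 * x[t] + rng.normal(scale=0.5)

for lag in (0, 1, 5):
    print(lag, empirical_autocov_hs_norm(x, lag))
```

Since the Gaussian kernel takes values in (0, 1], the estimate is always between 1/√n and 1, which gives a quick sanity check on the computation.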

Cited by 3 publications (9 citation statements)
References 63 publications (100 reference statements)
“…It is important to note that although Assumptions 1 and 2 are more restrictive than the measure-theoretic frameworks of [15,16], they do not impose conditions on the conditional distribution P(Y|X) or the CME, but instead prescribe the relationship between the kernel and the measure ν = P_X. A crucial feature of our analysis involves establishing explicit learning rates that are adaptive to this relationship between kernel complexity (Assumptions 1 and 2) and the CME continuity (Assumption 3).…”
Section: Discussion/Comparison of Assumptions
confidence: 99%
“…To demonstrate the generality of Assumption 3, we consider an example involving Markov transition operators, which have recently been shown to have an elegant connection to CMEs (see e.g. [14,15]). Although this example is presented to provide a concrete comparison with existing applications of conditional embeddings in the literature [5,11], it should be noted that the argument applies to any setting where the input and output variables range over the same measure space (i.e.…”
Section: Example: Markov Operators
confidence: 99%
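This statement links Markov transition operators to conditional mean embeddings (CMEs). A minimal sketch of the standard kernel-ridge CME estimator on a toy Markov chain — the chain, kernel bandwidth, regularization level, and variable names are all assumptions for illustration, not taken from the cited papers:

```python
import numpy as np

def gauss(a, b, sigma=0.5):
    # Gaussian kernel Gram matrix between two 1-D sample arrays
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * sigma**2))

rng = np.random.default_rng(1)
# toy Markov chain: X_{t+1} = 0.7 X_t + noise; training pairs (x_t, x_{t+1})
x = np.zeros(1500)
for t in range(1499):
    x[t + 1] = 0.7 * x[t] + 0.3 * rng.normal()
X, Y = x[:-1], x[1:]

n, lam = len(X), 1e-3
K = gauss(X, X)
# CME weights: beta(x) = (K + n*lam*I)^{-1} k_X(x); then for a function g,
# E[g(X_{t+1}) | X_t = x] is estimated by g(Y) @ beta(x)
beta = np.linalg.solve(K + n * lam * np.eye(n), gauss(X, np.array([0.5])))
pred = Y @ beta[:, 0]  # estimate of E[X_{t+1} | X_t = 0.5]
print(pred)            # the chain's true conditional mean at 0.5 is 0.35
```

Here the input and output variables range over the same space, which is exactly the setting the quoted passage refers to.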
“…4 demonstrates the performance for x_ref = 0 with models that were obtained using only m = 25 training samples for K_0 and K_1, respectively, where almost perfect agreement with the solution using the full system is achieved. In contrast, the eDMDc approximation fails for System (13), even when initializing with the optimal solution from the full system.…”
Section: Application to the Duffing Equation (ODE)
confidence: 99%
“…Error bounds for Koopman eigenvalues in terms of the finite-data estimation error were derived in [12], but the estimation error itself was not quantified. In [13], concentration inequalities were applied to bound the estimation error for the covariance and cross-covariance operators involved in Koopman estimation. In conclusion, to the best of our knowledge, [14] is the only work providing a rigorous bound for the approximation error of a dynamical system governed by a nonlinear ODE.…”
Section: Introduction
confidence: 99%
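The last statement describes Koopman estimation built from covariance and cross-covariance operators. As a hedged sketch of that general construction (plain EDMD with a monomial dictionary on a toy linear system — not the specific estimators analyzed in [12]–[14]):

```python
import numpy as np

rng = np.random.default_rng(2)
# toy linear system x_{t+1} = 0.9 x_t + noise
x = np.zeros(3000)
for t in range(2999):
    x[t + 1] = 0.9 * x[t] + 0.1 * rng.normal()

def psi(v):
    # monomial dictionary (1, x, x^2)
    return np.column_stack([np.ones_like(v), v, v**2])

PX, PY = psi(x[:-1]), psi(x[1:])
n = len(PX)
C0 = PX.T @ PX / n           # empirical covariance of the dictionary features
C1 = PX.T @ PY / n           # empirical cross-covariance at lag 1
K = np.linalg.solve(C0, C1)  # EDMD estimate of the Koopman matrix
eigvals = np.sort(np.linalg.eigvals(K).real)[::-1]
# for this system the Koopman eigenvalues on span{1, x, x^2} are 1, 0.9, 0.81
print(eigvals)
```

The quality of the estimated eigenvalues is governed by how well C0 and C1 are estimated from data, which is precisely where the concentration inequalities mentioned in the quoted passage enter.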