2017
DOI: 10.1201/9781315136288

A Course in Large Sample Theory

Abstract: Although learning from data is effective and has achieved significant milestones, it has many challenges and limitations. Learning from data starts from observations and then proceeds to broader generalizations. This framework is controversial in science, yet it has achieved remarkable engineering successes. This paper reflects on some epistemological issues and some of the limitations of the knowledge discovered in data. The document discusses the common perception that getting more data is the key to achievi…


Cited by 185 publications (59 citation statements)
References 48 publications
“…according to Lemmas 3, 4 and √(n − N_s) = √((n − N_s)/n) · √n. Since p̂_s converges in probability to p_s(ϑ_0) as n increases, (1 − p_s(ϑ_0))/(1 − p̂_s) converges in probability to 1 as n increases (see Theorem 6'(a) in [8]) and so by Slutsky's theorem (see [8, p. 40…”
Section: Theorem (mentioning)
confidence: 89%
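The excerpt above relies on a Slutsky-type step: a consistent plug-in ratio converges in probability to 1, so multiplying an asymptotically normal quantity by it does not change the limit. Below is a minimal simulation sketch of that step (not from the cited paper; the Bernoulli setup with p = 0.3 and the sample sizes are assumptions chosen only for illustration).

```python
# Minimal sketch of the Slutsky-type step quoted above (illustrative assumptions:
# Bernoulli data with success probability p = 0.3, standing in for p_s(theta_0)).
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                       # plays the role of p_s(theta_0)
reps, n = 2000, 5000          # Monte Carlo replications and sample size

samples = rng.binomial(1, p, size=(reps, n))
p_hat = samples.mean(axis=1)              # consistent estimator of p
ratio = (1 - p) / (1 - p_hat)             # converges in probability to 1
z = np.sqrt(n) * (p_hat - p) * ratio      # Slutsky: same N(0, p(1 - p)) limit

print("mean of ratio:", ratio.mean())                      # close to 1
print("var of z:", z.var(), "vs p(1 - p) =", p * (1 - p))  # close to 0.21
```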
“…The third equality follows from multiplying out. By the Corollary to Theorem 13 in Ferguson (1996), the asymptotic distribution of √N · (m(X) − η) is N(0, 1/(2f(η))²). For F ∈ F_s this has the following three implications: First, Lemma A.1(c) in Appendix A), I can apply Theorem 7 in Ferguson (1996) to obtain that the asymptotic distribution of √N · h(m(X)) is…”
Section: Have (mentioning)
confidence: 96%
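The result quoted above is the asymptotic normality of the sample median: by the Corollary to Theorem 13 in Ferguson (1996), √N · (m(X) − η) is asymptotically N(0, 1/(2f(η))²), where η is the population median and f its density at η. The sketch below (not from the cited paper) checks this by simulation for standard normal data and adds a delta-method-style step in the spirit of the quoted use of Theorem 7, with h(x) = eˣ chosen arbitrarily for illustration.

```python
# Minimal simulation sketch of the sample-median limit quoted above
# (assumptions: standard normal data, so eta = 0 and f(eta) = 1/sqrt(2*pi)).
import numpy as np

rng = np.random.default_rng(0)
reps, N = 2000, 5000

x = rng.standard_normal((reps, N))
m = np.median(x, axis=1)                  # sample median in each replication
stat = np.sqrt(N) * m                     # sqrt(N) * (m(X) - eta) with eta = 0

f_eta = 1.0 / np.sqrt(2 * np.pi)          # standard normal density at the median
print("simulated variance:", stat.var())
print("1/(2 f(eta))^2 =", 1.0 / (2 * f_eta) ** 2)   # pi/2, about 1.571

# Delta-method-style step for a smooth h (here h(x) = exp(x), so h'(0) = 1):
# sqrt(N) * (h(m(X)) - h(eta)) should have asymptotic variance h'(eta)^2 / (2 f(eta))^2.
h_stat = np.sqrt(N) * (np.exp(m) - 1.0)
print("variance of sqrt(N) * (h(m) - h(eta)):", h_stat.var())
```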
“…This is a direct consequence from the asymptotic distribution of √N · m(X) being normal (see, e.g., the Corollary to Theorem 13 in Ferguson, 1996). Consider γ ∈ (γ*, 1) and N > N_γ.…”
mentioning
confidence: 88%
“…Notice that each s_jk is a 2nd-order sample moment. Standard large sample theory shows that the asymptotic variance of √n · s_jk is given by γ_jk = Var(y_j0 y_k0) (e.g., Ferguson 1996), where y_j0 = y_j − μ_j and y_k0 = y_k − μ_k. Because the exact expression for Var(√n · s_jk) is rather complicated and its difference from γ_jk is in the order of 1/N, we treat γ_jk as the variance of √n · s_jk for simplicity.…”
Section: Relative Errors in s_jk (mentioning)
confidence: 99%
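The quoted passage is the standard moment calculation: the asymptotic variance of √n · s_jk is γ_jk = Var(y_j0 y_k0) with y_j0 = y_j − μ_j. The sketch below (not from the cited paper; the bivariate normal population and its covariance matrix are assumptions for illustration) compares the Monte Carlo variance of √n · s_jk with a direct estimate of γ_jk; for this population γ_jk = σ_jj σ_kk + σ_jk² = 2.25.

```python
# Minimal simulation sketch of the quoted moment result (illustrative assumptions:
# bivariate normal data with known mean vector and covariance matrix).
import numpy as np

rng = np.random.default_rng(0)
reps, n = 2000, 1000
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

y = rng.multivariate_normal(mu, Sigma, size=(reps, n))   # shape (reps, n, 2)
y0 = y - mu                                              # centered at the true means

# second-order sample moment s_jk for the pair (j, k) = (1, 2), one per replication
s_jk = (y0[:, :, 0] * y0[:, :, 1]).mean(axis=1)
print("var of sqrt(n) * s_jk:", (np.sqrt(n) * s_jk).var())

gamma_jk = (y0[:, :, 0] * y0[:, :, 1]).var()             # estimate of Var(y_j0 y_k0)
print("gamma_jk =", gamma_jk)                            # about 1 * 2 + 0.5^2 = 2.25
```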