2001
DOI: 10.1111/0002-9092.00262
The Model Specification Problem from a Probabilistic Reduction Perspective

Cited by 26 publications (20 citation statements); references 7 publications.
“…See Suppes 1966. 5 Spanos and McGuirk have given a detailed analysis of "mis-specification testing," which is a set of tests designed to detect which of a family of formal models is appropriate to the data under analysis (Spanos & McGuirk, 2001). 6 Deborah Mayo's work on statistical inference takes an approach focused on testing, but focuses on severe tests as criteria for rigorous learning through experimentation (Mayo, 2018).…”
Section: Theory Testing and Confirmation (mentioning; confidence: 99%)
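To make "mis-specification testing" concrete, here is a minimal sketch of one such check: a lag-1 residual autocorrelation test. This is a hypothetical illustration in plain NumPy, not the specific test battery proposed by Spanos and McGuirk; it uses the standard result that under iid errors the lag-1 sample autocorrelation r1 is approximately N(0, 1/n).

```python
import numpy as np

def lag1_autocorr_test(residuals, alpha_crit=1.96):
    """Flag a departure from the iid-errors assumption.

    Under iid errors, the lag-1 sample autocorrelation r1 is
    approximately N(0, 1/n), so sqrt(n)*|r1| > 1.96 rejects the
    iid assumption at roughly the 5% level.
    """
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    n = len(r)
    r1 = np.dot(r[:-1], r[1:]) / np.dot(r, r)  # lag-1 autocorrelation
    z = np.sqrt(n) * r1                        # approx N(0,1) under iid
    return r1, z, abs(z) > alpha_crit

# Illustrative use on iid data: the test should rarely flag anything.
rng = np.random.default_rng(1)
r1, z, flagged = lag1_autocorr_test(rng.standard_normal(200))
print(f"r1 = {r1:.3f}, z = {z:.2f}, reject iid? {flagged}")
```

Run on residuals from a fitted model, a rejection signals that the postulated probabilistic structure (here, no autocorrelation) is inappropriate for the data, before any inference results are trusted.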
“…In particular, the nominal error probabilities are likely to be very different from the actual ones, rendering the inference results unreliable. Large discrepancies can easily arise in practice even in cases of “minor” departures from the model assumptions; see Spanos and McGuirk (2001).…”
Section: Statistical Adequacy and Its Role in Inference (mentioning; confidence: 99%)
“…For instance, the most widely invoked robustness claim relates to dealing with departures from the homoskedasticity and no‐autocorrelation assumptions (see Table ), by using OLS estimators in conjunction with robust standard errors (SEs), known as heteroskedasticity/autocorrelation consistent (HAC) SEs; see Hansen () and Greene (). It turns out, however, that the use of such robust SEs often gives rise to unreliable inferences because the relevant actual error (type I and type II) probabilities are very different from the nominal ones for a given sample size n, and the discrepancy does not improve as n increases; see Spanos and McGuirk (2001).…”
Section: Traditional Perspective on Misspecification (mentioning; confidence: 99%)
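The divergence between nominal and actual error probabilities described above can be demonstrated with a small Monte Carlo sketch. The setup and parameter values here are hypothetical: a t-test of a true null about the mean at nominal 5% significance, computed with a standard error that assumes iid errors, when the errors actually follow an AR(1) process.

```python
import numpy as np

def rejection_rate(n=100, rho=0.9, n_sims=2000, seed=0):
    """Monte Carlo actual size of a nominal-5% t-test of H0: mu = 0
    when the data follow an AR(1) process but the SE assumes iid errors."""
    rng = np.random.default_rng(seed)
    crit = 1.96  # approximate two-sided 5% normal critical value
    burn = 50    # burn-in so the AR(1) process is near-stationary
    rejections = 0
    for _ in range(n_sims):
        e = rng.standard_normal(n + burn)
        u = np.empty(n + burn)
        u[0] = e[0]
        for t in range(1, n + burn):
            u[t] = rho * u[t - 1] + e[t]   # AR(1) errors
        y = u[burn:]                       # H0 is true: population mean is 0
        t_stat = np.sqrt(n) * y.mean() / y.std(ddof=1)  # iid-based SE
        rejections += abs(t_stat) > crit
    return rejections / n_sims

print(f"nominal size: 0.05, actual size (rho=0.9): {rejection_rate():.2f}")
print(f"actual size under correct iid spec (rho=0): "
      f"{rejection_rate(rho=0.0):.2f}")
```

With strong positive autocorrelation the actual type I error comes out far above the nominal 5%, while under the correct iid specification it sits near 5%, which is the phenomenon the quoted passage attributes to Spanos and McGuirk.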
“…This is because when any of the statistical assumptions are invalid for data Z0, inferences based on the estimated model are often unreliable because the nominal and actual error probabilities are likely to be different. The surest way to lead an inference astray is to apply a .05 significance test when the actual type I error is closer to 1.0; see Spanos and McGuirk (2001).…”
Section: The European CVAR Perspective (mentioning; confidence: 99%)
“…It is important to stress that respecification in this context does not refer to 'error-fixing' widely used in traditional textbook econometrics, but postulating a more appropriate probabilistic structure for {Z_t, t ∈ N} that would render data Z0 a typical realization thereof. This is because the traditional 'error-fixing' strategies, such as error-autocorrelation correction and heteroskedasticity/autocorrelation consistent standard errors (see Kennedy, 2008), often render statistical unreliability worse, not better; see Spanos and McGuirk (2001), Spanos (2006a).…”
Section: Can the Two Perspectives Be Reconciled? (mentioning; confidence: 99%)