2001
DOI: 10.1080/13615930120086078
Evaluating Goodness-of-Fit Measures for Synthetic Microdata

Cited by 89 publications (70 citation statements)
References 19 publications
“…Other authors are also seeking to address this gap in the literature, and there have been some useful recent papers in this regard. Voas and Williamson (2001) provide a useful synopsis of the 'ideal' parameters that we require validation to cover and, in particular, an intuitive consideration of what levels of error or variance are acceptable in different circumstances. Scarborough et al (2009), Smith et al (2009) and Birkin and Clarke (2010) describe their processes for validating particular spatial microsimulation models, as does Anderson (2007), who additionally emphasises the need for further validation work, especially the use of confidence intervals to assess goodness of fit.…”
Section: Discussion (mentioning)
confidence: 99%
“…We also tested these variables in a non-linear way, but this did not improve the model. An often-used measure to evaluate the outcomes of simulation models is the standardized absolute error (SAE), as described by Voas and Williamson (2001). The measure sums the discrepancies (TAE = total absolute error) and divides by the number of expected farms:…”
Section: Simulating the Farmers (mentioning)
confidence: 99%
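The formula itself is cut off in the excerpt above. A minimal reconstruction of the two measures as described there, assuming the notation $T_{ij}$ for the expected count in cell $(i,j)$ and $\hat{T}_{ij}$ for the simulated count (these symbols are illustrative, not taken from the cited text):

\[
\mathrm{TAE} = \sum_{i}\sum_{j}\left|\hat{T}_{ij} - T_{ij}\right|,
\qquad
\mathrm{SAE} = \frac{\mathrm{TAE}}{\sum_{i}\sum_{j} T_{ij}}
\]

On this reading, SAE expresses the total absolute discrepancy as a proportion of the expected total (here, the number of expected farms), so smaller values indicate a closer fit.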
“…Table 7 in the appendix lists error measurements for testing the performance of synthetic population techniques. Voas and Williamson (2001) proposed criteria for choosing among such error measurements.…”
Section: Validation Step (mentioning)
confidence: 99%