This paper presents a technique to extrapolate the anticipated accuracy of a prediction of land-use and land-cover change (LUCC) to any point in the future. The method calibrates a LUCC model with information from the past in order to simulate a map of the present, so that the simulation can be validated objectively against empirical data. It then uses that observed measure of predictive accuracy to anticipate how accurately the model will predict a future landscape. The technique assumes that the model's accuracy decays to randomness as the model predicts farther into the future, and it estimates the rate of that decay from prior model performance. Results are presented graphically as the percentage of pixels classified correctly, so that nonexperts can interpret the accuracy visually. The percentage correct is budgeted among three components: agreement due to chance, agreement due to the predicted quantity of each land category, and agreement due to the predicted location of each land category. The percentage error is budgeted between two components: disagreement due to the predicted location of each land category and disagreement due to the predicted quantity of each land category. Model users can therefore see the sources of the model's accuracy and error. The entire analysis is computable at multiple resolutions, so users can see how sensitive the results are to changes in scale. We illustrate the method with an application of the land-use change model Geomod to Central Massachusetts, where the predictive accuracy of the model decays to 90% over fourteen years and to near-complete randomness over 200 years.
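The five-component budget described above can be sketched numerically. The following is a minimal illustration in Python, assuming conventional definitions (agreement due to chance as 1/J for J categories, expected agreement under random location for the quantity component, and the minimum category overlap as the maximum agreement the predicted quantities allow); the function name and exact formulas are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def accuracy_budget(reference, predicted, categories):
    """Split the proportion correct into agreement components (chance,
    quantity, location) and the proportion in error into disagreement
    components (location, quantity). Formulas are a plausible
    reconstruction consistent with the abstract, not the paper's exact ones."""
    ref = np.asarray(reference).ravel()
    pred = np.asarray(predicted).ravel()
    J = len(categories)
    correct = float(np.mean(ref == pred))                   # observed proportion correct
    p = np.array([np.mean(ref == c) for c in categories])   # reference category proportions
    q = np.array([np.mean(pred == c) for c in categories])  # predicted category proportions
    overlap = float(np.minimum(p, q).sum())  # best agreement attainable given predicted quantities
    chance = 1.0 / J                         # random location, uniform quantities
    quantity_agree = float(p @ q) - chance   # gain from knowing the predicted quantities
    location_agree = correct - float(p @ q)  # gain from the predicted locations
    location_err = overlap - correct         # error fixable by relocating categories
    quantity_err = 1.0 - overlap             # error fixable only by changing quantities
    return dict(chance=chance, quantity_agree=quantity_agree,
                location_agree=location_agree,
                location_err=location_err, quantity_err=quantity_err)
```

By construction the five components sum to 1, so the sources of accuracy and error can be read off directly, as the abstract describes for the graphical budgets.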
Background: Digital health interventions (DHIs) can improve the provision of health care services. To fully account for their effects in economic evaluations, traditional methods based on measuring health-related quality of life may not be appropriate, as nonhealth and process outcomes are likely to be relevant too.
Purpose: This systematic review identifies, assesses, and synthesizes the arguments on the analytical frameworks and outcome measures used in economic evaluations of DHIs. The results informed recommendations for future economic evaluations.
Data Sources: We ran searches on multiple databases, complemented by gray literature and backward and forward citation searches.
Study Selection: We included records containing theoretical and empirical arguments associated with the use of analytical frameworks and outcome measures for economic evaluations of DHIs. Following title/abstract and full-text screening, our final analysis included 15 studies.
Data Extraction: The arguments we extracted related to analytical frameworks (14 studies), generic outcome measures (5 studies), techniques used to elicit utility values (3 studies), and disease-specific outcome measures and instruments to collect health-state data (2 studies each).
Data Synthesis: Rather than assessing the quality of the studies, we critically assessed and synthesized the extracted arguments. Building on this synthesis, we developed a 3-stage set of recommendations in which we encourage the use of impact matrices and analyses of equity impacts to complement traditional economic evaluation methods.
Limitations: Our review and recommendations explored, but did not fully cover, other potentially important aspects of economic evaluations that were outside our scope.
Conclusions: This is the first systematic review to summarize the arguments on how the effects of DHIs could be measured in economic evaluations. Our recommendations will help design future economic evaluations.
Highlights
- Using traditional outcome measures based on health-related quality of life (such as the quality-adjusted life-year) may not be appropriate in economic evaluations of digital health interventions, which are likely to trigger nonhealth and process outcomes.
- This is the first systematic review to investigate how the effects of digital health interventions could be measured in economic evaluations.
- We extracted and synthesized different arguments from the literature, outlining advantages and disadvantages associated with different methods used to measure the effects of digital health interventions.
- We propose a methodological set of recommendations in which 1) we suggest that researchers consider the use of impact matrices and cost-consequence analysis, 2) we discuss the suitability of analytical frameworks and outcome measures available in economic evaluations, and 3) we highlight the need for analyses of equity impacts.