Although multiregional input-output (MRIO) databases use data from national statistical offices, the reconciliation of various data sources results in significantly altered country data. This makes it problematic to use MRIO-based footprints for national policy-making. This paper develops a potential solution, using the Netherlands as a case study. The method ensures that the footprint is derived from an MRIO dataset (in our case, the World Input-Output Database, WIOD) that is made consistent with Dutch national accounts data. Furthermore, the use of microdata allows us to separate re-exports at the company level. The adjustment results in a foreign footprint in 2009 that is 22% lower than the original WIOD estimate, and in a significantly altered country allocation. We demonstrate that large differences with Dutch national statistics already arise in the data preparation phase, due to the treatment of re-exports and margins, which may help explain the variation in footprint estimates across MRIO databases.
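The footprint calculation underlying an MRIO database can be sketched with the standard Leontief demand-pull model: emissions embodied in a country's final demand are f(I − A)⁻¹y. The sketch below uses a tiny synthetic 2-region, 2-sector table with made-up coefficients; it is not WIOD data, only an illustration of the mechanics.

```python
import numpy as np

# Toy 2-region x 2-sector MRIO system (synthetic numbers, not WIOD data).
# A: technical coefficients, y: final demand of the country of interest,
# f: emission intensities (emissions per unit of gross output).
A = np.array([
    [0.10, 0.05, 0.02, 0.01],
    [0.04, 0.12, 0.03, 0.02],
    [0.02, 0.01, 0.15, 0.06],
    [0.01, 0.03, 0.05, 0.11],
])
y = np.array([100.0, 80.0, 20.0, 10.0])   # final demand vector
f = np.array([0.5, 0.3, 0.8, 0.4])        # emission intensities

# Leontief inverse L = (I - A)^-1: total (direct + indirect) output
# requirements per unit of final demand.
L = np.linalg.inv(np.eye(4) - A)

# Footprint: emissions embodied in final demand, by supplying sector/region.
footprint_by_sector = f * (L @ y)
total_footprint = footprint_by_sector.sum()
print(total_footprint)
```

Because the country allocation of the footprint is read off `footprint_by_sector`, any reconciliation that alters A or y (e.g. the treatment of re-exports) shifts both the total and its allocation across countries.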
Abstract
Purpose - The purpose of this paper is to study how to reduce the variation of environmental footprint estimates based on multiregional input-output (MRIO) databases. Footprint estimates from different MRIO databases sometimes vary significantly; as a result, conclusions about the absolute level or trend of a footprint may be inconsistent. These variations are attributable to three phases of the footprint calculation: differences in data preparation, MRIO database construction and footprint calculation.
Design/methodology/approach - This paper provides a literature overview and a breakdown of the computation of footprints based on MRIO databases. Based on these insights, strategies that lead to lower variation in footprint estimates are formulated.
Findings - Convergence of footprint estimates requires enhanced cooperation among academics, among statisticians, and between academics and statisticians.
Originality/value - Reducing the variation in footprint estimates is a major challenge. This paper contributes to this convergence in three ways. First, it provides the first overview of footprint work at statistical offices, government agencies and international organisations: these front-runners may play a role in cooperating with academics (and other statistical offices) to resolve some of the issues. Second, a detailed analysis of the sources of the variation in estimates is provided. These problems are illustrated using examples from the various MRIO databases and the data of Statistics Netherlands. Third, strategies are discussed that might help reduce the variation between footprint estimates.
A well-known problem in population size estimation using registers is that registers do not necessarily cover the whole population. This may be because a register is intended to cover only part of the population (e.g., students), because of administrative delay, or because part of the target population is not registered by default (e.g., undocumented persons). One of the methods to estimate the population size in the presence of undercount is the capture-recapture method, which combines the information of two or more samples. In the context of census estimation, registers are used instead of samples. However, the method assumes that perfect linkage between the registers can be achieved, and it is known that this assumption is often violated. In the setting of evaluating the population coverage of a census using a post-enumeration survey, a correction for linkage error was proposed. That correction was later generalized by relaxing some of the newly introduced conditions. However, the new correction method still implicitly assumed that the two registers are of equal size. We introduce a further generalization that includes both previously mentioned correction methods and at the same time deals with registers of different sizes. Specific parameter settings correspond to the different correction methods. We show that the parameters of each method can be chosen such that the resulting estimates all equal the traditional Petersen estimate (1896) that would theoretically be obtained under truly perfect linkage.
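The Petersen estimate referred to above is the classical two-source capture-recapture estimator: with source sizes n1 and n2 and m records linked in both, the population size is estimated as n1·n2/m. A minimal sketch with illustrative numbers (not taken from the paper):

```python
# Petersen (Lincoln-Petersen) estimator: two sources of sizes n1 and n2
# with m records linked in both; under perfect linkage the population
# size is estimated as N_hat = n1 * n2 / m.
def petersen(n1: int, n2: int, m: int) -> float:
    if m == 0:
        raise ValueError("no linked records; estimator undefined")
    return n1 * n2 / m

# Hypothetical numbers: a register of 900 persons, a census list of 800,
# and 720 records linked in both.
print(petersen(900, 800, 720))  # -> 1000.0

# Linkage error illustration: if 5% of true matches are missed, the
# observed number of links drops to 684 and the estimate is inflated.
print(petersen(900, 800, 684))
```

The second call shows why linkage error matters: missed links shrink the denominator and bias the estimate upwards, which is exactly what the correction methods discussed in the abstract address.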
The size of a partly observed population is often estimated with the capture-recapture model. An important assumption of this model is that the sources can be perfectly linked. This assumption is of relevance if the identification of records is not obtained by some perfect identifier (such as an id code) but by indirect identifiers (such as name and address). In that case, the perfect linkage assumption is often violated, which in general leads to biased population size estimates. Initial suggestions to solve this problem use record-linkage probabilities to correct the capture-recapture model. In this article we provide a general framework, based on the standard log-linear modelling approach, that generalises this work to include additional sources and covariates. We show that the method performs well in a simulation study.
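For the two-source case, the log-linear independence model has a closed form that makes the connection to the Petersen estimate explicit: the fitted count for the unobserved cell is m00 = n10·n01/n11, so the implied population size equals the Petersen estimate. A minimal sketch with hypothetical counts (this is the textbook two-source case, not the paper's extended framework with extra sources and covariates):

```python
# Two-source capture-recapture as a log-linear model. Observed cells:
# n11 (in both sources), n10 (source 1 only), n01 (source 2 only).
# Under the independence model, the fitted unobserved cell is
# m00 = n10 * n01 / n11, so N_hat = n11 + n10 + n01 + m00,
# which algebraically equals the Petersen estimate n1 * n2 / n11.
def loglinear_two_source(n11: int, n10: int, n01: int) -> float:
    m00 = n10 * n01 / n11
    return n11 + n10 + n01 + m00

# Hypothetical counts: 720 linked, 180 only in source 1, 80 only in source 2.
print(loglinear_two_source(720, 180, 80))  # -> 1000.0
```

With more than two sources or with covariates, the closed form disappears and the model is fitted as a Poisson log-linear regression, which is the setting the article's framework generalises.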
Short-term business statistics at Statistics Netherlands are largely based on Value Added Tax (VAT) administrations. Companies may decide to file their tax return on a monthly, quarterly, or annual basis; most file quarterly. So far, these VAT-based short-term business statistics have been published with a quarterly frequency as well. In this article we compare different methods to compile monthly figures, even though a major part of the data is observed quarterly. The methods considered to produce a monthly indicator must address two issues. The first is to combine a high- and a low-frequency series into a single high-frequency series, while both series measure the same phenomenon of the target population; the appropriate method for this purpose is usually referred to as "benchmarking". The second is a missing data problem, because the first and second month of a quarter are published before the corresponding quarterly data are available; a "nowcast" method can be used to estimate these months. The literature on mixed-frequency models provides solutions for both problems, sometimes dealing with them simultaneously. In this article we combine different benchmarking and nowcasting models and evaluate these combinations. Our evaluation distinguishes between relatively stable periods and periods during and after a crisis, because different approaches might be optimal under these two conditions. We find that during stable periods the so-called Bridge models perform slightly better than the alternatives considered. Until about fifteen months after a crisis, the models that rely more heavily on historical patterns, such as the Bridge, MIDAS and structural time series models, are outperformed by more straightforward (S)ARIMA approaches.
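The benchmarking issue described above can be illustrated with the simplest possible method, pro-rata benchmarking: scale a preliminary monthly indicator so that its quarterly sums match the quarterly VAT totals. This is only a minimal sketch with made-up numbers; the article evaluates more sophisticated benchmarking models (e.g. Denton-type methods, which preserve month-to-month movement better and avoid steps at quarter boundaries).

```python
# Pro-rata benchmarking: scale each quarter of a monthly indicator so
# that its sum equals the quarterly benchmark total.
def prorata_benchmark(monthly: list[float], quarterly_totals: list[float]) -> list[float]:
    benchmarked = []
    for q, total in enumerate(quarterly_totals):
        months = monthly[3 * q: 3 * q + 3]
        factor = total / sum(months)          # per-quarter scaling factor
        benchmarked.extend(m * factor for m in months)
    return benchmarked

# Illustrative data: a preliminary monthly indicator for two quarters
# and the corresponding quarterly VAT benchmarks.
monthly_indicator = [10.0, 11.0, 12.0, 12.0, 13.0, 14.0]
quarterly_vat = [36.0, 42.0]
print(prorata_benchmark(monthly_indicator, quarterly_vat))
```

The nowcasting issue is separate: for the two most recent months the quarterly benchmark does not yet exist, so the monthly values must first be estimated (e.g. by a Bridge, MIDAS, structural time series or (S)ARIMA model) and are benchmarked only once the quarterly VAT total arrives.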