To reduce the dimensionality of the parameter space and enhance out-of-sample forecasting performance, this research compares regularization techniques with Autometrics in time-series modeling. We focus mainly on comparing the weighted lag adaptive LASSO (WLAdaLASSO) with Autometrics, but as benchmarks we also estimate other popular regularization methods: LASSO, AdaLASSO, SCAD, and MCP. For analytical comparison, we implement a Monte Carlo simulation and assess the performance of these techniques in terms of out-of-sample root mean square error (RMSE), gauge, and potency, under varying autocorrelation coefficients and sample sizes. The simulation experiment indicates that WLAdaLASSO outperforms Autometrics and the other regularization approaches in both covariate selection and forecasting, especially when the linear dependency between predictors is strong. In contrast, the computational efficiency of Autometrics decreases under strong linear dependency between predictors. However, with a large sample and weak linear dependency between predictors, the Autometrics potency → 1 and gauge → α. By comparison, LASSO, AdaLASSO, SCAD, and MCP select more covariates and have higher RMSE than Autometrics and WLAdaLASSO. To compare the considered techniques empirically, we build a General Unrestricted Model (GUM) for covariate selection and out-of-sample forecasting of the trade balance of Pakistan, training the model on 1985–2015 observations and using 2016–2020 observations as test data for the out-of-sample forecast.
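The selection criteria above can be computed mechanically once a sparse estimator has been fit: potency is the share of relevant variables retained, gauge the share of irrelevant variables retained. A minimal sketch of this evaluation loop, using a plain LASSO from scikit-learn in place of WLAdaLASSO and an invented toy data-generating process (all names and settings here are illustrative assumptions, not the paper's design):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy DGP: first 5 of 20 candidate predictors are relevant (assumption)
n, p, k_rel = 200, 20, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k_rel] = 1.0
y = X @ beta + rng.standard_normal(n)

# Train/test split mirroring the abstract's in-sample/out-of-sample design
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

fit = Lasso(alpha=0.1).fit(X_tr, y_tr)
selected = fit.coef_ != 0

# Potency: fraction of relevant variables retained;
# gauge: fraction of irrelevant variables retained.
potency = selected[:k_rel].mean()
gauge = selected[k_rel:].mean()
rmse = mean_squared_error(y_te, fit.predict(X_te)) ** 0.5
print(potency, gauge, round(rmse, 3))
```

In a full Monte Carlo, this fit-and-score step would be repeated over many replications and the three criteria averaged.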
This work compares Autometrics with dual penalization techniques, namely the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD), under asymmetric error distributions such as the exponential, gamma, and Fréchet, with varying sample sizes and numbers of predictors. Comprehensive simulations across a wide variety of scenarios reveal that all considered methods improve as the sample size increases. Under low multicollinearity, these methods perform well in terms of potency, but in terms of gauge the shrinkage methods collapse, and a higher gauge leads to overspecified models. High multicollinearity adversely affects the performance of Autometrics, whereas the shrinkage methods remain robust in terms of potency but tend to select a massive set of irrelevant variables. Moreover, we find that expanding the data rapidly mitigates the adverse impact of high multicollinearity on Autometrics and gradually corrects the gauge of the shrinkage methods. For the empirical application, we take gold price data spanning 1981 to 2020. To compare the forecasting performance of all selected methods, we divide the data into two parts: observations over 1981–2010 are taken as training data, and those over 2011–2020 are used as testing data. All methods are trained on the training data and then assessed on the testing data. Based on root mean square error and mean absolute error, Autometrics remains the best at capturing the trend in gold prices and produces better forecasts than MCP and SCAD.
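The two loss criteria used for the holdout comparison are standard. A minimal sketch of both, applied to an invented four-point test window (the numbers are illustrative, not the gold price data):

```python
import numpy as np

# Hypothetical forecast-evaluation helpers mirroring the abstract's criteria
def rmse(actual, forecast):
    e = np.asarray(actual) - np.asarray(forecast)
    return float(np.sqrt(np.mean(e ** 2)))

def mae(actual, forecast):
    e = np.asarray(actual) - np.asarray(forecast)
    return float(np.mean(np.abs(e)))

# Toy holdout in the style of the 2011-2020 test window (illustrative values)
actual = np.array([1.0, 2.0, 3.0, 4.0])
forecast = np.array([1.1, 1.9, 3.2, 3.8])
print(rmse(actual, forecast), mae(actual, forecast))
```

RMSE squares the errors before averaging, so it penalizes occasional large misses more heavily than MAE does; reporting both, as the study does, guards against a ranking driven by a single bad forecast.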
This study investigates the question of structural breaks versus unit roots for four macroeconomic indicators of Pakistan: the unemployment rate, interest rate, GDP growth, and inflation rate. Previous studies leave ambiguity about whether these variables are stationary. We employ the Zivot and Andrews (1992) unit root test and the Step Indicator Saturation (SIS) method for detecting multiple breaks in the mean. GDP growth and the inflation rate are stationary at level, whereas standard unit root tests fail to reject the null hypothesis of a unit root for the unemployment rate and interest rate at level. However, the Zivot and Andrews test, which allows a single endogenous break, indicates that the unemployment rate and interest rate are stationary at level with a single endogenous break. The SIS method, in turn, reveals that the series are stationary with multiple structural breaks. It is therefore inappropriate to take the first difference of the unemployment rate and interest rate to attain stationarity. The results confirm that multiple breaks exist in the macroeconomic variables considered in the context of Pakistan.
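The core mechanics of endogenous break detection in the mean can be illustrated with a step dummy: regress the series on an intercept plus an indicator that switches on at each candidate break date, and keep the date with the largest absolute t-statistic. The sketch below does this for a simulated series with one level shift; it is a minimal single-dummy illustration of the idea behind SIS, not the full saturation-and-selection algorithm or the Zivot–Andrews test, and all names and settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated series with a level shift of 3 at t = 60 (assumption)
T, true_break = 120, 60
y = rng.standard_normal(T)
y[true_break:] += 3.0

best_t, best_stat = None, -np.inf
for tb in range(10, T - 10):                  # trim the sample edges
    d = (np.arange(T) >= tb).astype(float)    # step dummy switching on at tb
    X = np.column_stack([np.ones(T), d])
    beta, _, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    t_stat = abs(beta[1] / se)
    if t_stat > best_stat:
        best_t, best_stat = tb, t_stat
print(best_t)   # should land near the true break at 60
```

SIS generalizes this by saturating the regression with one step dummy per observation and letting a general-to-specific selection (as in Autometrics) retain the significant ones, which is how multiple breaks are found.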
Dynamic Stochastic General Equilibrium (DSGE) models are widely used as a tool for policy decision-making. These models lost credibility when they failed to predict the 2008 crisis and could not address policy problems afterward. Meanwhile, the Agent-Based Modelling (ABM) approach emerged as an alternative to DSGE models. This study examines scholarly research on ABM in economics between 2000 and 2020, with information gathered from the SCOPUS database. Numerous bibliometric indicators are provided, including the total numbers of publications and citations. The study reveals that agent-based modelling research in economics has grown in recent years. Most active research occurs in countries such as the United States of America, and collaboration allows researchers to reach many more countries. ABM has the potential to be applied in a wide range of economic fields, but it also requires research into its own development if it is to be used to better understand economic phenomena.
This research compares factor models based on principal component analysis (PCA) and partial least squares (PLS) with Autometrics, the elastic smoothly clipped absolute deviation (E-SCAD), and the minimax concave penalty (MCP) under different simulated schemes involving multicollinearity, heteroscedasticity, and autocorrelation, with varying sample sizes and numbers of covariates. We find that under low and moderate multicollinearity, MCP often produces superior forecasts, except in the small-sample case, where E-SCAD remains better. Under high multicollinearity, the PLS-based factor model remains dominant, although asymptotically the prediction accuracy of E-SCAD improves significantly relative to the other methods. Under heteroscedasticity, MCP performs very well and mostly beats the rival methods; in some large-sample circumstances, Autometrics provides forecasts similar to MCP's. Under low and moderate autocorrelation, MCP shows outstanding forecasting performance except in the small-sample case, where E-SCAD produces a remarkable forecast. Under extreme autocorrelation, E-SCAD outperforms the rival techniques in both small and medium samples, but a further increase in sample size makes the MCP forecast comparatively more accurate. To compare the predictive ability of all methods empirically, we split the data into two parts: data over 1973–2007 as training data and data over 2008–2020 as testing data. Based on root mean square error and mean absolute error, the PLS-based factor model outperforms the competing models in forecasting performance.
The existence of outliers and structural breaks of mutually unknown nature in time series data challenges analysts in model identification, estimation, and validation, and the detection of such outliers has long been an important area of time series research. To analyze the impact of structural breaks and outliers on model identification, estimation, and inference, we use two data generating processes: MA(1) and ARMA(1,1). The performance of the test statistics for detecting additive outliers (AO), innovative outliers (IO), level shifts (LS), and transient changes (TC) is investigated by simulation through the power of the test, the empirical level of significance, empirical critical values, misspecification frequencies, and the sampling distributions of the estimators for the two models. The empirical critical values are found to be higher than the theoretical cut-off points, and the empirical power of the test statistics is unsatisfactory for small sample sizes, large cut-off points, and large model coefficients. We explore confusion between LS, AO, TC, and IO at different critical values (c) by varying the sample size. We also collect empirical evidence from Pakistani time series data, using a 3-stage iterative procedure to detect multiple outliers and structural breaks. We find that neglecting shocks leads to wrong identification, biased estimation, and excess kurtosis.
JEL Classification Codes: C15, C18, C63, C32, C87, C51, C52, C82
AMS Classification Codes: 62, 65, 91, DI, 62-08, 62J20, 00A72, 91-08, 91-10, 91-11, 62P20, 91B82, 91B84, 62M07, 62M09, 62M10, 62M15, 62M20
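The simplest of the four shock types, the additive outlier, affects a single observation only, so it can be screened for by flagging points that deviate too far from a robust center of the series. The sketch below simulates a contaminated MA(1) series and applies such a screen; it is a crude robust-threshold illustration under assumed settings, not the 3-stage iterative procedure or the formal AO/IO/LS/TC test statistics studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an MA(1) series and contaminate it with one additive outlier
T, theta = 200, 0.5
eps = rng.standard_normal(T + 1)
y = eps[1:] + theta * eps[:-1]
y[100] += 6.0                               # additive outlier at t = 100

# Crude AO screen: flag observations whose deviation from the median
# exceeds 3.5 robust standard deviations (MAD-based sigma estimate).
med = np.median(y)
mad = np.median(np.abs(y - med)) * 1.4826   # consistency factor for Gaussian data
flags = np.where(np.abs(y - med) / mad > 3.5)[0]
print(flags)    # the contaminated index 100 should appear here
```

The median and MAD are used instead of the mean and standard deviation precisely because the outlier itself would inflate the latter and mask its own detection; this masking effect is one reason the iterative detect-adjust-refit schemes referenced above are needed for multiple shocks.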
In this article, we compare Autometrics with machine learning techniques including the minimax concave penalty (MCP), elastic smoothly clipped absolute deviation (E-SCAD), and adaptive elastic net (AEnet). For the simulation experiments, three scenarios are considered, allowing for multicollinearity, heteroscedasticity, and autocorrelation, with varying sample sizes and numbers of covariates. We find that all methods improve their performance as the sample size grows. Under low and moderate multicollinearity and low and moderate autocorrelation, the considered methods retain all relevant variables. However, under low and moderate multicollinearity, all methods except AEnet also keep many irrelevant predictors, whereas under low and moderate autocorrelation, Autometrics, along with AEnet, retains fewer irrelevant predictors. Under extreme multicollinearity, AEnet retains more than 93 percent of the correct variables with an outstanding gauge (zero percent), while the potency of the remaining techniques, specifically MCP and E-SCAD, tends towards unity as the sample size grows, at the cost of capturing massive numbers of irrelevant predictors. Similarly, under high autocorrelation, E-SCAD selects relevant variables well in small samples, while in terms of gauge, Autometrics and AEnet perform better and often retain fewer than 5 percent irrelevant variables. Under heteroscedasticity, all techniques usually hold all relevant variables but also suffer from overspecification, except AEnet and Autometrics, which avoid the irrelevant predictors and recover the true model precisely. For an empirical application, we consider workers' remittance data for Pakistan along with its twenty-seven determinants, spanning 1972 to 2020.
AEnet selected thirteen relevant covariates of workers' remittance, while E-SCAD and MCP suffered from overspecification. Hence, policymakers and practitioners should focus on the relevant variables selected by AEnet to improve workers' remittance in the case of Pakistan. In this regard, the government of Pakistan has devised policies that make it easier to transfer remittances legally and that mitigate the cost of transferring remittances from abroad. The AEnet approach can help policymakers identify the relevant variables in the presence of a huge set of covariates, which in turn produces accurate predictions.
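The multicollinearity scenarios varied in simulations like those above are typically generated by drawing predictors with a controlled pairwise correlation ρ. A minimal sketch of that mechanism using a Cholesky factor of an equicorrelation matrix (the dimensions and ρ values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(4)

# Draw n observations of p predictors with pairwise correlation rho,
# the multicollinearity "knob" varied across simulation scenarios.
def correlated_X(n, p, rho, rng):
    S = np.full((p, p), rho)       # equicorrelation matrix
    np.fill_diagonal(S, 1.0)
    L = np.linalg.cholesky(S)      # S = L @ L.T
    return rng.standard_normal((n, p)) @ L.T

X_low = correlated_X(500, 10, 0.2, rng)    # low multicollinearity
X_high = correlated_X(500, 10, 0.9, rng)   # extreme multicollinearity

# Empirical off-diagonal correlations track the target rho
c_low = np.corrcoef(X_low, rowvar=False)
c_high = np.corrcoef(X_high, rowvar=False)
off = ~np.eye(10, dtype=bool)
print(c_low[off].mean().round(2), c_high[off].mean().round(2))
```

Dialing ρ from 0.2 up to 0.9 is what turns a well-conditioned design into the extreme-multicollinearity setting where, as reported above, selection methods diverge most sharply in gauge and potency.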