We define the Bernstein copula and study its statistical properties in terms of both distributions and densities. We also develop a theory of approximation for multivariate distributions in terms of Bernstein copulas. Rates of consistency when the Bernstein copula density is estimated empirically are given. In order of magnitude, this estimator has variance equal to the square root of the variance of common nonparametric estimators, e.g., kernel smoothers, but it is biased like a histogram estimator. We thank Mark Salmon for interesting us in the copula function.
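As a rough illustration of the estimator discussed above, the empirical Bernstein copula density can be computed from rank-based pseudo-observations by second-order differencing the empirical copula on a grid and smoothing with binomial (Bernstein) weights. This is a minimal sketch: the function name, interface, and the choice of degree m are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import binom, rankdata

def bernstein_copula_density(x, y, u, v, m):
    """Empirical Bernstein copula density estimate at (u, v), degree m."""
    n = len(x)
    # pseudo-observations via normalized ranks
    U = rankdata(x) / n
    V = rankdata(y) / n
    # empirical copula on the (m+1) x (m+1) grid {j/m}
    grid = np.arange(m + 1) / m
    C = np.mean((U[:, None, None] <= grid[None, :, None])
                & (V[:, None, None] <= grid[None, None, :]), axis=0)
    # second-order differences give the mass of each grid cell
    dC = np.diff(np.diff(C, axis=0), axis=1)          # shape (m, m)
    # Bernstein (binomial) weights of degree m - 1
    j = np.arange(m)
    bu = binom.pmf(j, m - 1, u)
    bv = binom.pmf(j, m - 1, v)
    return m * m * (bu @ dC @ bv)
```

The estimate is a polynomial smoother of the empirical copula, which is what produces the histogram-like bias but reduced variance noted in the abstract.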
In many prediction problems, it is not uncommon that the number of variables used to construct a forecast is of the same order of magnitude as the sample size, if not larger. We then face the problem of constructing a prediction in the presence of potentially large estimation error. Control of the estimation error is achieved either by selecting variables or by combining all the variables in some special way. This paper considers greedy algorithms to solve this problem. It is shown that the resulting estimators are consistent under weak conditions. In particular, the derived rates of convergence are either minimax or improve on those given in the literature, allowing for dependence and unbounded regressors. Some versions of the algorithms provide a fast solution to problems such as the Lasso. Comment: Published at http://dx.doi.org/10.3150/14-BEJ691 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
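A minimal sketch of one such greedy algorithm, here orthogonal matching pursuit (the abstract does not say which variant the paper uses; the function and its interface are assumptions): at each step, select the regressor most correlated with the current residual, then refit by least squares on the active set.

```python
import numpy as np

def omp(X, y, n_steps):
    """Orthogonal matching pursuit: greedily add the regressor most
    correlated with the residual, then refit by least squares."""
    n, p = X.shape
    active = []
    resid = y.copy()
    beta = np.zeros(p)
    coef = np.zeros(0)
    for _ in range(n_steps):
        corr = X.T @ resid                      # correlation with residual
        j = int(np.argmax(np.abs(corr)))
        if j not in active:
            active.append(j)
        coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        resid = y - X[:, active] @ coef         # refit residual
    beta[active] = coef
    return beta
```

Each iteration costs one least-squares solve on a small active set, which is why greedy schemes of this type can be fast relative to solving the full Lasso program.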
For high-dimensional data sets, the sample covariance matrix is unbiased but noisy if the sample is not large enough. Shrinking the sample covariance towards a constrained, low-dimensional estimator can mitigate the sample variability. By doing so, we introduce bias but reduce variance. In this paper, we give details on feasible optimal shrinkage allowing for time-series-dependent observations.
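A hedged sketch of linear shrinkage toward a scaled-identity target: the plug-in intensity below follows the i.i.d. Ledoit-Wolf recipe, whereas the paper's contribution is precisely to adjust this intensity for time-series dependence. The function name and target choice are illustrative assumptions.

```python
import numpy as np

def shrink_covariance(X, delta=None):
    """Convex combination (1 - delta) * S + delta * F of the sample
    covariance S and a scaled-identity target F (Ledoit-Wolf style,
    i.i.d. case only -- dependence would change the optimal delta)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    mu = np.trace(S) / p                  # target scale: mean eigenvalue
    F = mu * np.eye(p)
    if delta is None:
        # plug-in estimate of E||S - Sigma||^2 relative to ||S - F||^2
        b2 = np.mean([np.linalg.norm(np.outer(x, x) - S, "fro") ** 2
                      for x in Xc]) / n
        d2 = np.linalg.norm(S - F, "fro") ** 2
        delta = min(1.0, b2 / d2)
    return (1 - delta) * S + delta * F
```

The convex combination pulls extreme sample eigenvalues toward their mean, which is the bias-variance trade-off described in the abstract.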
Abstract: We examine stock index and Treasury futures markets around releases of U.S. macroeconomic announcements. Seven out of 21 market-moving announcements show evidence of substantial informed trading before the official release time. Prices begin to move in the "correct" direction about 30 minutes before the release time. The pre-announcement price drift accounts on average for about half of the total price adjustment. These results imply that some traders have private information about macroeconomic fundamentals. The evidence suggests that the pre-announcement drift likely comes from a combination of information leakage and superior forecasting based on proprietary data collection and reprocessing of public information. Keywords: Macroeconomic news announcements; financial markets; pre-announcement effect; drift; informed trading. JEL classification: E44; G14; G15. ECB Working Paper 1901, May 2016. Non-technical Summary: Macroeconomic indicators play an important role in business cycle forecasting and are closely watched by financial markets. Some of these indicators appear to influence financial market prices even ahead of their official release time. This paper examines the prevalence of pre-announcement price drift in U.S.
stock and bond markets and looks for possible explanations. We study the impact of announcements on second-by-second E-mini S&P 500 stock index futures. The difficulty of identifying the causes of pre-announcement drift stems from the relatively small number of announcements that actually move financial markets. Nevertheless, we find that an implementation of strict release procedures makes pre-release drift less likely. This applies in particular to data released under the Principal Federal Economic Indicator (PFEI) guidelines, which impose strict security procedures. There is no evidence that modifying the calculation of market expectations, e.g., a focus on the most recent survey responses, helps in predicting the commonly used announcement surprise. Public information, such as internet activity data, predicts the surprise in a few cases where the public information closely corresponds to the forecasting target. Analogously, improvements in data processing make it feasible to privately collect large amounts of comparable information, which can be used for generating proprietary forecasts ahead of time. This early information, leaked or self-calculated, does not need to be precise in order to have a large price impact. Un...
This paper studies a procedure for combining individual forecasts that achieves theoretically optimal performance. The results apply to a wide variety of loss functions and require only a tail condition on the data sequences. The theoretical results show that the bounds remain valid in the case of time-varying combination weights.
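One standard procedure of this kind is exponentially weighted forecast combination, where each forecaster's weight decays with its cumulative past loss, so the weights vary over time. The sketch below uses squared-error loss; the function name, the loss choice, and the learning rate eta are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def combine_forecasts(forecasts, y, eta=0.5):
    """Exponentially weighted forecast combination.
    forecasts: (T, K) array of K experts' forecasts; y: (T,) outcomes.
    The weight at time t depends only on losses observed before t."""
    T, K = forecasts.shape
    cum_loss = np.zeros(K)
    combined = np.empty(T)
    for t in range(T):
        # shift by the minimum for numerical stability before exponentiating
        w = np.exp(-eta * (cum_loss - cum_loss.min()))
        w /= w.sum()
        combined[t] = w @ forecasts[t]
        cum_loss += (forecasts[t] - y[t]) ** 2      # squared-error loss
    return combined
```

Because the weights are recomputed each period from past losses only, the combination adapts when the best individual forecaster changes over time, which is the time-varying-weights setting mentioned in the abstract.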