Stable distributions have a wide sphere of application: probability theory, physics, electronics, economics, and sociology. They play a particularly important role in financial mathematics, since classical models of the financial market, which are based on the normality hypothesis, often prove inadequate. However, the practical implementation of stable models is a nontrivial task, because the probability density functions of α‐stable distributions have no analytical representation (with a few exceptions). In this work we exploit parallel computing technologies to accelerate the numerical solution of stable-modelling problems. Specifically, we solve the stable-law parameter estimation problem by the maximum likelihood method. When a large number of long financial series must be processed, only parallel technologies allow results to be obtained in adequate time. We have distinguished and defined several hierarchical levels of parallelism. We show that coarse‐grained Multi‐Sets (MS) parallelization is very efficient on computer clusters, while fine‐grained Maximum Likelihood level parallelization is very efficient on shared-memory machines with symmetric multiprocessing and Hyper‐Threading technologies. A hybrid application utilizing both of those levels has shown superior performance compared to the single-level (MS) parallel application on a cluster of Pentium 4 HT nodes.
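Because the α-stable density has no closed form, a common workaround (a minimal sketch below, not the authors' implementation) is to recover the density by numerically inverting the characteristic function and then maximize the likelihood over the parameters. The sketch is restricted to the symmetric, unit-scale case (β = 0, σ = 1) and uses a simple grid search in place of a full optimizer:

```python
# Minimal sketch (not the authors' code): the alpha-stable pdf has no
# closed form, so we recover it by numerically inverting the
# characteristic function, then estimate alpha by maximum likelihood.
# Symmetric, unit-scale case (beta = 0, sigma = 1) for brevity.
import numpy as np

def sas_pdf(x, alpha, t_max=50.0, n=4000):
    """Symmetric alpha-stable density via cf inversion:
    f(x) = (1/pi) * integral_0^inf cos(t*x) * exp(-t**alpha) dt."""
    t, dt = np.linspace(1e-10, t_max, n, retstep=True)
    integrand = np.cos(np.outer(np.atleast_1d(x), t)) * np.exp(-t ** alpha)
    w = np.full(n, dt)
    w[0] = w[-1] = dt / 2               # trapezoid-rule weights
    return integrand @ w / np.pi

def neg_log_lik(alpha, data):
    pdf = sas_pdf(data, alpha)
    return -np.sum(np.log(np.maximum(pdf, 1e-300)))

# Grid-search MLE for alpha on a synthetic Cauchy sample (alpha = 1);
# extreme tail points are dropped so the quadrature stays accurate.
rng = np.random.default_rng(0)
data = rng.standard_cauchy(1000)
data = data[np.abs(data) < 20]
alphas = np.linspace(0.6, 2.0, 29)
alpha_hat = alphas[int(np.argmin([neg_log_lik(a, data) for a in alphas]))]
```

Evaluating the density numerically at every data point for every candidate parameter vector is what makes maximum-likelihood estimation so expensive here, and it is the natural target for the parallelization the abstract describes.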
In this paper, we develop efficient parallel algorithms for the statistical processing of large data sets. Namely, we parallelize the maximum likelihood method for the estimation of the parameters of the mixed-stable model, a method known to be very computationally demanding. German DAX stock index data serve as the empirical data in this work. Several hierarchical levels of parallelism were distinguished, analyzed and implemented using OpenMP and the MPI library. Parallel performance tests were conducted on the IBM SP6 supercomputer. The obtained performance results show that the implemented parallel algorithms are very efficient and scalable on distributed- and shared-memory systems: speedups of up to 800 times were obtained with 1024 parallel processes. Notably, our parallel application is able to efficiently utilize the simultaneous multithreading (Intel Hyper-Threading) technology of modern processors. This research demonstrates that the application of modern parallel technologies allows fast and accurate estimation of mixed-stable parameters even for large amounts of data, and it promotes a wider use of stable modelling in statistical data processing.
The paper extends the study of applying mixed-stable models to the analysis of large sets of high-frequency financial data. The empirical data under review are the yearly log-return series of the German DAX stock index. Mixed-stable models for 29 DAX companies are constructed employing efficient parallel algorithms for the processing of long-term data series. The adequacy of the modelling is verified with the empirical characteristic function goodness-of-fit test. We propose the smart-Δ method for the calculation of the α-stable probability density function, and we study the impact of the accuracy of the density computation and of the ML optimization on the modelling results and the processing time. The obtained mixed-stable parameter estimates can be used for the construction of an optimal asset portfolio.
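Goodness-of-fit tests based on the empirical characteristic function are natural for stable laws, because the characteristic function (unlike the density) has a simple closed form. The paper's exact statistic is not reproduced here; a simplified stand-in for the symmetric case, comparing the empirical cf with the model cf exp(-|t|^α) in a weighted L2 sense, might look like:

```python
# Simplified sketch (not the paper's statistic): compare the empirical
# characteristic function with the symmetric alpha-stable cf
# exp(-|t|**alpha) over a grid of t values.
import numpy as np

def ecf(t, x):
    """Empirical characteristic function phi_n(t) = mean(exp(i*t*X))."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def ecf_distance(x, alpha, t=None):
    """Mean squared distance between empirical and model cf -- a
    stand-in for the ecf goodness-of-fit statistic."""
    if t is None:
        t = np.linspace(0.05, 3.0, 60)
    phi_model = np.exp(-np.abs(t) ** alpha)
    return np.mean(np.abs(ecf(t, x) - phi_model) ** 2)

rng = np.random.default_rng(1)
cauchy = rng.standard_cauchy(5000)       # stable with alpha = 1
d_good = ecf_distance(cauchy, alpha=1.0)  # correct model: small distance
d_bad = ecf_distance(cauchy, alpha=2.0)   # Gaussian model: larger distance
```

A small distance under the fitted parameters and a clearly larger one under a misspecified model is the qualitative behaviour such a test exploits.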
This paper considers the problem of portfolio selection using high-frequency financial time series. Such series often exhibit a stagnation effect, when the assets' returns do not change, which causes considerable difficulties in their analysis and modelling. In classical statistics, when the distribution law possesses its first two moments, i.e. mean and variance, the relationship between two random variables is described by the covariance or the correlation. However, if the financial data follow a stable law, an assumption that empirical studies often support, the covariance and especially the correlation often cannot be calculated. In this work, alternative relation measures are applied to the portfolio selection problem using mixed-stable modelling. The modelling is applied to high-frequency financial time series obtained from the German DAX index intra-daily data. The performance of the mixed-stable model is compared with alternative approaches. The portfolio selection problem is formulated as an optimization problem, with the covariances replaced by generalized power-correlations. The results of a portfolio selection strategy without the relationship-coefficient matrix are also presented.
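Structurally, replacing covariances by an alternative relation measure leaves the Markowitz-style optimization intact: the quadratic risk term simply uses a different pairwise matrix. The paper's generalized power-correlations are not reproduced here; the sketch below uses a rank (Spearman-style) correlation as an illustrative stand-in, plugged into a minimum-risk, fully-invested portfolio with its closed-form solution w ∝ S⁻¹1:

```python
# Sketch of the portfolio step with covariance replaced by a generic
# pairwise relation matrix. A rank correlation is a hypothetical
# stand-in for the paper's generalized power-correlations.
import numpy as np

def rank_corr(X):
    """Pairwise Spearman-style correlation of the columns of X,
    computed as the ordinary correlation of the ranks."""
    ranks = X.argsort(axis=0).argsort(axis=0).astype(float)
    return np.corrcoef(ranks, rowvar=False)

def min_risk_weights(relation, scales):
    """Minimum-risk fully-invested weights for risk w' S w with
    S = diag(scales) @ relation @ diag(scales); solution w ~ S^{-1} 1."""
    S = np.diag(scales) @ relation @ np.diag(scales)
    w = np.linalg.solve(S, np.ones(len(scales)))
    return w / w.sum()

rng = np.random.default_rng(2)
returns = rng.standard_t(df=3, size=(500, 4))   # heavy-tailed returns
scales = returns.std(axis=0)                     # illustrative scale proxy
w = min_risk_weights(rank_corr(returns), scales)
```

Rank-based measures remain well defined even when second moments do not exist, which is exactly the situation the abstract describes for stable-distributed returns.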