When studying the multilinear PageRank problem, a system of polynomial equations must be solved. In this paper, we develop convergence theory for a modified Newton method in a particular parameter regime. The sequence of vectors produced by the Newton-like method is monotonically increasing and converges to the nonnegative solution. Numerical results illustrate the effectiveness of this procedure.
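For concreteness, the multilinear PageRank equation for a third-order tensor can be written as x = αR(x ⊗ x) + (1 − α)v, where R is the n × n² column-stochastic flattening of the tensor, v is a stochastic vector, and α ∈ (0, 1). The sketch below applies a plain Newton iteration to the residual of this equation; it is not the paper's modified scheme, and the function name, the starting vector, and the suggestion α < 1/2 (the regime where the solution is known to be unique) are our own choices.

    import numpy as np

    def multilinear_pagerank_newton(R, v, alpha, tol=1e-12, max_iter=50):
        """Plain Newton iteration for x = alpha*R*(x kron x) + (1-alpha)*v.
        R: (n, n*n) column-stochastic flattening; v: (n,) stochastic vector."""
        n = v.size
        I = np.eye(n)
        x = v.copy()                      # start from the teleportation vector
        for _ in range(max_iter):
            F = alpha * (R @ np.kron(x, x)) + (1 - alpha) * v - x
            if np.linalg.norm(F, 1) < tol:
                break
            # Jacobian of the residual: alpha*R*(x kron I + I kron x) - I
            xc = x.reshape(-1, 1)
            J = alpha * (R @ (np.kron(xc, I) + np.kron(I, xc))) - I
            x = x - np.linalg.solve(J, F)
        return x

On a toy problem (say n = 3 with a random column-stochastic R, a uniform v, and α = 0.49), a handful of iterations drives the residual to machine precision.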
Accurate prediction of stock prices is a difficult task. The long short-term memory (LSTM) neural network and the transformer are widely used machine learning models for time series forecasting. In this paper, we use the LSTM and the transformer to predict prices of banking stocks in China's A-share market, and we show that organizing the input data helps the models produce accurate results. We first introduce some basic facts about the LSTM and present prediction results using a standard LSTM model. Then, we show how to organize the input data during the training period and give comparison results for both the LSTM and the transformer model. The numerical results show that the predictions of both models improve after the input data are organized during training.
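The abstract does not spell out how the inputs are organized; as a generic illustration of preparing price data for sequence models of this kind, one common step is to slice the series into fixed-length lookback windows with the next price as the target. The helper below is a hypothetical sketch, not the paper's procedure.

    import numpy as np

    def make_windows(prices, lookback):
        """Slice a 1-D price series into supervised (X, y) pairs:
        X[i] holds `lookback` consecutive prices, y[i] is the next one."""
        X = np.stack([prices[i:i + lookback]
                      for i in range(len(prices) - lookback)])
        y = prices[lookback:]
        return X[..., None], y            # (samples, steps, 1) for an LSTM

The resulting (samples, time steps, features) array is the input shape LSTM and transformer implementations typically expect; normalizing each window (for example, by its first price) is another common organizing step.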
A large-scale nonsymmetric algebraic Riccati equation XCX − XE − AX + B = 0 arising in transport theory is considered, where the n × n coefficient matrices B and C are symmetric and of low rank, and A and E are rank-one updates of nonsingular diagonal matrices. By introducing a balancing strategy and choosing appropriate initial matrices carefully, we can simplify the large-scale structure-preserving doubling algorithm (SDA_ls) for this special equation. We present a modified large-scale structure-preserving doubling algorithm that reduces the flop count of the original SDA_ls by half. Numerical experiments illustrate the effectiveness of our method.
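For orientation, the basic dense SDA for a nonsymmetric algebraic Riccati equation (before the low-rank and balancing refinements the abstract describes) can be sketched as follows. The initialization follows the usual Cayley-transform construction; the parameter gamma and all names are our own, and the input matrix D plays the role of E in the equation above so as not to clash with the iterate names.

    import numpy as np

    def sda_nare(A, B, C, D, gamma, tol=1e-13, max_iter=60):
        """Basic dense SDA sketch for XCX - XD - AX + B = 0.
        Returns an approximation to the minimal nonnegative solution X."""
        n, m = D.shape[0], A.shape[0]
        In, Im = np.eye(n), np.eye(m)
        Ag, Dg = A + gamma * Im, D + gamma * In
        W = Ag - B @ np.linalg.solve(Dg, C)       # Cayley-transform blocks
        V = Dg - C @ np.linalg.solve(Ag, B)
        Ek = In - 2 * gamma * np.linalg.inv(V)
        Fk = Im - 2 * gamma * np.linalg.inv(W)
        Gk = 2 * gamma * np.linalg.solve(Dg, C) @ np.linalg.inv(W)
        Hk = 2 * gamma * np.linalg.solve(W, B) @ np.linalg.inv(Dg)
        for _ in range(max_iter):
            M1 = np.linalg.inv(In - Gk @ Hk)      # updates use old iterates
            M2 = np.linalg.inv(Im - Hk @ Gk)
            H_new = Hk + Fk @ M2 @ Hk @ Ek
            if np.linalg.norm(H_new - Hk, 1) <= tol * np.linalg.norm(H_new, 1):
                return H_new
            Gk = Gk + Ek @ M1 @ Gk @ Fk
            Ek, Fk, Hk = Ek @ M1 @ Ek, Fk @ M2 @ Fk, H_new
        return Hk

The modified SDA_ls in the paper exploits the low-rank structure of B and C so that the iterates can be kept in factored form; the dense sketch above is only meant to show the doubling recursion itself.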
The convolutional neural network is an important model in deep learning. To avoid exploding/vanishing gradient problems and to improve the generalizability of a neural network, it is desirable to have a convolution operation that nearly preserves the norm, or, equivalently, to have the singular values of the transformation matrix corresponding to a convolutional kernel bounded around 1. We propose a penalty function that can be used in the optimization of a convolutional neural network to constrain the singular values of the transformation matrix around 1. We derive an algorithm that carries out the gradient-descent minimization of this penalty function in terms of the convolution kernels. Numerical examples are presented to demonstrate the effectiveness of the method.
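To make the penalized object concrete: for a single-channel, stride-1 circular convolution on an n × n input, the transformation matrix is diagonalized by the 2-D Fourier basis, so its singular values are the moduli of the 2-D DFT of the zero-padded kernel. The sketch below uses this fact to evaluate a simple sum-of-squares penalty pushing the singular values toward 1; the paper's actual penalty function and its kernel-space gradient algorithm are not reproduced here, and the function names are our own.

    import numpy as np

    def circ_conv_singular_values(kernel, n):
        """Singular values of the stride-1 circular convolution matrix
        of a single-channel 2-D kernel acting on n x n inputs."""
        padded = np.zeros((n, n))
        kh, kw = kernel.shape
        padded[:kh, :kw] = kernel
        # the convolution matrix is (block-)circulant, so its singular
        # values are the moduli of the 2-D DFT of the padded kernel
        return np.abs(np.fft.fft2(padded)).ravel()

    def unit_singular_value_penalty(kernel, n):
        """Illustrative penalty: sum of (sigma - 1)^2 over singular values."""
        s = circ_conv_singular_values(kernel, n)
        return np.sum((s - 1.0) ** 2)

For multi-channel convolutions the per-frequency blocks are small matrices rather than scalars, and the singular values come from per-block SVDs instead of absolute values.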
To determine the stationary distribution of a discrete-time quasi-birth-death Markov chain, it is necessary to find the minimal nonnegative solution of a quadratic matrix equation. We apply the Newton-Shamanskii method to solve this equation. We show that the sequence of matrices generated by the Newton-Shamanskii method is monotonically increasing and converges to the minimal nonnegative solution of the equation. Numerical experiments show the effectiveness of our method.
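For a QBD chain with block transition matrices A0, A1, A2, one such quadratic equation is G = A0 + A1 G + A2 G², whose minimal nonnegative solution G determines the stationary distribution. The sketch below illustrates the Newton-Shamanskii idea, freezing the Fréchet derivative for a few steps between refactorizations, using a Kronecker-vectorized linear solve that is suitable only for small blocks; the parameter names and this specific equation form are our assumptions, not necessarily those of the paper.

    import numpy as np

    def newton_shamanskii_qbd(A0, A1, A2, inner=2, tol=1e-12, max_outer=50):
        """Newton-Shamanskii sketch for G = A0 + A1 G + A2 G^2.
        The derivative is reused for `inner` steps per outer step."""
        n = A0.shape[0]
        I = np.eye(n)
        G = np.zeros_like(A0)     # zero start gives monotone iterates
        for _ in range(max_outer):
            # Frechet derivative at G: H -> (A1 + A2 G - I) H + A2 H G
            P = A1 + A2 @ G - I
            J = np.kron(I, P) + np.kron(G.T, A2)
            for _ in range(inner):            # reuse the frozen derivative
                F = A0 + A1 @ G + A2 @ G @ G - G
                if np.linalg.norm(F, 1) < tol:
                    return G
                h = np.linalg.solve(J, -F.ravel(order='F'))
                G = G + h.reshape(n, n, order='F')
        return G

With inner=1 this reduces to Newton's method; larger inner values trade extra residual evaluations against fewer factorizations of the derivative.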