Stock price movement is a topic of considerable current interest. Prices change dynamically over time, and determining trends and predicting future stock prices requires careful analysis. Many methods have been used to analyze and predict stock prices. This paper analyzes the acceleration of stock price changes using a mathematical approach based on a second-order differential equation. The benefit of this research is a coefficient of change in stock prices that can be used to predict future prices. The stocks observed are those in the LQ45 category, and the analysis is implemented in Matlab. At the end of the study, the coefficient of price change for LQ45 stocks is estimated from the historical data provided.
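The abstract does not reproduce the paper's actual differential equation or its Matlab code. The following is a minimal Python sketch of the general idea, assuming the "acceleration" is approximated by a second finite difference of daily closing prices and that a coefficient of change is obtained by a simple least-squares fit; the column values, the sample series, and the fitted form are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: estimating the "acceleration" (second derivative) of a
# stock price series with central finite differences. The sample data and the
# least-squares fit below are illustrative assumptions, not the paper's model.
import numpy as np

def price_acceleration(prices, dt=1.0):
    """Approximate d^2P/dt^2 with a second central difference over equally spaced samples."""
    prices = np.asarray(prices, dtype=float)
    return (prices[2:] - 2.0 * prices[1:-1] + prices[:-2]) / dt**2

# Toy historical closing prices (not real LQ45 data).
close = np.array([100.0, 101.5, 103.2, 102.8, 104.1, 105.0, 104.3])

accel = price_acceleration(close)   # second-difference estimate of acceleration
veloc = np.gradient(close)          # first-difference estimate of velocity

# One possible "coefficient of change": fit P'' ~ a*P' + b*P by least squares.
A = np.column_stack([veloc[1:-1], close[1:-1]])
coeffs, *_ = np.linalg.lstsq(A, accel, rcond=None)
print("acceleration per day:", accel)
print("fitted coefficients (a, b):", coeffs)
```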
The Takagi-Sugeno-Kang (TSK) fuzzy approach is popular because its output is either a constant or a function. Parameter identification and structure identification are the two key requirements for building a TSK fuzzy system. The input used in a TSK fuzzy system affects the number of rules produced: more data dimensions typically yield more rules, which increases rule complexity. This issue can be addressed with a dimension reduction technique that reduces the number of dimensions in the data. The resulting rules are then optimized with mini-batch gradient descent (MBGD), modified with uniform regularization (UR). UR can improve the generalization performance of the TSK fuzzy classifier. This study examines how the rough set method can be used to reduce data dimensions and how Mini-Batch Gradient Descent with Uniform Regularization (MBGD-UR) can be used to optimize the rules produced by TSK. Body fat data from 252 respondents were used as input, and the mean absolute percentage error (MAPE) was used to evaluate the results. Data processing was carried out in the Python programming language with Jupyter Notebook. The analysis showed a MAPE value of 37%, which falls into the moderate range. DOI: 10.28991/ESJ-2023-07-03-09
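To make the pipeline concrete, the sketch below shows a zero-order TSK fuzzy system (Gaussian memberships, constant rule consequents) trained with plain mini-batch gradient descent and evaluated with MAPE, in Python as in the paper. The rough-set dimension reduction step and the uniform-regularization term are not reproduced here, and the synthetic data, rule count, and learning rate are assumptions for illustration only.

```python
# Illustrative sketch: zero-order TSK fuzzy system trained with mini-batch
# gradient descent, evaluated with MAPE. The rough-set reduction and UR term
# from the paper are omitted; data and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the reduced body-fat features (252 samples, 3 features).
X = rng.normal(size=(252, 3))
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=1.0, size=252)

n_rules = 5
centers = X[rng.choice(len(X), n_rules, replace=False)]   # rule centers
sigmas = np.ones((n_rules, X.shape[1]))                   # rule widths
consequents = np.zeros(n_rules)                           # constant rule outputs

def firing_strengths(Xb):
    # Product of Gaussian memberships per rule, normalized across rules.
    d = Xb[:, None, :] - centers[None, :, :]
    w = np.exp(-0.5 * np.sum((d / sigmas) ** 2, axis=2))
    return w / (w.sum(axis=1, keepdims=True) + 1e-12)

def predict(Xb):
    return firing_strengths(Xb) @ consequents

lr, batch_size = 0.1, 32
for epoch in range(200):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        W = firing_strengths(X[idx])
        err = W @ consequents - y[idx]
        # Gradient of mean squared error with respect to the constant consequents.
        consequents -= lr * (W.T @ err) / len(idx)

mape = np.mean(np.abs((y - predict(X)) / y)) * 100
print(f"training MAPE: {mape:.1f}%")
```

In the paper, the consequent update would additionally carry the UR penalty, which encourages the rules to fire more uniformly; here only the plain MBGD step is shown.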
Optimization is one of the factors in machine learning that supports model training during backpropagation. It works by adjusting the weights to minimize the loss function and to overcome dimensionality problems. The gradient descent method is a simple backpropagation approach for solving minimization problems, and mini-batch gradient descent (MBGD) has proven powerful for large-scale learning. Adding several techniques to MBGD, such as AB, BN, and UR, can accelerate convergence, making the algorithm faster and more effective. The added methods optimize the data rules produced in earlier processing, which serve as the objective function. The processing results showed that the MBGD-AB-BN-UR method had a more stable computation time across the three data sets than the other methods. For model evaluation, this research used RMSE, MAE, and MAPE.
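For reference, the three evaluation metrics named in the abstract can be computed as below, here applied to a plain MBGD baseline on a linear model. The AB, BN, and UR extensions studied in the paper are not implemented in this sketch; the synthetic data and hyperparameters are assumptions.

```python
# Sketch of RMSE, MAE, and MAPE applied to a plain mini-batch gradient descent
# baseline on a linear model; the AB/BN/UR extensions are not implemented here.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=500)

w = np.zeros(4)
lr, batch_size = 0.05, 64
for epoch in range(100):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # MSE gradient
        w -= lr * grad

pred = X @ w
rmse = np.sqrt(np.mean((y - pred) ** 2))
mae = np.mean(np.abs(y - pred))
mape = np.mean(np.abs((y - pred) / y)) * 100  # assumes no zero-valued targets
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}  MAPE={mape:.2f}%")
```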