One of the fundamental problems of computer science is sorting a list of items, that is, arranging numerical, alphabetical, or character data in a specified order. Bubble, insertion, selection, merge, and quick sort are the most common algorithms, and their performance varies with the size of the list to be sorted. As a list grows, some sorting algorithms begin to outperform others, and in most cases programmers select algorithms that continue to perform well as the input grows. As the size of a dataset increases, so does the chance of duplicate values or other redundancies appearing in the list; for example, a list of the ages of students on a university campus is likely to contain many repeated values. A new algorithm is proposed that can sort faster than most sorting algorithms in such cases. The improved selection sort algorithm is a modification of the existing selection sort, but the number of passes needed to sort the list depends not solely on the size of the list but on the number of distinct values in the dataset. This offers far better performance than the classic selection sort when the list contains redundancies.
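The abstract does not reproduce the algorithm itself, so the following minimal Python sketch is one plausible reading of the idea rather than the authors' published code: each pass finds the minimum of the unsorted suffix and moves every occurrence of that value into place at once, so a list with d distinct values needs only d passes instead of n.

```python
def improved_selection_sort(a):
    """Sort a list in place using one pass per distinct value.

    Sketch of the idea described in the abstract (assumed
    implementation): each pass finds the minimum of the unsorted
    suffix and swaps *all* of its occurrences to the front, so a
    list with d distinct values needs only d passes.
    """
    i, n = 0, len(a)
    while i < n:
        m = min(a[i:])              # minimum of the unsorted suffix
        for j in range(i, n):
            if a[j] == m:           # move every copy of m into place
                a[i], a[j] = a[j], a[i]
                i += 1
    return a

# Example: 9 elements but only 3 distinct values -> 3 passes.
print(improved_selection_sort([3, 1, 2, 1, 3, 2, 1, 2, 3]))
```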
Background
When you make a forex transaction, you sell one currency and buy another; if the currency you buy appreciates against the currency you sell, you profit. Retail traders do this over the internet through a broker, typically using a platform such as MetaTrader. Only about 2% of retail traders can successfully predict currency movements, making forex forecasting one of the most challenging tasks in the market. Machine learning and its derivative or hybrid models are becoming increasingly popular in market forecasting, which is a rapidly developing field.
Objective
While the research community has surveyed the methodologies researchers use to forecast the forex market, there is still a need to examine how machine learning and artificial intelligence approaches have been applied to forex prediction and whether there are areas that can be improved to enable better predictions. Our objective is to give an overview of machine learning models and their application in the FX market.
Method
This study provides a Systematic Literature Review (SLR) of machine learning algorithms for FX market forecasting. Our review covers publications from 2010 to 2021; a total of 60 papers are considered. We examined them from two angles: (i) the design of the evaluation techniques, and (ii) a meta-analysis of the performance of machine learning models according to the evaluation metrics reported thus far.
Results
The results of the analysis suggest that the most commonly used assessment metrics are MAE, RMSE, MAPE, and MSE, and that EUR/USD, the most traded pair in the world, is the pair most frequently studied. LSTM and artificial neural networks are the most commonly used machine learning algorithms for FX market prediction. The findings also point to many unresolved concerns and challenges that the scientific community should address in the future.
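For reference, the four metrics named above have standard definitions, though individual papers may vary in details (e.g. reporting MAPE as a fraction rather than a percentage). A minimal Python sketch, assuming the standard formulas:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Compute the four error metrics most often reported in the
    surveyed papers, using their standard textbook definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    return {
        "MAE":  np.mean(np.abs(err)),                   # mean absolute error
        "MSE":  np.mean(err ** 2),                      # mean squared error
        "RMSE": np.sqrt(np.mean(err ** 2)),             # root mean squared error
        "MAPE": np.mean(np.abs(err / y_true)) * 100.0,  # mean absolute % error
    }

# Hypothetical EUR/USD closing prices vs. model forecasts.
actual   = [1.1012, 1.1034, 1.0987, 1.1051]
forecast = [1.1020, 1.1028, 1.1001, 1.1042]
print(forecast_metrics(actual, forecast))
```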
Conclusion
Based on our findings, we believe that machine learning approaches to currency prediction still have room for development. Researchers interested in developing more advanced strategies can use the open issues identified in this work as a starting point.
Generating a histogram from a given image is a common practice in the image-processing domain. The statistical information a histogram provides enables many pre-processing tasks in image processing and computer vision, and the statistical subtasks of most algorithms can be computed efficiently once the histogram of the image is known. Quantities such as the mean, median, mode, variance, and standard deviation are easily computed when the histogram of a dataset is available, and models for image brightness, entropy, contrast enhancement, threshold-value estimation, and image compression all rely on the histogram to get the work done. The challenge is that as the size of the image increases, so does the time needed to traverse all of its pixels, resulting in high computational cost for algorithms that use histogram generation as a subtask. In general, the time complexity of histogram generation can be estimated as O(N²) when the height and width of the image are roughly equal. This paper proposes an approximate method of histogram generation that significantly reduces the time required while producing histograms that remain acceptable for further processing. The method can theoretically reduce the computation time to a fraction of that of the exact method and still generate outputs of acceptable quality for algorithms such as Histogram Equalization (HE) for contrast enhancement and Otsu automatic threshold estimation.
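The abstract does not spell out the approximation, so the subsampling scheme below is an assumption used purely for illustration: visiting every step-th pixel in each dimension reduces the work from N² to (N/step)² while keeping the overall shape of the histogram, which is often enough for HE or Otsu thresholding.

```python
import numpy as np

def approx_histogram(img, step=4):
    """Approximate grayscale histogram by sampling every `step`-th
    pixel in each dimension and rescaling the counts.

    Assumed illustration of the "approximated histogram" idea, not
    the paper's exact method: (N/step)^2 pixels are visited instead
    of N^2, cutting the work by a factor of step^2.
    """
    sample = img[::step, ::step]
    hist = np.bincount(sample.ravel(), minlength=256)
    # Rescale so the total count matches the full image size.
    return hist * (img.size / sample.size)

# Example on a random 512x512 8-bit image.
img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
print(approx_histogram(img, step=4)[:8])
```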
For computer vision systems to effectively perform diagnosis, identification, tracking, monitoring, and surveillance, image data must be devoid of noise. Various types of noise, such as salt-and-pepper (impulse), Gaussian, shot, quantization, anisotropic, and periodic noise, corrupt images and make it difficult to extract relevant information from them, and many algorithms have been proposed to address the problem. Among them, the median filter has been successful at removing salt-and-pepper noise while preserving edges. However, its moderate-to-high running time and its poor performance on images corrupted with high noise densities have led to numerous proposed modifications, all of which face a trade-off between efficient running time and the quality of the denoised image. This paper proposes an algorithm that delivers high-quality denoised images with low running time. Two state-of-the-art algorithms are combined into one, and a technique called Mid-Value-Decision-Median is introduced to deliver high-quality denoised images in real time. The proposed algorithm, the High-Performance Modified Decision Based Median Filter (HPMDBMF), runs about 200 times faster than the state-of-the-art Modified Decision Based Median Filter (MDBMF) while generating equivalent output.
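HPMDBMF itself is not reproduced in the abstract. As background, here is a minimal sketch of the decision-based median idea it builds on (an assumption for illustration, not the authors' algorithm): only pixels at the extreme values 0 or 255, the usual salt-and-pepper signature, are replaced by the median of their neighbors, so clean pixels pass through untouched and edges are preserved.

```python
import numpy as np

def decision_based_median(img):
    """Minimal sketch of a decision-based median filter (background
    for HPMDBMF, not the proposed algorithm): only pixels flagged as
    salt (255) or pepper (0) are replaced; all others are kept."""
    out = img.copy()
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] in (0, 255):            # decision step
                window = padded[y:y + 3, x:x + 3]
                clean = window[(window > 0) & (window < 255)]
                # Prefer the median of noise-free neighbors; fall
                # back to the plain 3x3 median if none exist.
                out[y, x] = int(np.median(clean)) if clean.size else int(np.median(window))
    return out
```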