This paper reports empirical evidence that a neural network model is applicable to the prediction of foreign exchange rates. Time series data and technical indicators, such as moving averages, are fed to neural networks to capture the underlying "rules" of the movement in currency exchange rates. The exchange rates between the American Dollar and five other major currencies, the Japanese Yen, Deutsche Mark, British Pound, Swiss Franc and Australian Dollar, are forecast by the trained neural networks. The traditional rescaled range analysis is used to test the "efficiency" of each market before using historical data to train the neural networks. The results presented here show that, without the use of extensive market data or knowledge, useful predictions can be made and significant paper profits can be achieved on out-of-sample data with simple technical indicators. Further research on the exchange rate between the Swiss Franc and the American Dollar is also conducted; however, the experiments show that in an efficient market it is not easy to make profits using technical indicators or time-series-input neural networks. This article also discusses several issues concerning the frequency of sampling, the choice of network architecture, forecasting periods, and measures for evaluating the model's predictive power. After presenting the experimental results, a discussion of future research concludes the paper.
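The rescaled range (R/S) analysis mentioned above tests market efficiency by estimating the Hurst exponent H: a value near 0.5 indicates a random (efficient) market, while values closer to 1 indicate persistent, and hence more predictable, behaviour. The abstract does not give the authors' exact procedure, so the sketch below shows only the classic textbook form of R/S analysis: compute the range of cumulative deviations over the standard deviation for windows of increasing size, then regress log(R/S) on log(window size).

```python
import numpy as np

def rescaled_range(series):
    """Classic R/S statistic for a single window of the series."""
    series = np.asarray(series, dtype=float)
    dev = np.cumsum(series - series.mean())   # cumulative deviations from the mean
    r = dev.max() - dev.min()                 # range of the cumulative deviations
    s = series.std(ddof=0)                    # standard deviation of the window
    return r / s if s > 0 else 0.0

def hurst_exponent(series, min_window=8):
    """Estimate H as the slope of log(R/S) against log(window size)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_vals = [], []
    size = min_window
    while size <= n // 2:
        # Average R/S over all non-overlapping windows of this size.
        chunks = [series[i:i + size] for i in range(0, n - size + 1, size)]
        rs = np.mean([rescaled_range(c) for c in chunks])
        if rs > 0:
            sizes.append(size)
            rs_vals.append(rs)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope
```

On independent returns the estimate clusters around 0.5 (with a known small-sample upward bias), whereas a strongly trending series yields an estimate near 1, which is the distinction the paper uses before committing a currency pair to training.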
In this paper, we propose a method based on the Laplacian in the frequency domain for video text detection. Unlike many other approaches, which assume that text is horizontally oriented, our method is able to handle text of arbitrary orientation. The input image is first filtered with a Fourier-Laplacian operator. K-means clustering is then used to identify candidate text regions based on the maximum difference. The skeleton of each connected component helps to separate the different text strings from each other. Finally, text string straightness and edge density are used for false positive elimination. Experimental results show that the proposed method is able to handle graphics text and scene text of both horizontal and non-horizontal orientation.
Many digital images contain blurred regions caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for many multimedia analysis tasks. This paper presents a simple and effective technique for automatic detection and classification of blurred image regions. In the proposed technique, blurred image regions are first detected by examining the singular value information of each image pixel. The blur type (i.e. motion blur or defocus blur) is then determined based on an alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur, extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blur accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.
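The intuition behind singular-value-based blur detection is that blurring suppresses high-frequency content, so the singular-value spectrum of a blurred patch decays faster and its energy concentrates in the first few singular values. The abstract does not give the authors' exact per-pixel feature, so the sketch below uses one plausible patch-level statistic, the fraction of singular-value energy in the top-k values (the function and parameter names are illustrative):

```python
import numpy as np

def singular_value_feature(region, k=2):
    """Share of singular-value energy captured by the top-k singular values.
    Blurred regions concentrate energy in the largest singular values,
    so this fraction is higher for blurred patches than for sharp ones."""
    s = np.linalg.svd(np.asarray(region, dtype=float), compute_uv=False)
    total = s.sum()
    return s[:k].sum() / total if total > 0 else 1.0

def box_blur(img, iters=3):
    """Crude blur by repeated 4-neighbour averaging (periodic borders),
    used here only to demonstrate the feature's behaviour."""
    out = np.asarray(img, dtype=float).copy()
    for _ in range(iters):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out
```

Thresholding such a feature computed over a sliding window yields a per-pixel blur map; the paper's subsequent motion/defocus classification via the alpha channel constraint is a separate step not sketched here.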
The prevalent scene text detection approach follows four sequential steps: character candidate detection, false character candidate removal, text line extraction, and text line verification. However, errors can occur and accumulate through each of these sequential steps, which often leads to low detection performance. To address these issues, we propose a unified scene text detection system, namely Text Flow, utilizing a minimum cost (min-cost) flow network model. With character candidates detected by cascade boosting, the min-cost flow network model integrates the last three sequential steps into a single process, which effectively solves the error accumulation problem at both the character level and the text line level. The proposed technique has been tested on three public datasets, i.e., the ICDAR2011 dataset, the ICDAR2013 dataset and a multilingual dataset, and it outperforms the state-of-the-art methods on all three datasets with much higher recall and F-score. The good performance on the multilingual dataset shows that the proposed technique can be used for the detection of text in different languages.
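The full min-cost flow formulation jointly extracts multiple text lines, but its core idea can be illustrated in a much-reduced form: with unit capacities, recovering a single text line amounts to finding the cheapest chain through the character candidates, where negative node costs reward confident characters, positive node costs penalize likely false positives, and edge costs penalize implausible links. The following dynamic-programming sketch is a simplification under these assumptions, not the paper's actual network model (`min_cost_chain` and its cost conventions are hypothetical):

```python
def min_cost_chain(node_cost, edge_cost):
    """Cheapest chain of character candidates on a left-to-right DAG.
    node_cost[i]: cost of keeping candidate i (negative = confident
    character, positive = likely false positive).
    edge_cost[(i, j)]: cost of linking candidate i to candidate j, i < j.
    Returns (total_cost, path of candidate indices)."""
    n = len(node_cost)
    best = list(node_cost)   # best[j]: cheapest chain ending at candidate j
    back = [None] * n        # back-pointers for path recovery
    for j in range(n):
        for i in range(j):
            if (i, j) in edge_cost:
                c = best[i] + edge_cost[(i, j)] + node_cost[j]
                if c < best[j]:
                    best[j] = c
                    back[j] = i
    end = min(range(n), key=lambda j: best[j])
    path = [end]
    while back[path[-1]] is not None:   # walk the back-pointers
        path.append(back[path[-1]])
    return best[end], path[::-1]
```

Because candidate selection and linking are scored in one objective, a false candidate can be bypassed during line extraction rather than in a separate removal step, which is the error-accumulation problem the unified formulation addresses.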