We have studied neural networks as models for time series forecasting, and our research compares the Box-Jenkins method against the neural network method for long- and short-memory series. Our work was inspired by previously published works that yielded inconsistent results about comparative performance. We have since experimented with 16 time series of differing complexity using neural networks, comparing their performance with that of the Box-Jenkins method. Our experiments indicate that for time series with long memory, both methods produced comparable results; for series with short memory, however, neural networks outperformed the Box-Jenkins model. Because neural networks can easily be built for multiple-step-ahead forecasting, they may offer a better long-term forecasting model than the Box-Jenkins method. We discuss the representational ability, the model-building process, and the applicability of the neural network approach. Neural networks appear to provide a promising alternative for time series forecasting.

INFORMS Journal on Computing (ISSN 1091-9856) was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.
We discuss the results of a comparative study of the performance of neural networks and conventional methods in forecasting time series. Our work was initially inspired by previously published works that yielded inconsistent results about comparative performance. We have experimented with three time series of different complexity using different feed-forward, backpropagation neural network models and the standard Box-Jenkins model. Our experiments demonstrate that for time series with long memory, both methods produced comparable results. However, for series with short memory, neural networks outperformed the Box-Jenkins model. We note that some of the comparable results arise because the neural network and the time series model appear to be functionally similar models. We have found that for time series of different complexities there are optimal neural network topologies and parameters that enable them to learn more efficiently. Our initial conclusions are that neural networks are robust and provide good long-term forecasting. They are also parsimonious in their data requirements. Neural networks represent a promising alternative for forecasting, but there are problems in determining the optimal topology and parameters for efficient learning.
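The comparison described above can be illustrated with a minimal, self-contained NumPy sketch: fit an AR(1) model by least squares (a Box-Jenkins-style linear fit) and a small feed-forward backpropagation network on the same synthetic one-step-ahead forecasting task, then compare test error. This is not the study's actual data or models; the AR(1) generator, the 1-4-1 tanh topology, and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) series with persistent (long-memory-like) structure.
# Illustrative only: the study's real series are not reproduced here.
n = 500
phi = 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal(scale=0.1)

# One-step-ahead (input, target) pairs, split into train/test.
X, T = y[:-1].reshape(-1, 1), y[1:]
split = 400
Xtr, Ttr, Xte, Tte = X[:split], T[:split], X[split:], T[split:]

# Box-Jenkins-style AR(1) coefficient via least squares.
phi_hat = np.linalg.lstsq(Xtr, Ttr, rcond=None)[0][0]
ar_mse = np.mean((Xte.ravel() * phi_hat - Tte) ** 2)

# Tiny feed-forward net (1-4-1, tanh hidden layer) trained by backpropagation.
W1 = rng.normal(scale=0.5, size=(1, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    h = np.tanh(Xtr @ W1 + b1)        # hidden activations
    pred = (h @ W2 + b2).ravel()      # linear output unit
    err = pred - Ttr
    gout = (2 * err / len(Ttr)).reshape(-1, 1)   # d(MSE)/d(output)
    gW2 = h.T @ gout; gb2 = gout.sum(0)
    gh = gout @ W2.T * (1 - h ** 2)              # backprop through tanh
    gW1 = Xtr.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

nn_pred = (np.tanh(Xte @ W1 + b1) @ W2 + b2).ravel()
nn_mse = np.mean((nn_pred - Tte) ** 2)
print(f"AR(1) test MSE: {ar_mse:.4f}  NN test MSE: {nn_mse:.4f}")
```

On a linear long-memory series like this one the two models are functionally similar, so their test errors come out comparable, mirroring the abstract's observation; the network's advantage would only show up on series the linear AR model cannot capture.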
Labeling objects at the subordinate level typically requires expert knowledge, which is not always available from a random annotator. Accordingly, learning directly from web images for fine-grained visual classification (FGVC) has attracted broad attention. However, the noise present in web images is a major obstacle to training robust deep neural networks. In this paper, we propose a novel approach that removes irrelevant samples from real-world web images during training and utilizes only useful images for updating the networks. Our network can thus alleviate the harmful effects of irrelevant, noisy web images and achieve better performance. Extensive experiments on three commonly used fine-grained datasets demonstrate that our approach substantially outperforms state-of-the-art webly supervised methods. The data and source code of this work have been made anonymously available at: https://github.com/z337-408/WSNFGVC.
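The abstract does not specify how irrelevant samples are identified, so as a hypothetical stand-in the sketch below uses small-loss selection, a common heuristic in learning with noisy labels: within each batch, only the fraction of samples with the smallest loss is kept for the gradient update, on the assumption that noisy web images tend to incur high loss. The function name, keep ratio, and toy losses are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def select_clean(losses, keep_ratio=0.7):
    """Return indices of the keep_ratio fraction of samples with the
    smallest per-sample loss; the rest are treated as noisy and are
    excluded from the gradient update. (Small-loss selection is a
    stand-in here: the paper's actual criterion is not described in
    the abstract.)"""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]

# Toy batch: three low-loss ("clean") samples and two high-loss ones.
batch_losses = np.array([0.2, 3.1, 0.4, 2.7, 0.3])
clean_idx = select_clean(batch_losses, keep_ratio=0.6)
print(sorted(clean_idx.tolist()))  # → [0, 2, 4]
```

In a training loop, the loss would then be averaged only over `clean_idx` before backpropagation, so high-loss (likely irrelevant) web images never contribute to the update.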