In the last few decades many methods have become available for forecasting. As always, when alternatives exist, choices need to be made so that an appropriate forecasting method can be selected and used for the specific situation being considered. This paper reports the results of a forecasting competition that provides information to facilitate such choice. Seven experts in each of the 24 methods forecast up to 1001 series for six to eighteen time horizons. The results of the competition are presented in this paper, whose purpose is to provide empirical evidence about differences found to exist among the various extrapolative (time series) methods used in the competition.

KEYWORDS: Forecasting; Time series; Evaluation; Accuracy; Comparison; Empirical study

Forecasting is an essential activity both at the personal and organizational level. Forecasts can be obtained by: (a) purely judgemental approaches; ... It is important to understand that there is no such thing as the best approach or method, just as there is no such thing as the best car or best hi-fi system. Cars or hi-fis differ among themselves and are bought by people who have different needs and budgets. What is important, therefore, is not to look for 'winners' or 'losers', but rather to understand how various forecasting approaches and methods differ from each other, and how information can be provided so that forecasting users can make rational choices for their situation. Empirical studies play an important role in better understanding the pros and cons of the various forecasting approaches or methods (they can be thought of as comparable to the tests conducted by consumer protection agencies when they measure the characteristics of various products). In forecasting, accuracy is a major, although not the only, factor (see the note by Carbone in this issue of the Journal of Forecasting) that has been dealt with in the forecasting literature by empirical or experimental studies.
Summaries of the results of published empirical studies dealing with accuracy can be found in Armstrong (1978), Makridakis and Hibon (1979), and Slovic (1972). The general conclusions from these three papers are: (a) judgemental approaches are not necessarily more accurate than objective methods; (b) causal or explanatory methods are not necessarily more accurate than extrapolative methods; and (c) more complex or statistically sophisticated methods are not necessarily more accurate than simpler methods. The present paper is another empirical study concerned mainly with the post-sample forecasting accuracy of extrapolative (time series) methods. The study was organized as a 'forecasting competition' in which expert participants analysed and forecast many real life time series. This paper extends and enlarges the study by Makridakis and Hibon (1979). The major differences between the present and the previous study owe their origins to suggestions made during a discussion of the previous study at a meeting of the Royal Statistical Society (see Makridakis and Hibon, 1979) and in privat...
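The post-sample accuracy comparisons described above can be illustrated with a minimal sketch. This is not the M-Competition protocol or any participant's actual method; the series, holdout size, smoothing parameter, and function names are all hypothetical, chosen only to show the hold-out-and-compare idea with two very simple extrapolative methods.

```python
# Hedged sketch: hold out the last observations of a series, forecast them
# with two simple extrapolative methods, and compare post-sample MAPE.

def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing; returns the final smoothed level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def mape(actuals, forecasts):
    """Mean absolute percentage error over the holdout period."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actuals, forecasts)) / len(actuals)

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]
train, test = series[:-3], series[-3:]       # last 3 points held out

naive = [train[-1]] * len(test)              # naive: repeat last observation
smoothed = [ses_forecast(train)] * len(test) # SES level held flat

print(f"naive MAPE: {mape(test, naive):.1f}%")
print(f"SES   MAPE: {mape(test, smoothed):.1f}%")
```

Averaging such errors over many series and horizons, rather than a single toy series, is what allows the competition-style conclusions quoted above.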
Technical expertise, human judgment, and the time spent by an analyst are often believed to be key factors in determining the accuracy of forecasts obtained with the use of a time series forecasting method. A controlled experiment was designed to empirically test these beliefs. It involved the participation of experts and persons with limited training. Forecasts were generated for 25 time series with the use of the Box-Jenkins, Holt-Winters and Carbone-Longini filtering methods. Results of the nonparametric tests used to compare the forecasts confirmed that technical expertise, judgmental adjustment, and individualized analyses were of little value in improving forecast accuracy as compared to black box approaches. In addition, simpler methods were found to provide significantly more accurate forecasts than the Box-Jenkins method when applied by persons with limited training.

forecasting/time series
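The nonparametric tests mentioned above can be sketched in miniature. This is not the authors' exact procedure; the paired sign test below, and the example error values, are only an assumed illustration of how two methods' per-series absolute errors might be compared without distributional assumptions.

```python
# Hedged sketch: a two-sided binomial sign test on paired absolute
# forecast errors, counting how often method A beats method B per series.
from math import comb

def sign_test_p(errors_a, errors_b):
    """Two-sided sign test on paired errors; ties are dropped."""
    wins_a = sum(1 for a, b in zip(errors_a, errors_b) if a < b)
    n = sum(1 for a, b in zip(errors_a, errors_b) if a != b)
    if n == 0:
        return 1.0
    k = min(wins_a, n - wins_a)
    # P(at most k wins for the rarer side) doubled, under a fair coin null
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(p, 1.0)

# Hypothetical per-series absolute errors for two methods
errors_a = [1.2, 0.8, 2.1, 0.5, 1.0, 0.9, 1.4, 0.7]
errors_b = [1.5, 1.1, 2.0, 0.9, 1.3, 1.2, 1.8, 1.0]
print(f"p = {sign_test_p(errors_a, errors_b):.3f}")
```

A small p-value would indicate that one method's errors are systematically smaller, which is the kind of evidence the experiment used to compare expert and black-box forecasts.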
Pressing changes are needed in the administration of real estate taxation that will not only ensure that all properties are assessed accurately and equitably, but will enable taxpayers to perceive that they are being treated fairly. In this paper, we examine what properties an automated mass appraisal system should exhibit so as to meet efficacy, equity and public acceptability criteria. A new automated system designed on the basis of these properties, which utilizes feedback control and pattern recognition concepts, is presented. Results of an empirical study using Pittsburgh data support the feasibility of the proposed system.
The influence of word familiarity on word recognition has been very well established in the literature. Familiarity can be measured in a number of ways, typically in the form of written frequency, subjective ratings, or age of acquisition. In general, words that are more familiar are recognized more rapidly than those words that are less familiar (e.g., word frequency: Alegre & Gordon, 1999; Connine & Mullennix, 1990; age of acquisition: Dewhurst, Hitch, & Barry, 1998; Gerhand & Barry, 1998). When other factors such as word length are held constant, high frequency words or words acquired at an earlier age are recognized faster than low frequency words or words acquired at a later age. It should be noted that typically, the earliest words acquired are learned through conversation. Moreover, throughout a lifetime, most individuals (presumably) encounter words more often in speech than in texts, and this reality highlights the importance of suitable spoken counts to analyze speech-based word familiarity. Although written frequency counts are readily available (most notably, Francis & Kučera, 1982; Kučera & Francis, 1967), few (if any) spoken counts exist for American English. The present paper reports the construction of a 1.6 million word spoken frequency database tagged for speaker attributes such as gender and age. The use of spoken word frequency counts is conspicuously absent in the literature, presumably due to a lack of appropriate frequency counts for American English. The most notable spoken frequency database is based on a British English corpus of 190,000 words (which included 10,630 different words) that were recorded without the direct knowledge of the speaker (Brown, 1984).
Given the inherent difficulty of speech transcription for the purpose of generating spoken counts, the Brown (1984) corpus is commendable, although still considerably smaller in scope than typical written frequency databases (e.g., Kučera & Francis, 1967, collected over 1 million words representative of over 40,000 different words). The discrepancy in scope between spoken and written counts, in conjunction with the absence of a large-scale spoken frequency database in American English, motivated the construction of a new spoken frequency database.

Spoken English Corpus

Our spoken frequency counts were derived from the Michigan Corpus of Academic Spoken English (MICASE). The corpus is available online, and includes 152 transcriptions of lectures, meetings, advisement sessions, public addresses, and other educational conversations recorded at the University of Michigan (Simpson, Swales, & Briggs, 2002). On average, each of the 152 transcriptions contains approximately 11,000 words spoken by students, faculty, and other staff members in a variety of academic fields. The speakers ranged in age and gender, and a majority of the speakers were educated native speakers of American English, with a small percentage of nonnative speakers. In total, the transcripts derive from approximately 190 hours of recordings made between 1997 and 2001. Further info...
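The core task described here, deriving frequency counts from transcripts tagged for speaker attributes, can be sketched as follows. The utterance format, attribute labels, and function name below are hypothetical and do not reflect the actual MICASE markup or the authors' tokenization rules.

```python
# Hedged sketch: tally word frequencies overall and per speaker attribute
# from (attributes, text) utterance pairs.
import re
from collections import Counter, defaultdict

def count_words(utterances):
    """Return an overall Counter and per-attribute Counters of word tokens."""
    overall = Counter()
    by_attr = defaultdict(Counter)
    for speaker_attrs, text in utterances:
        words = re.findall(r"[a-z']+", text.lower())  # crude tokenizer
        overall.update(words)
        for attr in speaker_attrs:
            by_attr[attr].update(words)
    return overall, by_attr

# Hypothetical utterances with speaker-attribute tags
utterances = [
    (("female", "faculty"), "Okay, let's look at the data again."),
    (("male", "student"), "The data look noisy to me."),
]

overall, by_attr = count_words(utterances)
print(overall.most_common(3))
print(by_attr["student"]["data"])
```

Scaled to 152 transcripts, the same tallying yields both aggregate counts and the gender- and age-specific counts the database reports.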
Experiments contrasted judgmental and objective forecast methods. Judgmental methods included 'eyeball' extrapolation of time-series plots and judgmental adjustment. Objective methods included Box-Jenkins (BJ), Carbone-Longini AEP filtering (CL), Holt-Winters (HW), and other smoothing techniques. Objective methods proved more accurate than eyeball extrapolation. However, judgmental adjustment improved the accuracy of some objective forecasts. Subject Areas: Forecasting.
Forecasting methods currently available assume that established patterns or relationships will not change during the post-sample forecasting phase. This, however, is not a realistic assumption for business and economic series. This paper describes a new approach to forecasting which takes into account possible pattern changes beyond the historical data. This approach is based on the development of two models: one short-term, the other long-term. These models are then reconciled to produce the final forecasts by setting certain parameters as a function of the number, extent, and duration of pattern changes that have occurred in the past. The proposed method has been applied to the 111 series used in the M-Competition. Post-sample forecasting accuracy comparisons show the superiority of the proposed approach over the most accurate methods in the M-Competition.

forecasting/time series
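The idea of reconciling a short-term and a long-term model can be illustrated with a toy blend. This is not the authors' actual method: the exponential decay weight and its parameter below are assumptions standing in for the paper's parameters tied to past pattern changes.

```python
# Hedged sketch: final forecast as a blend of short- and long-term models,
# with the short-term weight decaying as the horizon grows.

def blended_forecast(short_term, long_term, horizon, decay=0.5):
    """Blend two h-step-ahead forecasts; weight on short-term decays with h."""
    w = decay ** horizon          # short-term weight shrinks with horizon
    return w * short_term + (1 - w) * long_term

# Example: short-term model extrapolates recent momentum (150),
# long-term model reverts toward a trend line (120).
for h in (1, 4, 12):
    print(h, blended_forecast(short_term=150.0, long_term=120.0, horizon=h))
```

At short horizons the forecast stays near the short-term model; at long horizons it converges to the long-term model, mirroring the paper's intent that pattern changes dominate far beyond the historical data.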
During the last decade knowledge of forecasting methods and applications has grown rapidly. However, this growth has occurred without a central focus. Contributions to the practice and methods of forecasting have been spread across many disciplines. Researchers and practitioners in one area are often unaware of the advances made in other areas. Efforts are frequently duplicated, and key findings sometimes pass unnoticed. The Journal of Forecasting will provide a centralized focus on recent developments in the art and science of forecasting. It will bring practice and theory together. It intends to become a communication forum for practitioners of forecasting, users of forecasts, and researchers involved with forecasting in the social, behavioural, management and engineering sciences. It is not intended to become a journal publishing technical papers only, or advocating a single approach or area over others. Its purpose is to be truly interdisciplinary and to bridge the gap between theory and practice. In this respect, papers from all areas of forecasting are welcome. These areas include, but are not limited to, the following:

SPYROS MAKRIDAKIS, J. SCOTT ARMSTRONG, ROBERT CARBONE, ROBERT FILDES