Broadband ISDN has made possible a variety of new multimedia services, but it has also created new problems for congestion control, due to the bursty nature of traffic sources. Traffic prediction has been shown to alleviate this problem in [1, 2]. The traffic prediction model in their framework is a special case of the Box-Jenkins ARIMA models. In this paper we go one step further and propose a new approach, the neural network approach, for traffic prediction. A (1, 5, 1) back-propagation feedforward neural network is trained to capture the linear and nonlinear regularities in several time series. A comparison between the results from the neural network approach and the Box-Jenkins approach is also provided. The non-linearity used in this paper is chaotic. We have designed a set of experiments to show that the neural network's prediction performance is only slightly affected by the intensity of the stochastic component (noise) in a time series. We have also demonstrated that a neural network's performance should be measured against the variance of the noise to gain more insight into its behavior and prediction performance. Based on the experimental results, we conclude that the neural network approach is an attractive alternative to traditional regression techniques as a tool for traffic prediction.

1. Introduction

The advent of broadband integrated services digital networks (B-ISDN) has made possible a variety of new multimedia services, but it has also created new problems for congestion control. Most of these problems arise because of the bursty nature of the traffic sources. It has been argued that traditional reactive congestion control is not suitable for broadband integrated networks due to the effects of high-speed channels [3, 4]. Recently, Lazar et al. proposed a new congestion control scheme, called proactive control, to overcome this difficulty [1, 2]. The core of their proactive congestion control scheme lies in the traffic predictor. The traffic predictor in their framework is a seasonal autoregressive (AR) model, which is a special case of the Box-Jenkins auto-regressive integrated moving average (ARIMA) models. Among traditional regression techniques, the ARIMA models are said to produce optimal forecasts [5]. They are optimal in the sense that no other univariate forecasts have a smaller mean squared error (MSE). However, this comparison is valid only for univariate models that are linear combinations of the past values of the time series, with fixed coefficients. The seasonal AR model used in [1, 2] likewise can only capture the linear relationships among the cell arrivals of traffic sources. Non-linear regression techniques do exist, but they require much more computational and intellectual effort, which severely limits their practicality. In this paper we propose a new approach, the neural network approach, for traffic prediction. It is well known that neural networks are capable of performing non-linear mappings between real-valued inputs and outputs.
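To make the setup concrete, here is a minimal sketch of a (1, 5, 1) back-propagation feedforward network trained for one-step-ahead time-series prediction, in the spirit of the network described above. The noisy logistic-map series, learning rate, and training loop are illustrative assumptions, not the paper's actual data or configuration.

```python
# Minimal sketch (not the authors' code): a (1, 5, 1) feedforward network
# trained by back-propagation for one-step-ahead time-series prediction.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative series: a noisy logistic map, standing in for bursty traffic.
x = np.empty(200)
x[0] = 0.4
for t in range(199):
    x[t + 1] = 3.9 * x[t] * (1 - x[t]) + rng.normal(0, 0.01)

X, y = x[:-1].reshape(-1, 1), x[1:].reshape(-1, 1)

W1, b1 = rng.normal(0, 0.5, (1, 5)), np.zeros(5)   # input -> 5 hidden units
W2, b2 = rng.normal(0, 0.5, (5, 1)), np.zeros(1)   # hidden -> 1 output unit
lr = 0.1

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output unit
    err = pred - y                    # prediction error
    # Back-propagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)    # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"training MSE: {float((err**2).mean()):.5f}")
```

A chaotic logistic map is used here because the paper describes its non-linearity as chaotic; any other univariate series could be substituted.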
Texts of a particular type evidence a discernible, predictable schema. These schemata can be delineated, and as such provide models of their respective text-types which are of use in automatically structuring texts. We have developed a Text Structurer module which recognizes text-level structure for use within a larger information retrieval system to delineate the discourse-level organization of each document's contents. This allows the document components most likely to contain the type of information suggested by the user's query to be selected for higher weighting. We chose newspaper text as the first text type to implement. Several iterations of manually coding a randomly chosen sample of newspaper articles enabled us to develop a newspaper text model. This process suggested that our intellectual decomposition of texts relied on six types of linguistic information, which were incorporated into the Text Structurer module. Evaluation of the results of the module led to a revision of the underlying text model and of the Text Structurer itself.
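As an illustration of the weighting step described above, the following sketch scores a document that has already been decomposed into discourse-level components, boosting the components a query is expected to favor. The component labels, weights, and term-overlap scorer are hypothetical; the actual newspaper text model and its six types of linguistic information are not reproduced here.

```python
# Hedged sketch: components more likely to hold the queried kind of
# information receive higher weight when scoring the document.
import re
from collections import Counter

def overlap_score(query_terms, component_text):
    """Crude term-overlap score between a query and one component."""
    terms = Counter(re.findall(r"[a-z]+", component_text.lower()))
    return sum(terms[t] for t in query_terms)

def structured_score(query_terms, components, component_weights):
    """Weighted sum of per-component scores.

    components: {label: text}; component_weights: {label: weight}.
    """
    return sum(
        component_weights.get(label, 1.0) * overlap_score(query_terms, text)
        for label, text in components.items()
    )

# Usage with invented labels: a query about consequences weights an
# "evaluation" component above the "lead".
doc = {"lead": "Storm hits coast.",
       "main-event": "Flooding closed roads.",
       "evaluation": "Officials expect repairs to take weeks."}
weights = {"evaluation": 2.0, "main-event": 1.5, "lead": 1.0}
print(structured_score({"repairs", "roads"}, doc, weights))
```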
The text categorization module described here provides a front-end filtering function for the larger DR-LINK text retrieval system [Liddy and Myaeng 1993]. The module evaluates a large incoming stream of documents to determine which documents are sufficiently similar to a profile at the broad subject level to warrant more refined representation and matching. To accomplish this task, each substantive word in a text is first categorized using a feature set based on the semantic Subject Field Codes (SFCs) assigned to individual word senses in a machine-readable dictionary. When tested on 50 user profiles and 550 megabytes of documents, results indicate that the feature set that is the basis of the text categorization module and the algorithm that establishes the boundary of categories of potentially relevant documents accomplish their tasks with a high level of performance. This means that the category of potentially relevant documents for most profiles would contain at least 80% of all documents later determined to be relevant to the profile. The number of documents in this set would be uniquely determined by the system's category-boundary predictor, and this set is likely to contain less than 5% of the incoming stream of documents.
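The sketch below illustrates the general shape of SFC-based filtering: words are mapped to subject field codes, documents and profiles become normalized SFC vectors, and a similarity boundary separates potentially relevant documents. The word-to-code mapping and the fixed 0.5 threshold are invented stand-ins; DR-LINK derives codes from word senses in a machine-readable dictionary and uses its own category-boundary predictor rather than a fixed cutoff.

```python
# Illustrative sketch of SFC-based document filtering; the lexicon and
# threshold here are hypothetical.
import math
from collections import Counter

WORD_TO_SFC = {  # invented stand-in for a machine-readable dictionary
    "network": "computing", "packet": "computing",
    "court": "law", "verdict": "law", "drug": "medicine",
}

def sfc_vector(words):
    """Normalized frequency vector over subject field codes."""
    codes = Counter(WORD_TO_SFC[w] for w in words if w in WORD_TO_SFC)
    total = sum(codes.values()) or 1
    return {c: n / total for c, n in codes.items()}

def cosine(u, v):
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

profile = sfc_vector("network packet packet".split())
doc = sfc_vector("the court heard the network packet case".split())
# Documents scoring above the boundary enter the potentially-relevant set.
print(cosine(profile, doc) >= 0.5)
```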
In this paper we describe our neuro-genetic approach to developing a multi-agent system (MAS) that both forages and meta-searches for multimedia information in online information sources on the ever-changing World Wide Web. We present EVA, an intelligent agent system that supports 1) multiple Web agents working together concurrently and collaboratively to achieve their common goal, 2) the evolution of these Web agents and the user profiles to achieve better filtering, classification, and categorization performance, and 3) longer-term adaptation by using our unique neuro-genetic algorithm. Individual Web agents use neural networks for local searching and learning. Genetic algorithms are used to facilitate the evolution of agents on a global scale. NLP technology allows users to write sophisticated queries, and allows the system to extract important information from the user queries and the retrieved documents. The new text categorization technology used by EVA, which is also based on the neuro-genetic algorithm, can learn to automatically categorize and classify Web pages with high accuracy, using as few terms as possible. Additionally, we have developed a technique for integrating meta-searching and Web-crawling to produce intelligent agents that can retrieve documents more efficiently, and a self-feedback or automatic relevance feedback mechanism to automatically train the Web agents, without human intervention. This algorithm, together with the neuro-genetic algorithm, has greatly enhanced the autonomy of the Web agents.
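As a rough illustration of the neuro-genetic combination described above, the sketch below evolves a population of single-layer "agent" scorers with a genetic algorithm, using classification accuracy against relevance labels as fitness. The network shape, GA operators, and synthetic feedback data are assumptions for illustration, not EVA's actual design.

```python
# Hedged sketch: a genetic algorithm evolving tiny neural scorers.
import numpy as np

rng = np.random.default_rng(1)
DIM, POP = 8, 20                       # feature size, population size

def score(weights, features):
    """One-layer 'agent' network: sigmoid relevance score."""
    return 1.0 / (1.0 + np.exp(-features @ weights))

# Invented training signal: feature vectors with 0/1 relevance labels,
# standing in for the automatic relevance-feedback mechanism.
docs = rng.normal(size=(50, DIM))
labels = (docs[:, 0] > 0).astype(float)

def fitness(weights):
    preds = score(weights, docs) > 0.5
    return float((preds == labels).mean())   # classification accuracy

population = rng.normal(size=(POP, DIM))
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP // 2]              # truncation selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        mask = rng.random(DIM) < 0.5          # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += rng.normal(0, 0.1, DIM)      # Gaussian mutation
        children.append(child)
    population = np.array(parents + children)

print("best fitness:", fitness(max(population, key=fitness)))
```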