The Pandemonium system of reflective MINOS agents solves problems by automatic dynamic modularization of the input space. The agents contain feedforward neural networks that adapt using the backpropagation algorithm. We demonstrate the performance of Pandemonium on several categories of problems: learning continuous functions with discontinuities, separating two spirals, learning the parity function, and optical character recognition. The advantages gained from a modularization technique are shown to depend strongly on the nature of the problem. The superiority of the Pandemonium method over a single net on the first two categories is contrasted with its limited advantages on the latter two. In the first case, modularization makes the system converge more quickly and leads to simpler solutions. In the second case, flat decomposition of the input space does not significantly simplify the problem, although convergence is still faster.
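The idea of modularizing the input space and giving each region to its own backpropagation-trained net can be illustrated with a minimal sketch. This is not the Pandemonium algorithm itself: the hand-fixed two-way split below stands in for the system's automatic dynamic modularization, and the `MLP` class, hidden size, and learning rate are illustrative assumptions. The target is a function with a discontinuity, one of the test categories where modularization helps most.

```python
import numpy as np

rng = np.random.default_rng(0)

class MLP:
    """One-hidden-layer feedforward net trained with plain backpropagation."""
    def __init__(self, hidden=8, lr=0.1):
        self.w1 = rng.normal(0, 0.5, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.w1 + self.b1)
        return self.h @ self.w2 + self.b2

    def step(self, x, y):
        err = self.forward(x) - y                 # gradient of 0.5 * MSE
        dw2 = self.h.T @ err / len(x)
        db2 = err.mean(axis=0)
        dh = err @ self.w2.T * (1 - self.h ** 2)  # backprop through tanh
        dw1 = x.T @ dh / len(x)
        db1 = dh.mean(axis=0)
        for p, g in [(self.w1, dw1), (self.b1, db1),
                     (self.w2, dw2), (self.b2, db2)]:
            p -= self.lr * g

# Target function with a jump discontinuity at x = 0.
x = rng.uniform(-1, 1, (256, 1))
y = np.where(x < 0, x, x + 1.0)

# Fixed two-way split of the input space; each agent models one region.
masks = [x[:, 0] < 0, x[:, 0] >= 0]
agents = [MLP() for _ in masks]
for agent, m in zip(agents, masks):
    for _ in range(2000):
        agent.step(x[m], y[m])

def predict(q):
    """Route each query to the agent owning its region of the input space."""
    q = np.atleast_2d(q)
    idx = (q[:, 0] >= 0).astype(int)
    return np.array([agents[i].forward(q[j:j + 1])[0, 0]
                     for j, i in enumerate(idx)])
```

Each specialist only ever sees a smooth piece of the target, so neither net has to model the discontinuity, which is the intuition behind the faster convergence and simpler solutions reported for this problem class.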
Learning from examples has a number of distinct algebraic forms, depending on what is to be learned from the available information. One of these forms is y = G(x), where the input-output tuple (x, y) is the available information and G represents the process determining the mapping from x to y. Various models, y = f(x), of G can be constructed using the information in the (x, y) tuples. In general, and for real-world problems, it is not reasonable to expect to find an exact representation of G (i.e. a formula that is correct for all possible (x, y)). The modeling procedure involves finding a satisfactory set of basis functions, a way of combining them, and a coding for (x, y), and then adjusting all free parameters in an approximation process to construct a final model. The approximation process can bring the accuracy of the model to a certain level, after which further improvement becomes increasingly expensive. Further gains may be made by constructing a number of agents {α}, each of which develops its own model f_α. These may then be combined in a second modeling phase to synthesize a team model. If each agent has the ability for internal reflection, the combination in a team framework becomes more profitable. We describe reflection and the generation of a confidence function: the agent's estimate of the correctness of each of its predictions. The presence of reflective information is shown to significantly increase the performance of a team.
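The second modeling phase described above, combining agent models f_α using each agent's confidence in its own prediction, can be sketched as follows. The toy agents, their models, and their confidence values are hypothetical stand-ins: in the actual system both f_α and the confidence function are learned, not hand-written, and the weighted average is only one plausible combination rule.

```python
import numpy as np

def make_agent(model, confidence):
    """An agent returns (prediction, confidence); the confidence is the
    agent's own estimate of how likely its prediction is to be correct."""
    return lambda x: (model(x), confidence(x))

# Two illustrative agents, each reliable on one half of the input range
# (assumed for illustration; real agents learn where they are reliable).
agents = [
    make_agent(lambda x: 2 * x, lambda x: 1.0 if x < 0 else 0.1),
    make_agent(lambda x: x + 1, lambda x: 0.1 if x < 0 else 1.0),
]

def team_predict(x):
    """Second modeling phase: confidence-weighted combination of agents."""
    preds, confs = zip(*(agent(x) for agent in agents))
    w = np.array(confs)
    return float(np.dot(w, preds) / w.sum())
```

Without the reflective confidences, the team could only average the two predictions uniformly; with them, the agent that believes itself correct dominates the combination, which is the mechanism behind the reported performance gain.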