SUMMARY

In neural networks based on a distributed information representation, when different patterns are to be recalled from the same input by association depending on the context, the usual method is to concatenate the pattern representing the context with the input pattern. This approach has a serious problem, however: strong constraints are imposed on the number of inputs and the number of context patterns. This paper applies a different method of contextual modification to nonmonotonic neural networks in order to construct a context-dependent associative memory model that solves this longstanding problem. In the proposed model, the number of associations that can be learned increases almost in proportion to the number of elements, regardless of the number of input and context patterns. In addition, the state transitions among attractors can be controlled flexibly by switching the context, which enables the model to simulate the behavior of any finite automaton without an explosive increase in the number of elements or the training time. The model also has high generalization ability based on fully distributed representations, and has the potential to overcome the limitations of conventional symbol processing.