A new learning algorithm is presented that may have applications in the theory of natural and artificial intelligence.

In order to construct machines that could display an intelligence somewhat like that of animals or man, or to understand how the brain functions, one must develop a model of the organization of memory. Several models have been proposed [1, 3-5, 9, and other references therein]. Here we propose still another model, which is new in that it tends to explain how a brain learns to interpret correctly (or act purposefully upon) the immense amount of information it obtains continuously from the senses.

For simplicity, we assume at first that the brain serves only to answer questions that admit a "yes" or "no" answer, and that the questions are sequences of 0's and 1's of a fixed length m. Our questions depict the totality of data (parameters) that the brain receives at a given time (ours is a discrete-time model) and, hence, m is very large (see miscellaneous remarks below: remarks 1 and 2, concerning the possible meanings of m). At the beginning the brain does not know what to say and answers arbitrarily, but it gets a "reward" when the answer is right and a "punishment" when the answer is wrong. Thus it gets post facto information about what the right answer is, and attempts to produce correct answers to the questions that follow. By the problem of organization of memory, we understand the problem of defining an algorithm to answer new questions on the basis of past experience. Various statistical estimation procedures and classical methods of interpolation and approximation of functions would seemingly apply to this problem. But as information theory shows, they lose their power when the dimension m is large, say m > 100.
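The question-and-answer setting just described can be sketched in code. Everything here is an illustrative assumption, not from the paper: the toy length M = 8 (the paper imagines m > 100), the particular hidden rule, and the naive memorizing learner used only to make the reward/punishment loop concrete:

```python
import random

M = 8  # length of each question (bit string); the paper takes m very large

def target(question):
    # Hidden "right answer" known only to the environment; an arbitrary
    # toy rule chosen for illustration.
    return question[0] ^ question[3]

def run(learner, num_rounds=1000, seed=0):
    """Present random questions, reward correct answers, and return the
    fraction of rewards over all rounds."""
    rng = random.Random(seed)
    rewards = 0
    for _ in range(num_rounds):
        q = tuple(rng.randint(0, 1) for _ in range(M))
        answer = learner.answer(q)
        correct = target(q)
        rewards += (answer == correct)   # "reward" vs. "punishment"
        learner.feedback(q, correct)     # post facto information
    return rewards / num_rounds

class MemorizingLearner:
    """Answers arbitrarily (here: 0) on unseen questions and remembers
    every correction it receives."""
    def __init__(self):
        self.memory = {}
    def answer(self, q):
        return self.memory.get(q, 0)
    def feedback(self, q, correct):
        self.memory[q] = correct
```

Such a memorizer works only because M is tiny here (2^8 = 256 possible questions); for large m it is exactly the kind of method that breaks down, which is the problem the paper's algorithm addresses.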
Our algorithm is free from this defect, because we exploit the following natural assumption: few of the parameters are important for solving any given question (see remarks 6 and 7 below concerning the validity and interpretation of this assumption), although different parameters are needed to solve different questions. The brain probably develops a method of selecting the important parameters of any given question; so does our algorithm. We shall not attempt in this paper to define a neural network capable of performing our algorithm, nor to theorize on how the nervous tissue could do it, since, although easy, this would seem to us premature (see EXPERIMENTS below).
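Why the sparsity assumption helps can be seen in a small sketch. This is not the paper's algorithm, only an illustration under stated assumptions: if the answer depends on just k of the m parameters, a learner may search the small space of k-parameter rules instead of treating all 2^m questions separately. The function name and the brute-force search are hypothetical:

```python
from itertools import combinations, product

def learn_sparse_rule(examples, m, k):
    """Find a rule consistent with all (question, answer) examples by trying
    every k-subset of the m coordinates and every truth table on that subset.
    Returns (coords, rule), or None if no consistent k-parameter rule exists."""
    for coords in combinations(range(m), k):
        for table in product((0, 1), repeat=2 ** k):
            def rule(q, coords=coords, table=table):
                # Index the truth table by the bits of q at the chosen coords.
                idx = 0
                for c in coords:
                    idx = (idx << 1) | q[c]
                return table[idx]
            if all(rule(q) == a for q, a in examples):
                return coords, rule
    return None
```

With m = 20 and k = 2, a few dozen random labelled questions typically suffice to pin down a rule such as "bit 3 XOR bit 11", whereas a method that ignores sparsity would face on the order of 2^20 distinct questions.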