Neurules are a kind of hybrid rule that combines a symbolic (production rules) and a connectionist (adaline unit) representation. One way to produce neurules is from training examples (patterns) extracted from empirical data. In certain application fields, however, not all training examples are available a priori; a number of them become available over time. In such cases, the neurule base must be updated. In this paper, methods for updating a hybrid rule base consisting of neurules, to reflect the availability of new training examples, are presented. They can be considered a type of incremental learning method that retains the entire induced hypothesis and all past training examples. The methods are efficient, since they require the least possible retraining effort and keep the number of neurules produced as small as possible. Experimental results supporting these claims are presented.
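To make the connectionist half of a neurule concrete, the following is a minimal sketch of a single adaline unit trained from bipolar training patterns with the Widrow-Hoff (LMS) rule. It is an illustration of the underlying adaline mechanism only, not the paper's neurule construction or updating procedure; all names are hypothetical.

```python
# Illustrative adaline unit (Widrow-Hoff / LMS training), sketching the
# connectionist component of a neurule. Not the paper's exact algorithm.

def train_adaline(patterns, lr=0.1, epochs=200):
    """Train a single adaline unit on (input_vector, target) pairs.

    LMS adapts the weights against the *linear* output (net), not the
    thresholded one, which distinguishes adaline from the perceptron.
    """
    n = len(patterns[0][0])
    w = [0.0] * n
    b = 0.0  # bias weight
    for _ in range(epochs):
        for x, target in patterns:
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = target - net                      # linear error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    """Recall: threshold the linear output to a bipolar class label."""
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if net >= 0 else -1

# Bipolar AND as the available training examples ({-1, 1} coding)
patterns = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train_adaline(patterns)
preds = [classify(w, b, x) for x, _ in patterns]
```

In an incremental setting of the kind the paper addresses, newly arriving patterns would be merged with the retained past examples and the affected unit retrained, rather than retraining the whole rule base from scratch.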