Incorporating new knowledge into a neural network while preserving previously acquired knowledge is known to be a nontrivial problem. The problem becomes even more complex when the new knowledge is contained not in new training examples but in the parameters (connection weights) of another neural network. Here we propose and test two methods for combining the knowledge contained in separate networks. One method is based on a simple summation of the weights of the constituent networks. The other incorporates new knowledge by modifying weights that are nonessential for preserving the information already stored. We show that with these methods the knowledge of one network can be transferred into another non-iteratively, without training sessions. The fused network operates efficiently, classifying far better than chance level. The efficiency of the methods is quantified on several publicly available data sets in classification tasks for both shallow and deep neural networks.

Keywords: knowledge fusion, transfer learning, convolutional neural networks, non-iterative learning

Beyond constructing ensembles, there may be a need to save storage and computational resources by combining several networks into a single one, transferring information from one network into another. Unfortunately, the literature devoted to this particular problem is very limited. Most likely, this relates to the common view of neural networks as black boxes that give no access to the internally stored information; so far, the exchange of knowledge between neural networks has been considered close to impossible. Nevertheless, several approaches allow neural networks to train other networks. For this purpose, Zeng & Martinez (2000) used a pseudo training set sampled from the distribution of the original training set.
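The first method, fusion by summation of weights, can be sketched in a few lines. This is only an illustrative toy, not the paper's exact procedure: the function name `fuse_by_weight_summation` and the mixing factor `alpha` are our assumptions, and the paper does not specify how the sum is scaled.

```python
import numpy as np

def fuse_by_weight_summation(weights_a, weights_b, alpha=0.5):
    """Fuse two networks of identical architecture by a weighted sum
    of their corresponding parameter arrays (alpha is a hypothetical
    mixing factor; alpha=0.5 gives the element-wise average)."""
    return [alpha * wa + (1.0 - alpha) * wb
            for wa, wb in zip(weights_a, weights_b)]

# Toy single-layer "networks": one weight matrix each.
net_a = [np.array([[1.0, 2.0], [3.0, 4.0]])]
net_b = [np.array([[5.0, 6.0], [7.0, 8.0]])]
fused = fuse_by_weight_summation(net_a, net_b, alpha=0.5)
print(fused[0])  # element-wise average of the two weight matrices
```

In practice the same loop would run over every layer's weight and bias arrays of two networks trained on different tasks, producing a single network without any further gradient updates.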
Other approaches, such as Model Compression or Model Distillation, were suggested in