In real-world applications, we can encounter situations where a well-trained model has to be used to predict from a damaged dataset. The damage caused by missing or corrupted values can occur either at the level of individual instances or at the level of entire features. Both situations have a negative impact on the usability of the model on such a dataset. This paper focuses on the scenario where entire features are missing, which can be understood as a specific case of transfer learning. Our aim is to experimentally investigate the influence of various imputation methods on the performance of several classification models. The impact of imputation is studied for traditional methods such as k-NN, linear regression, and MICE, compared with modern imputation methods such as the multi-layer perceptron (MLP) and gradient boosted trees (XGBT). For linear regression, MLP, and XGBT we also propose two approaches to using them for multiple-feature imputation. The experiments were performed on both real-world and artificial datasets with continuous features, with different numbers of features missing, ranging from a single feature up to 50% of all features. The results show that MICE and linear regression are generally good imputers regardless of the conditions. The performance of MLP and XGBT, on the other hand, is strongly dataset dependent: it is the best in some cases, but more often they perform worse than MICE or linear regression.

When solving a classification task, one often faces demanding data preprocessing. One of the preprocessing steps is the treatment of missing values. In practice, we may encounter either randomly located missing values in individual instances or entirely missing features. In real-world scenarios, e.g. [1, 2, 3], we have to deal with missing data. Missing values can also be part of a cold-start problem.
Imputation treatments for missing values have been widely investigated [4, 5, 6], and many methods for reconstructing missing data have been designed, but these methods are not directly intended for the reconstruction of entire missing features. This work focuses on the influence of entirely missing features and on the possibilities of reconstructing them for use in predictive modeling. We consider the following scenario: a classification model is trained on a dataset containing a complete set of continuous features but has to be used to predict the classes of a dataset with some entire features missing. The reconstruction of entire features and its use with an already-trained model on the reconstructed dataset distinguishes our work from others. Our point of interest is to find out how missing features impact the accuracy of the classification model, what possibilities for reconstructing entire missing features exist, and how the model performs with imputed data. In our work, the reconstruction of missing features, i.e. data imputation, is the very first task of transfer learning methods [7], where the identification of identical, missing, and new features is crucial. Experimental results ...
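The scenario above can be illustrated with a minimal sketch (our own, not the paper's experimental code): a classifier is trained on complete continuous features, an entire feature column is then missing at prediction time, and a k-NN imputer fitted on the complete training data reconstructs it before the trained model is applied. The synthetic dataset and the choice of random forest as the classifier are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer

# Synthetic continuous-feature dataset (assumption for illustration).
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=5, random_state=0)
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

# Train the classifier on the complete set of features.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Damage: at prediction time, feature 3 is missing entirely.
X_damaged = X_test.copy()
X_damaged[:, 3] = np.nan

# k-NN imputation fitted on the complete training data reconstructs
# the missing column from the nearest training instances.
imputer = KNNImputer(n_neighbors=5).fit(X_train)
X_imputed = imputer.transform(X_damaged)

# The already-trained model is then applied to the reconstructed data.
acc = clf.score(X_imputed, y_test)
print(acc)
```

The same pattern applies to any of the imputers compared in the paper; only the imputation step changes.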