In recent years, deep learning methods have been widely applied to remote sensing image classification, providing valuable information for environmental monitoring and spatial planning. In such practical applications, acquiring massive labeled data for deep convolutional networks is costly and difficult, especially when data sources are diverse and requirements keep changing. Transfer learning methods have shown strong performance in exploiting domain-invariant features of existing data for deep network-based classification tasks. However, the data imbalance between source and target domains may cause negative transfer and weaken the classifier. Moreover, extracting object-level visual features that distinguish easily confused categories remains a difficult problem. In this context, we propose the Multi-adversarial Object-level Attention Network (MOAN) for partial transfer learning and the selection of useful features. On the one hand, we present an improved object-level attention proposal network (OANet) that perceives the structural features of the main object in an image while weakening unrelated regions. On the other hand, the extracted features are further enhanced by a multi-adversarial framework that promotes positive transfer, selecting and aligning valuable cross-domain features from the shared categories while suppressing the others. This adversarial learning module also generates pseudo-labels for target-domain samples so that integral visual signals can be perceived there, mirroring the process in the source domain. In addition, virtual adversarial training is introduced into MOAN to regularize the model and maintain stability. Experimental analyses show that MOAN significantly promotes positive transfer and restrains negative transfer in unsupervised classification problems, achieving higher accuracy and lower loss than competing methods on several benchmark data sets.
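To make the "selecting shared categories and suppressing others" idea concrete: multi-adversarial partial transfer methods commonly weight each class's contribution to adversarial alignment by the averaged class probabilities predicted on target samples, so source-only (outlier) classes receive small weights. The sketch below illustrates that weighting scheme under this assumption; the function name `class_transfer_weights` is hypothetical and this is not MOAN's exact formulation.

```python
import numpy as np

def class_transfer_weights(target_probs):
    """Estimate per-class transfer weights from target-domain predictions.

    Averages the (pseudo-label) class probabilities over target samples and
    normalizes so the largest weight equals 1. Classes absent from the target
    domain receive small weights, down-weighting them during adversarial
    alignment and thereby restraining negative transfer.
    """
    w = target_probs.mean(axis=0)  # shape: (num_classes,)
    return w / w.max()

# Toy example: 4 source classes, but the target domain only contains
# classes 0 and 1 (a partial transfer setting).
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 4))
logits[:, :2] += 4.0  # prediction mass concentrates on the shared classes

probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
w = class_transfer_weights(probs)
print(np.round(w, 3))  # shared classes get weights near 1, outliers near 0
```

In a full pipeline these weights would scale each class-wise domain discriminator's loss, so the feature extractor is only pushed to align distributions for categories the two domains actually share.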