Reproducing Kernel Hilbert Space (RKHS) subspace learning is widely used in domain adaptation: it learns a latent RKHS subspace shared by the source and target domains, so that the gap between their distributions becomes smaller than in the original data space. A classical result in probability theory states that two second-order (finite-variance) random variables are equal almost surely if and only if the mean squared error (MSE) between them is zero. In this paper, we first model the source and target domains as second-order random variables. We then prove that a second-order random variable remains second-order after it is transformed into the RKHS subspace. On this basis, we propose the MSE criterion to measure the distribution difference between the source and target domains. To the best of our knowledge, this is the first application of MSE to RKHS subspace learning, and our experiments show that the MSE criterion outperforms the common Maximum Mean Discrepancy (MMD) and Covariance Matrix (CovM) criteria. Furthermore, exploiting the robustness of the RKHS subspace learning framework to the data dimension, we propose a progressive RKHS subspace learning framework for domain adaptation (pRKHS-DA), which continuously updates the learned RKHS subspace, each update taking the previously learned subspace as its starting point. The idea of pRKHS-DA is proposed for the first time in this paper. Finally, we propose the MSEpRKHS_DA model, which combines the MSE criterion with the pRKHS-DA framework. Experiments show that our model achieves higher classification accuracy than several state-of-the-art methods.

INDEX TERMS Domain adaptation, MSE criterion, RKHS subspace learning.

TABLE 7. The performance of our model and the baseline methods on 8 domain adaptation tasks of the ORL dataset.
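As a rough illustrative sketch of the criteria named in the abstract (not the paper's actual model; the data, pairing, and function names here are hypothetical), one can contrast an empirical MSE between paired projected samples with a linear-kernel MMD estimate, which compares only the domain means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical projected features for the two domains (synthetic, for illustration).
Xs = rng.normal(loc=0.0, scale=1.0, size=(100, 5))  # source-domain samples
Xt = rng.normal(loc=1.0, scale=1.0, size=(100, 5))  # target-domain samples

def mse_criterion(A, B):
    """Empirical MSE between paired samples: mean of ||a_i - b_i||^2.
    Zero only when the paired samples coincide, matching the
    'equal iff MSE is zero' property cited in the abstract."""
    return float(np.mean(np.sum((A - B) ** 2, axis=1)))

def mmd_linear(A, B):
    """Empirical MMD with a linear kernel: squared distance between
    the two sample means. Zero whenever the means match, even if the
    distributions differ in higher-order moments."""
    delta = A.mean(axis=0) - B.mean(axis=0)
    return float(delta @ delta)

print(mse_criterion(Xs, Xt))
print(mmd_linear(Xs, Xt))
```

The sketch illustrates why MSE is the stricter measure: for paired samples the empirical MSE is bounded below by the linear-kernel MMD, since it also penalizes per-sample deviations around the means, whereas MMD with a linear kernel vanishes as soon as the means agree.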