Review-aware Rating Regression (RaRR) suffers from the severe challenge of extreme data sparsity, as multi-modal interactions of ratings accompanied by reviews are costly to obtain. Although some semi-supervised rating regression studies have been proposed to mitigate the impact of sparse data, they bear the risk of learning from noisy pseudo-labelled data. In this paper, we propose a simple yet effective paradigm, called co-training-teaching (CoT²), which integrates the merits of both co-training and co-teaching towards robust semi-supervised RaRR.
CoT² employs two predictors trained with different feature sets of textual reviews, each of which functions as both "labeler" and "validator". Specifically, one predictor (the labeler) first labels unlabelled data for its peer predictor (the validator); the validator then samples reliable instances from the noisy pseudo-labelled data it receives and sends them back to the labeler for updating. By exchanging and validating pseudo-labelled instances, the two predictors reinforce each other in an iterative learning process. The final prediction is made by averaging the outputs of both refined predictors. Extensive experiments show that our
CoT² considerably outperforms state-of-the-art recommendation techniques on the RaRR task, especially when the training data is severely insufficient.
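To make the exchange-and-validate loop concrete, the following is a minimal Python sketch, not the paper's implementation: ridge regressors stand in for the two review-based predictors, random matrices stand in for the two review feature views, and "reliable" is approximated by agreement between the validator's own estimate and the pseudo-label. All three choices are assumptions, since the abstract does not fix the feature construction or the selection criterion.

```python
# A toy sketch of one CoT^2-style training loop under the assumptions above.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_lab, n_unlab, d = 60, 400, 16

# Hypothetical stand-ins for two feature views of the same review corpus.
X_a = rng.normal(size=(n_lab + n_unlab, d))   # view seen by predictor A
X_b = rng.normal(size=(n_lab + n_unlab, d))   # view seen by predictor B
w_a, w_b = rng.normal(size=d), rng.normal(size=d)
y_lab = 0.5 * (X_a[:n_lab] @ w_a + X_b[:n_lab] @ w_b) \
        + 0.1 * rng.normal(size=n_lab)        # the few observed ratings

lab = np.arange(n_lab)
unlab = np.arange(n_lab, n_lab + n_unlab)
pred_a, pred_b = Ridge(alpha=1.0), Ridge(alpha=1.0)

# Validated pseudo-labelled instances returned to each labeler: (indices, targets).
extra = {"a": (np.empty(0, int), np.empty(0)),
         "b": (np.empty(0, int), np.empty(0))}

def refit(pred, X, extra_idx, extra_y):
    """Fit a predictor on the labelled set plus its validated pseudo-labels."""
    idx = np.concatenate([lab, extra_idx])
    y = np.concatenate([y_lab, extra_y])
    pred.fit(X[idx], y)

for _ in range(5):  # iterative exchange-and-validate rounds
    refit(pred_a, X_a, *extra["a"])
    refit(pred_b, X_b, *extra["b"])
    for labeler, X_l, validator, X_v, key in [
        (pred_a, X_a, pred_b, X_b, "a"),
        (pred_b, X_b, pred_a, X_a, "b"),
    ]:
        # Labeler pseudo-labels the unlabelled data for its peer (validator).
        pseudo = labeler.predict(X_l[unlab])
        # Validator keeps the instances whose pseudo-label agrees most with
        # its own estimate (our assumed reliability rule) and sends them
        # back to the labeler, which refits on them in the next round.
        gap = np.abs(validator.predict(X_v[unlab]) - pseudo)
        keep = gap <= np.quantile(gap, 0.2)
        extra[key] = (unlab[keep], pseudo[keep])

# Final prediction: average the outputs of both refined predictors.
rating_hat = 0.5 * (pred_a.predict(X_a[unlab]) + pred_b.predict(X_b[unlab]))
```

The agreement-quantile filter is only one plausible reading of "samples reliable instances"; any confidence or loss-based selection rule could be substituted at that step without changing the overall exchange structure.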