Class incremental learning (CIL) requires a model to learn the knowledge of new classes without overwriting that of old classes; the main challenge thus lies in catastrophic forgetting. Among the advances in addressing this challenge, rehearsal-based methods are the most widely used due to their convenience and effectiveness. However, the bias in classification scores between old and new classes, known to be the main cause of catastrophic forgetting in rehearsal-based methods, is still not fully addressed. Although some recent strategies have been proposed to reduce this score bias, they either require extra training time or sacrifice too much performance on the current task. In this paper, we propose a novel Robust Self-Taught Task-Wise Reweighting (R-STAR) method, which can serve as a flexible, key component for improving existing rehearsal-based methods. Concretely, on top of the standard training process, it measures the model's forgetting degree on each task over the augmented buffer (for robust evaluation). Further, following the self-taught paradigm, it directly transforms the task-wise forgetting degree into a reweighting ratio that reduces the score bias during the inference stage. Extensive experiments show that our R-STAR improves most rehearsal-based methods by remarkable margins, with (almost) no extra training cost and no excessive performance sacrifice on the new task. Moreover, it also shows advantages over existing score-bias correction strategies.
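As a rough illustration of the inference-time reweighting idea described above, the sketch below scales each class's score by a ratio derived from the forgetting degree of the task that class belongs to. All names here (`reweight_scores`, `forget`, the particular `1 + forget` activation) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def reweight_scores(logits, task_of_class, forget):
    """Boost the scores of classes from more-forgotten (older) tasks.

    logits:        raw classification scores, one per class
    task_of_class: task_of_class[c] is the task in which class c was learned
    forget:        assumed per-task forgetting degree in [0, 1),
                   e.g. measured on the rehearsal buffer
    """
    ratios = np.array([1.0 + forget[task_of_class[c]]
                       for c in range(len(task_of_class))])
    return logits * ratios

# Toy example: classes 0-1 come from task 0 (older, more forgotten),
# classes 2-3 from task 1 (current, not forgotten).
logits = np.array([2.0, 1.5, 1.8, 2.2])
task_of_class = [0, 0, 1, 1]
forget = {0: 0.4, 1: 0.0}

print(reweight_scores(logits, task_of_class, forget))
# Reweighting flips the prediction from a new class (3) to an old class (0),
# counteracting the score bias toward recently learned classes.
```

The key property this toy example shows is that reweighting is applied only at inference, so the training procedure of the underlying rehearsal-based method is left untouched.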