“…Recently, many approaches have been developed to alleviate the overconfidence problem by calibrating the confidence, i.e., matching confidence scores to accuracy so that they reflect the predictive uncertainty [46]. Specifically, one category of approaches [22,43,50,51,57,65,70,72,74,79] aims to learn well-calibrated models during training. For instance, mixup [65], label smoothing [51], and focal loss [50] have been demonstrated to be effective for confidence calibration.…”
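As a concrete illustration of one of the training-time techniques mentioned above, the following is a minimal sketch of label smoothing: the one-hot target is mixed with a uniform distribution over classes, which discourages the model from assigning full probability to a single class. The function name and the smoothing factor `eps = 0.1` are illustrative choices, not values taken from the cited papers.

```python
def smooth_labels(num_classes: int, true_class: int, eps: float = 0.1) -> list[float]:
    """Soften a one-hot target: mix it with the uniform distribution.

    Each class receives eps / num_classes probability mass, and the true
    class additionally keeps the remaining 1 - eps, so the result still
    sums to 1. (eps is an illustrative hyperparameter.)
    """
    off_value = eps / num_classes          # mass given to every class
    on_value = 1.0 - eps + off_value       # extra mass kept by the true class
    return [on_value if c == true_class else off_value
            for c in range(num_classes)]

# Example: 4 classes, true class 2, eps = 0.1
# yields [0.025, 0.025, 0.925, 0.025]
print(smooth_labels(4, 2, eps=0.1))
```

Training with a cross-entropy loss against these softened targets, rather than hard one-hot labels, is what [51] observed to improve calibration.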