Specifically, Ours-L achieves 69.2% and 70.6% mIoU on the PASCAL VOC val set with DeepLabV2 initialized with ImageNet and MS COCO pre-trained weights, respectively, recovering 90.7% and 91.0% of the upper bound set by their fully-supervised counterparts. Our methods also achieve performance comparable to recent state-of-the-art WSSS methods that use extra saliency maps, such as NSROM (Yao et al., 2021), DRS (Kim et al., 2021), EPS (Lee et al., 2021c), AuxSegNet (Xu et al., 2021), and EDAM (Wu et al., 2021). Our method further outperforms recent methods built on stronger backbone networks, such as PMM (Li et al., 2021b), which uses Res2Net101 (Gao et al., 2021) as the backbone for semantic segmentation.