Although deep learning enables excellent visual perception performance for autonomous driving, its robustness in heavy weather still deserves attention, since rain degrades the visual information on which these models rely. In this paper, we focus on single-image deraining for street scenes to improve the perception of autonomous driving in the rain. We restore a rainy image to a clean background image by using a deep unfolding network (DUN) built on the proximal gradient descent (PGD) algorithm, introducing a gradient estimation strategy and a proximal mapping module. In the gradient descent module, we flexibly perform gradient descent on complex images by selectively replacing the degradation matrix. In the proximal mapping module, we introduce an internal feature fusion module that fuses each stage's local and global features to improve feature extraction efficiency, and an inter-stage feature fusion module that fuses each stage with the condensed features of the previous stage to reduce information loss across iterations. Finally, we evaluate our method on a synthetic dataset and use real, complex rain images for qualitative analysis. In addition, we combine high-level perception tasks for autonomous driving, i.e., object detection and semantic segmentation, to compare perception performance before and after rain removal. Experimental results demonstrate that our model not only outperforms existing efficient rain removal networks with a noticeable improvement in visual quality, but also significantly enhances the perception performance of autonomous driving in rainy weather on both the object detection task and the semantic segmentation task.

INDEX TERMS Deraining, PGD, deep unfolding network, autonomous driving in rain.
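To make the unfolding scheme concrete, the sketch below shows the classical PGD iteration that such a network unrolls: each stage performs a gradient descent step on the data-fidelity term (the gradient descent module) followed by a proximal step (the proximal mapping module). This is a minimal NumPy illustration, not the paper's implementation: the degradation matrix `A`, the step size `eta`, and the soft-threshold stand-in for the learned proximal mapping are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    # Stand-in for the learned proximal mapping module;
    # in a DUN this step is replaced by a trained sub-network.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pgd_derain(y, A, eta=0.5, tau=0.05, n_stages=10):
    """Unrolled PGD for min_x 0.5*||A x - y||^2 + tau*||x||_1.

    y        : observed (rainy) signal, flattened
    A        : degradation matrix (assumed known here)
    n_stages : number of unfolded stages
    """
    x = y.copy()
    for _ in range(n_stages):
        grad = A.T @ (A @ x - y)   # gradient of the data-fidelity term
        z = x - eta * grad         # gradient descent module
        x = soft_threshold(z, tau) # proximal mapping module
    return x
```

With `A` set to the identity, the iteration converges toward the soft-thresholded observation, which is the closed-form proximal solution in that special case; a learned proximal module would instead encode an image prior for the clean background.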